From YouTube: 2020-11-12 meeting
B
Starting off with a summary of the P1 metrics issues tracked in this project, which spans a couple of repos; I should also remind people this is the spec repo, the proto repo, and the types repo. So for the P1s we've got 14 in to-dos, three in progress with associated PRs, and 13 have been closed. I should also point out that, let me see, 14 out of 15 are in the to-dos; I think there was one more that was tracked.
B
That's not related to metrics. Oh yeah, this one on maturity; that one I could pick up during the spec meeting. But the majority are metrics.
B
This is the diff from last week. I've also been tracking metrics-related issues in the collector and collector-contrib, scrubbing through the backlog of a hundred-some issues. We haven't finished scrubbing all of the issues yet, but of the ones we have scrubbed, there are 13 that have been marked as related to metrics.
B
None of them are P1s; they're all at the P2 or P3 priority level at this moment. But again, we haven't scoped through all 149 yet, and there are 10 in the contrib repo.
B
Triage is still ongoing for the collector SIG, and there's another ad hoc meeting this Friday after the spec SIG issue triage meeting. This is a one-time cost in order to get really good visibility into metrics-related issues for the collector. The spec issue triage is always ongoing; we've been keeping up with that.
A
Just let me know. Well, as long as you're going to stay in the meeting, I'd love for you to keep sharing. So thank you, Andrew. The first item on today's agenda I put up first because there's a topic that's been simmering for a while, and it's time to give it more attention.
A
I've also invited Theo Schlossnagle, who's here and whom I'd like to introduce: he's the founder, or a lead engineer or principal of some elevated sort, at Circonus, and also one of the co-authors of the Circonus histogram paper, which we've discussed a little bit. I think this is of really central importance, so I'd like to get into it; this is an issue that uk raised.
A
So I want to have a discussion among the group, and I'm not sure whether to hand this off to Theo or to uk at this point; maybe the two of you could decide.
A
Well, I'm sorry to make such a cold start here. Maybe what I should say is that we've been discussing this topic of parameterized histograms: things that we can encode using, say, an exponential encoding or a log-linear encoding or something like that, which ultimately represent ways to compress the encoding of a histogram. For a while we discussed the DDSketch histogram, and then...
A
At the same time, we've looked at the OpenMetrics group's work on this topic, and then it occurred to me that we have a Prometheus compatibility problem here, which is that many of our customers are going to consume this data through a Prometheus pipeline. When that happens, we can't necessarily guarantee good results with any old parameter values for a histogram, and that led me to look at the Circonus histogram.
E
Sure, so for some context: we've been pulling histogram-centric data into our platform at Circonus for about nine years, and we have trillions upon trillions of measurements stored there; sometimes as much as a trillion a minute gets stored as histogram data.
E
So some of the lessons we learned informed what we built as far as creation, transport, ingestion, and storage; I won't go into visualization issues at all. But from the ingestion pipeline, and from recording this stuff in software, there were a couple of considerations that became really apparent. Some of them were surprising.
E
The first thing is that people don't know what histograms are. The primary thing people think about with histograms is something like the 99th percentile, or the median, or maybe a population distribution or a CDF if they're very sophisticated, but those things aren't histograms. Those are things that you can see from a histogram.
E
A histogram: you just have to go back to, like, third grade and think, hey, I've got these bins and I've got samples in each bin. How many people got an A on the test? How many people got a B on the test? That's what a histogram is. The real challenge we have in our computing environments is that, because people don't understand them, when they have to choose bin boundaries they inevitably choose them incredibly poorly, and they also choose them incompatibly.
E
So you have two people in an organization who have different ideas about what their latency profiles might be for an endpoint. They want to record those in a histogram, so they'll choose what they think are decent bin boundaries. It turns out they're usually not data scientists doing this, and they don't have a lot of scientific experience, so they don't do controlled studies and things like that.
E
Where
that
I
mean,
if
you
look
there's
a
lot
of
there's
patents
on
how
you
actually
choose
bin
boundaries
for
for
certain
experiments,
none
of
that
stuff
ends
up
being
used
and,
what's
worse,
is
that
these
two
groups,
inside
of
an
organization,
will
choose
different
bim
boundaries
and
then,
when
you
have
to
do
analysis
over
a
combined
set
of
data,
there's
no
way
to
do
that
without
an
incredibly
complex
statistical
process
that
introduces
error
in
ways
that
are
hard
to
quantify
and
hard
to
understand
if
you're,
not
a
statistician.
E
So
what
we
found
was
that
one
you
got
to
take
the
option
options
away
from
people
and
give
them
something
that
is
robust
enough
and
good
enough,
so
that
they
can
kind
of
have
one
thing
which
is
an
on
switch.
They
would
like
to
use
them
or
not
use
them
and
then
be
able
to
build
consistent,
coherent
value
across
everything
that
everyone
collects
everywhere
and
the
things
for
doing
that
are
lack
of
parameterization
and
bin
values
that
have
the
probability.
E
Sorry, the property of being mergeable. The mergeable histogram stuff ends up being a real game changer for the end-user experience, the end user being an operator or data scientist or somebody trying to make sense of this data. That mergeability removes a huge amount of sophistication that becomes required later.
E
If you don't do it... So I think there are a lot of histogram technologies out there that sort of have this property. In particular, DDSketch has it; it is parameterizable, but in the papers they choose a parameter and they stick with it. I guess ours is parameterizable as well, but we dictate which parameters they are, so you don't really get to choose them.
E
The HDR histogram has support; the only parameter that's really relevant there is significant digits, which poses problems if someone tunes it up or down. And then you've got two other things which are really not designed for the use cases of telemetry data, which are t-digest and the higher-order moment sketches from Bailis.
E
Both of those are really designed to do highly accurate approximations of a data distribution for making online decisions: for example, guessing at how large a kernel density for an index in a database is, so you can guess whether it would be a good idea, or what the cost would be, to use an index for a query, that sort of thing. They're really not designed to do time-series-based or over-time distribution analysis and data extraction from distributions.
E
So you really want something with explicit binning. So that's kind of the overview of the considerations that we have found to be really, really important. And then, when comparing the ones that fall into that category: I mean, obviously I'm super biased towards the Circonus one. We have some really great results with it, and it's open source. The one thing that's really useful about the Circonus one is that it works for embedded systems that are not on, like, Linux, that are on microcontrollers.
E
So if you're taking data measurements, for example in our customer world, at 10 kilohertz off of a wellhead sensor in a field that's doing fracking, and you're taking pressure sensors, the tech that's on that thing is not like a Raspberry Pi. It is a little tiny microprocessor that uses some sort of RF back to the station, and it's low power. So being able to do all of this stuff in a really tight computational space...
E
...from an instruction point of view, and from a layer-one, layer-two cost point of view, is really important. So those are sort of the considerations. There is another consideration, which is non-technical, in that there are patents floating around this space, and the implementations in question may or may not violate those patents, regardless of whether the implementer owns the patent. So that is a general overarching worry in this space; this is true in the database space too.
E
You see it all the time where, like, database planners can't estimate something for a certain index because that method is actually patented by somebody else. So that's a consideration. That's probably not a consideration for this group, but it would probably be one for the legal governance of whoever is going to do the adoption.
E
I think that covers all the bases of my opinions on histograms, but I'm happy to answer any questions anybody has about histograms or data collection or volume.
D
So, good, okay. My comment on this is: as a standard, we want to maximize the interoperability between histogram producers and histogram consumers; yes, that's what a standard is about. So along those lines: I have a proposed PR for the protobuf protocol to transport histograms, and in that PR I specifically treated the different histogram types as a compression method for the explicit bounds.
D
So the logic goes like this. Everything starts from the explicit bounds, which are basically a list of doubles; that is basically the least common denominator, if you will. If everything else fails, you explicitly define the bounds: this boundary has a count of this, that one has a count of that. That's the explicit bounds. So in my PR, everything else is designed as a derivative of the explicit bounds. For example the simplest, the linear bounds, is basically: if you see a pattern, say 10, 20, 30, 40...
D
...there's no need to repeat this 500 times; you just say the width is 10 and it repeats 50 times, something like this. Similarly, I have defined linear, log, and exponential: if every bound is 1.1 times the previous one, you can declare that it's exponential.
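A minimal sketch, in Go, of the parameterizations being described (hypothetical function names, not the proposed protobuf messages): a linear or exponential rule can stand in for the full list of explicit bounds it expands to.

```go
package main

import "fmt"

// LinearBounds expands a linear parameterization (start, width, count)
// into the explicit list of bucket boundaries it stands in for.
func LinearBounds(start, width float64, count int) []float64 {
	bounds := make([]float64, count)
	for i := 0; i < count; i++ {
		bounds[i] = start + float64(i)*width
	}
	return bounds
}

// ExponentialBounds expands an exponential parameterization: each bound
// is `growth` times the previous one (for example 1.1, as mentioned above).
func ExponentialBounds(start, growth float64, count int) []float64 {
	bounds := make([]float64, count)
	b := start
	for i := 0; i < count; i++ {
		bounds[i] = b
		b *= growth
	}
	return bounds
}

func main() {
	fmt.Println(LinearBounds(10, 10, 4))      // [10 20 30 40]
	fmt.Println(ExponentialBounds(1, 1.1, 4)) // approximately [1 1.1 1.21 1.331]
}
```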
D
Then I defined the log-linear, which is a hybrid of log and linear: at the top level you have exponential bands, and within each exponential bucket you define multiple linear sub-buckets. I believe the Circonus histogram could be encoded in the log-linear format. So in this protocol...
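As a rough illustration of the log-linear idea (a hypothetical helper, not the PR's encoding): locate a value by its exponential bucket first, then by a linear sub-bucket inside it.

```go
package main

import (
	"fmt"
	"math"
)

// logLinearIndex returns the (logBucket, subBucket) pair for a positive value v,
// with exponential buckets of ratio `base`, each subdivided into `subBuckets`
// equal linear slices: exponential on the outside, linear on the inside.
func logLinearIndex(v, base float64, subBuckets int) (int, int) {
	logBucket := int(math.Floor(math.Log(v) / math.Log(base)))
	lower := math.Pow(base, float64(logBucket))
	upper := lower * base
	sub := int(float64(subBuckets) * (v - lower) / (upper - lower))
	if sub == subBuckets { // guard the top edge of the range
		sub = subBuckets - 1
	}
	return logBucket, sub
}

func main() {
	// Base-10 exponential buckets with 10 linear sub-buckets each.
	fmt.Println(logLinearIndex(37.0, 10, 10)) // 1 3: slice [37, 46) of the decade [10, 100)
}
```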
E
So, some thoughts on that. Your initial comment was about the beauty of a standard being the interoperability. When I look at this problem, I'm looking at it from an end-user-experience point of view, where the compatibility really needs to be in the data model, not the protocol. Clearly you need compatibility there too, but the data model also really needs kind of forced compatibility, and the problem with allowing the buckets to have arbitrary boundaries is that you can't translate between two of them.
E
So if you have one that goes from 10 to 20, 20 to 30, 30 to 40, in units of 10, but you have another coming in from 15 to 25, 25 to 35, 35 to 45...
E
You
can't
really
do
anything
with
those
two
data's.
You
can't
translate
them.
They
are
not
compatible
in
a
way,
that's
useful
to
end
users.
They
aren't
mergeable
and
by
merging
them.
If
you
force
merge
them,
you
end
up
having
bucket
enlargement.
That
is
almost
unquantifiable,
especially
in
the
in
the
case
that
I
just
gave
it's.
Actually,
it
bleeds
into
a
single
enormous
bucket
because
they
all
overlap
with
each
other.
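A small sketch of why shared boundaries matter: with identical bucket boundaries, merging is just element-wise addition of counts; with mismatched boundaries there is no equivalent lossless operation. The types here are hypothetical, not any particular library.

```go
package main

import (
	"errors"
	"fmt"
)

// Histogram holds explicit bucket boundaries and one count per interior bucket.
type Histogram struct {
	Bounds []float64 // sorted bucket boundaries
	Counts []uint64  // len(Counts) == len(Bounds)-1
}

// Merge adds the counts of b into a, but only if the boundaries match exactly.
// With differing boundaries the only options are lossy (re-binning, widening
// buckets), which is the interoperability problem described above.
func Merge(a, b Histogram) (Histogram, error) {
	if len(a.Bounds) != len(b.Bounds) {
		return Histogram{}, errors.New("incompatible bucket boundaries")
	}
	for i := range a.Bounds {
		if a.Bounds[i] != b.Bounds[i] {
			return Histogram{}, errors.New("incompatible bucket boundaries")
		}
	}
	out := Histogram{Bounds: a.Bounds, Counts: make([]uint64, len(a.Counts))}
	for i := range a.Counts {
		out.Counts[i] = a.Counts[i] + b.Counts[i]
	}
	return out, nil
}

func main() {
	a := Histogram{Bounds: []float64{10, 20, 30, 40}, Counts: []uint64{1, 2, 3}}
	b := Histogram{Bounds: []float64{10, 20, 30, 40}, Counts: []uint64{4, 0, 1}}
	merged, _ := Merge(a, b)
	fmt.Println(merged.Counts) // [5 2 4]
}
```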
E
It's really about the election to allow arbitrary bin boundaries. My big fear is that one vendor produces data with one set of bin boundaries and another vendor produces it with another, and then it locks the end user into using their platform, because there's no way to really translate between the two of them.
D
Yes, that is correct. The thing is, you are raising a very good point: in standards, that compatibility is at the data model level, so that's another level. I think even if you choose a data model, I believe at least one parameter will be required, and that is basically the resolution.
E
Yeah, so our experience with that is a little different. For our example, the smallest type of histogram that we support is 43,000 buckets, and it doesn't really take any memory; it's very, very small, because they tend to be very sparse. So you can work over trillions of data points and years of second-by-second data, and it fits in an L2 cache on most systems. It's really, really tiny data; that's the beauty of histograms and of not dealing with real samples.
E
It's a very navigable problem. I do hear the significant-digits concern; there are a lot of people who say, you know, two significant digits is not enough, I need three, or I need four. Our experience has been that two significant digits is actually enough for all of the use cases in the monitoring world, but I would concede that we haven't seen them all, and that there are absolutely cases where you might need three or four significant digits.
D
By that you mean decimal significant digits. So with two significant digits you get, let's say, roughly one percent relative error, right, about that. I think, especially in the telemetry world, if you are doing response time, one percent is more than enough. In fact, once I asked one of our newer customers whether five percent was good; I thought they might say no, but without even thinking they said yeah, sure.
D
So the question is the memory requirement for the one percent. Even at one percent resolution, you are going to need a certain number of buckets, depending on the range of your numbers from the smallest to the largest, assuming you are using a log or log-linear family of data model.
D
You can do, like, three percent relative error for a range, for what we call a contrast, of one billion; that is, if you are tracking a number from one to one billion.
D
You need 320 buckets for a three percent relative error. We think that 300 is kind of in the middle; the histogram is 300 buckets, so three percent relative error. But some other users or use cases might object; I think in DDSketch their preference is probably even higher precision.
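A back-of-the-envelope check of those figures, as an approximation rather than anyone's exact formula: with multiplicative buckets whose edges grow by a factor of (1+e)/(1-e), each bucket midpoint is within relative error e, so covering a min-to-max ratio (the "contrast") R takes roughly ln(R)/ln((1+e)/(1-e)) buckets.

```go
package main

import (
	"fmt"
	"math"
)

// bucketsNeeded estimates how many multiplicative buckets are needed to cover
// a min-to-max ratio (contrast) at a given relative error eps.
func bucketsNeeded(contrast, eps float64) float64 {
	growth := (1 + eps) / (1 - eps)
	return math.Log(contrast) / math.Log(growth)
}

func main() {
	fmt.Printf("%.0f\n", bucketsNeeded(1e9, 0.03)) // ~345, in the ballpark of the ~320 figure above
	fmt.Printf("%.0f\n", bucketsNeeded(1e9, 0.01)) // ~1036, the cost of 1% error over the same range
}
```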
A
I'd like to give Michael a chance to speak to this issue. It's great, but we're going to run over time pretty soon. Can Michael speak?
F
I can. I apologize, I'm a little bit under the weather today. For me, I've been following mostly in the GitHub PRs, and there's one where our team is working on the Java SDK in the OTel repository, so that should be readable.
F
The biggest thing that I want to stay ahead of is the exporter for Prometheus, like you were pointing out. So far uk's suggestion does continue to work for us, so I'm super happy there, but I'm curious about Theo's thoughts on what it would look like to export to fixed-bucketed histograms for time-series databases that simply don't support anything else.
E
So the beauty of that is that on the database side you're now into the storage area, right? You can do anything with that histogram that you want; you could convert it into a t-digest or a DDSketch or something like that, if you wanted to store it that way. Obviously you couldn't regurgitate it exactly, but...
E
I think it's important, one, to remember that if you have 300 bins in your histogram, you don't actually have to submit 300 data points back: the ones that haven't changed, the ones that haven't grown, the ones that don't have any data in them; zeros are pretty easy to communicate by omission. So what we see in general (I mean, like I said, you should read the circllhist paper, it's pretty digestible and very short) is that the typical histogram representation has a little over 43,000 bins.
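A tiny sketch of the zeros-by-omission point: only non-empty buckets are carried, as index and count pairs. Hypothetical types, not the Circonus wire format.

```go
package main

import "fmt"

// SparseBucket carries the count for a single non-empty bucket, identified by index.
type SparseBucket struct {
	Index int
	Count uint64
}

// ToSparse drops empty buckets; a histogram with 43,000 possible bins but only
// a handful occupied serializes to just those occupied entries.
func ToSparse(dense []uint64) []SparseBucket {
	var out []SparseBucket
	for i, c := range dense {
		if c != 0 {
			out = append(out, SparseBucket{Index: i, Count: c})
		}
	}
	return out
}

func main() {
	dense := make([]uint64, 43000)
	dense[120] = 7
	dense[9001] = 2
	fmt.Println(ToSparse(dense)) // [{120 7} {9001 2}]
}
```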
E
Ours isn't in protobuf, but that doesn't matter so much. We do have a protobuf implementation, and it ends up being around 300 bytes, 250 bytes, right? So it's just a really tiny representation.
E
We tend to encode that histogram on a single line: we base64-encode it and put it on a single line for exportation, which sounds horrible until you realize that it's one line of text, and then it's super easy to deal with or ignore. And if it's in protobuf, obviously you're not going to read any of that with a human anyway, so the encoding mechanism that was actually proposed isn't a bad one at all.
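For illustration only (standard-library calls, not Circonus's actual exporter): serialized histogram bytes can be carried as one base64 text line and decoded on the other side.

```go
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// Stand-in for the ~250-300 bytes of a serialized histogram.
	raw := []byte{0x01, 0x2a, 0x00, 0x07, 0xff}

	// One line of text: easy to embed in line-oriented export formats, or to ignore.
	line := base64.StdEncoding.EncodeToString(raw)
	fmt.Println(line)

	decoded, err := base64.StdEncoding.DecodeString(line)
	if err != nil {
		panic(err)
	}
	fmt.Println(decoded) // [1 42 0 7 255]
}
```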
E
If you do converge on a single universal binning scheme, which I actually think is the primary value of a standard (to me the standard isn't about what layer-7 protocol you're using at all), it's about this: me, as a consumer, when I monitor somebody's device and I get this data back, can I take the latency profiles for endpoints off Envoy and combine those with my AWS load balancer and my GLB load balancer? And the answer is, if those three different vendors chose different binning schemes...
E
...the answer is simply no, I can't do that. I can't compare, I can't contrast; I get completely cornered as an end user of that telemetry data. So to me, the power of the standard is forcing the vendors to actually talk around a data model that allows their users to be empowered to merge the data together from different vendors.
D
So I think the big question is, let's say the data model is the right word: do we, as OpenTelemetry, want to standardize on a data model? Personally, I think at this stage that is a little bit too heavy-handed, especially since in the real world there is a wide amount of Prometheus-based data, whether as source or as producer and consumer; it's already there, and Prometheus by nature is explicit bounds only. What I've been thinking about is this: as a consumer...
D
If
you
receive
a
message
and
does
not
fit
your
own
data
model,
you
will
just
have
to
do
the
the
the
introduce
our
artifact
split,
split
bucket
thing,
assume
the
linear
distribution
within
a
bucket
split
split,
the
input
bucket
fit
into
your
bucket
ban.
This
would
be
inevitable
if
you
want
to
do
the
data
model
transition
and
given
enough
buckets
the
artifact
or
error.
This
introduces
should
be
relatively
small.
If
you
have
say
300
buckets,
I'm
gonna
fit
it
into
my
400
bucket
bands.
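A sketch of the split-bucket conversion being described, assuming counts are spread uniformly within each source bucket; the boundaries and proportional split here are an illustration, not a specified algorithm.

```go
package main

import "fmt"

// Rebin redistributes counts from source buckets (srcBounds has one more entry
// than srcCounts) into destination buckets (dstBounds), assuming a uniform
// distribution of samples inside each source bucket. Fractional counts are
// kept as float64 so the redistribution artifact stays visible.
func Rebin(srcBounds []float64, srcCounts []uint64, dstBounds []float64) []float64 {
	dst := make([]float64, len(dstBounds)-1)
	for i, count := range srcCounts {
		lo, hi := srcBounds[i], srcBounds[i+1]
		width := hi - lo
		for j := 0; j < len(dstBounds)-1; j++ {
			// Overlap of source bucket [lo, hi) with destination bucket [dlo, dhi).
			dlo, dhi := dstBounds[j], dstBounds[j+1]
			overlap := minf(hi, dhi) - maxf(lo, dlo)
			if overlap > 0 {
				dst[j] += float64(count) * overlap / width
			}
		}
	}
	return dst
}

func minf(a, b float64) float64 {
	if a < b {
		return a
	}
	return b
}

func maxf(a, b float64) float64 {
	if a > b {
		return a
	}
	return b
}

func main() {
	// Source buckets [10,20,30,40) with counts 6, 3, 9; destination grid offset by 5.
	fmt.Println(Rebin(
		[]float64{10, 20, 30, 40},
		[]uint64{6, 3, 9},
		[]float64{5, 15, 25, 35, 45},
	))
	// [3 4.5 6 4.5]: the artifact of the uniform-within-bucket assumption.
}
```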
D
...it's dense enough. It's just like in, say, video processing: I received a 300-pixel picture but my native resolution is 720, so I'm going to scale it to 720 anyway. If the pixels are enough, at a hundred-or-something level, from the resolution point of view a human eye still sees the same picture on your screen.
E
I would say that the Prometheus folks, given the recent video from Björn at Grafana Labs, clearly realize that their boundary system is not going to go forward with them. So, you know, they already hate what they have. So trying to accommodate what they have, as opposed to embracing where they're going, or perhaps helping them standardize on something that everyone else standardizes on as well...
E
There is no standard, and I think our industry is at a loss because we don't have a standard, as long as that standard is open and everyone can implement it without any royalties.
D
If we are going that route, I would say a promising candidate is the log-linear, and within the log-linear family I would promote the binary scale, which is essentially the bit-wise, fast option of the DDSketch. Honestly, within New Relic, our internal histogram (we don't have it open sourced or anything yet; we use it internally) was forced into the binary log-linear family. The reason is that it's very fast: you just do a few binary bitwise operations.
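A minimal sketch of that binary-scale fast path: for base-2 exponential bucketing, the bucket index is essentially the float's binary exponent, which math.Frexp (or reading the IEEE-754 exponent bits directly) gives you without any log calls. Hypothetical helpers, not New Relic's or DDSketch's actual code.

```go
package main

import (
	"fmt"
	"math"
)

// base2Bucket returns the power-of-two bucket index for a positive value:
// v falls in [2^k, 2^(k+1)) for the returned k.
func base2Bucket(v float64) int {
	// Frexp gives v = frac * 2^exp with frac in [0.5, 1), so exp-1 is our k.
	_, exp := math.Frexp(v)
	return exp - 1
}

// base2BucketBits does the same by reading the IEEE-754 exponent bits directly,
// the "few bitwise operations" fast path mentioned above (ignoring subnormals).
func base2BucketBits(v float64) int {
	bits := math.Float64bits(v)
	return int((bits>>52)&0x7ff) - 1023
}

func main() {
	for _, v := range []float64{0.75, 1, 3, 1000} {
		fmt.Println(v, base2Bucket(v), base2BucketBits(v))
	}
}
```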
E
Yeah, two caveats with that: it does pose challenges on embedded systems that don't have floating point, because you don't get a floating-point number in the first place, so you can't do bit-wise operations on a floating point that isn't there to be consumed in the first place. I don't know if that's important for telemetry purposes.
A
Yeah, at this point I think we should table this discussion. There is a legal question lurking here, and I don't want to deny that. I'd like to indulge myself in asking you questions, Theo, because you're here, but I also think we should use this time to move on to the other items on the agenda. So really, thank you for that; that was extremely useful.
A
I think the group has some work to do, and maybe we can call on you again; I'd like to invite you back at some point, maybe after we've worked out a position here. Sure, just...
A
Yeah, it really was, to me. Thank you so much. Okay, group: I had written down a question mark to talk about Prometheus in the collector. Is there anyone who would like to take that topic up? Otherwise... I don't know if there is an actual person to speak on that topic right now.
A
We may have to work on the AWS team and getting them to come to this meeting. I know there is some work that I'm privately aware of, and some decision making that has been happening with regard to the Prometheus receiver, but since nobody's here to represent that, I will let us move on; I think next week that'll come up again. Joshua, would you like to give us an overview of the current state of the many different semantic conventions? Yeah.
G
Definitely, there's plenty for us to talk about, and hopefully we can assuage any fears. My intention is not to drill into each of these during this meeting; I just wanted to give a broad overview of the status here in the agenda, and there were a few items that I think would be helped by discussion. This first one: there was a question around metric instrument units, and Justin, I think that was yours?
A
This one is on the PR. I reviewed this last night, and I think it's a really exciting introduction of a little bit more theory to this specification that we've been writing. So I was basically writing encouraging remarks, because when you dive into the UCUM specification you find all this new stuff that we haven't really been talking about, like metric versus non-metric units, and proper units versus logarithmic and interval units, which are something we've talked about but which it's really hard to find good resources on. So anyway...
A
I'm excited that Justin's writing this, and I sort of put some tangential comments in.
H
Yeah, thank you for that. This was actually a really tough PR to get together; as I read more of that UCUM specification, my PR just kept getting bigger and bigger and bigger, and eventually I just had to pare it down to something reasonable that described what we actually wanted.
I
Yeah, I'll give my brief overview, and maybe Josh can as well. First, I saw your comment, Josh, wondering whether we should allow these changes for GA; you commented on the issue. That seems like a good idea from my perspective, given how murky it still is exactly what we're going to do here. And then I also added a kind of clarifying comment on the PR itself, per the discussion you and Bogdan had last week.
I
No, it's all good, yeah. You just mentioned this .io convention, which I hadn't seen before, so I've responded to your comment there. I was a little unclear, and maybe there's something I need to go read, on how the process works with OTel: I did have two approving reviews here, but when I talked to Tyler about it he said, oh no, we usually require more than two. So maybe there's an overarching process thing that I need to understand better about OTel PRs.
A
Yeah, I think Tyler's probably right; it's a little hazy. I now have the power to merge all these things, and I was asked, when I was granted that power, to be careful. So I stay a little cautious if something looks like it hasn't had much real scrutiny, even though it has a few approvals; I'm not looking to merge things in a hurry just to make it...
A
...you know, fewer open PRs. So I would invite Aaron to comment on the question about the semantic convention, this .io one, which he was going to write from the collector, maybe. Hey, I haven't actually taken a look at this, my apologies. Yeah, it was just an observation I made; I actually see it much like Chris.
A
I want to point out that I approved this like a week ago, and now I'm raising concerns because I reread it, you know; so I'm glad I didn't merge it with just two approvals. Yeah, this convention, the .io metric name convention, says that if you have something that's bidirectional, we're going to use a single metric name and then labels for the directionality. I saw that pattern on the second read, so I think it's a good idea, unless people disagree.
G
Okay, so the other one that I think we could look at quickly is the database metrics PR. There's a question there about the namespacing, and I think, Josh, you mentioned adding the database type into the namespace hierarchy. I think that's a good idea, but I just wanted to make sure that we're all on the same page there.
A
Yeah, I just got a whiff of uncertainty about whether there's something that could really be generalized across vendors at this level; I don't know. I remember, in sort of undergraduate computer science, learning about how you monitor caches, and one way would be to talk about compulsory misses and that sort of thing. I don't know if there's a better way to do this.
H
I think, yeah, I'm on the fence about these metrics in general, about how much value they really have. I included them, as I've said in this PR, because one library is already generating them.
G
Yeah, Andrew, I think that's all I had. There are some open questions, and I think in general all of these could use more review, so I just wanted to put that out there.
A
All right, well, thank you very much, Josh. So there are two items I see on the agenda here, and I don't actually know which one is more important now that I see the second one. May I ask who put up the final bullet? Since I know who added the first one there. Yeah.
J
Yeah, so I am kind of confused here: what happens if I don't set my start time for the integer sum, or cumulative, metrics? Also, I found some examples where some of the receivers are setting the receiver start time as the magic start time for cumulative metrics. So is that idea good, or what's the recommendation here?
A
So my inclination is to say we should require a non-zero start time, but I think it's a gray area for deltas; it's cumulative where this really matters.
H
Yeah, I think an issue is... in order for this... yeah, I agree.
A
There may be a collector issue already; I don't know if that's been triaged all the way. Andrew could say, perhaps. I know it's an issue in the collector repo right now.
A
Well, the protocol buffer library doesn't really have any statements about requiredness, but I think our protocol is now very, very careful to say what's required. I think the protocol says it's required, but I'm...
A
Right, and that is exactly the reason why we might want to say you're allowed to have a start time of zero. There are, I guess, some protocols...
H
Yeah, I think that's also another question: what is the reset? Is it the reset of the thing that you're measuring, or is it the reset of the telemetry system itself? I think in Prometheus there's a distinction there: if there's a resettable value, they wouldn't be using a counter at that point, I think. But I think this just brings up a good category of questions, including whether it is a required field, and making that clear in the OTLP.
H
If it is... I think it makes it somewhat clear that it is required in certain cases and is not in others, with the end time being the thing that is actually required on every metric. But it's been a while since I've looked at the OTLP, so maybe I'm not the most up to date.
A
So if we do allow start time zero, I think we're going to have to specify what we mean. There is a way to specify this so that things aren't completely bad: if you say something like, processes should use a resource that uniquely identifies them, such that when they restart they will have a new resource, and therefore they will never be combined, in an OTLP sense, into the same metric time series.
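One hedged reading of that suggestion, purely as a sketch (the service.instance.id key is an OpenTelemetry resource semantic convention, but the code below is generic and does not use the SDK API): give each process start a fresh identifier in its resource, so a restarted process can never be folded into the previous process's cumulative series.

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// newInstanceResource sketches a per-process resource: the random instance id
// is generated once at startup, so a restarted process carries a new identity
// and its cumulative series cannot be merged with the old process's series,
// even if both report a start time of zero.
func newInstanceResource(serviceName string) map[string]string {
	id := make([]byte, 16)
	if _, err := rand.Read(id); err != nil {
		panic(err)
	}
	return map[string]string{
		"service.name":        serviceName,
		"service.instance.id": fmt.Sprintf("%x", id),
	}
}

func main() {
	fmt.Println(newInstanceResource("checkout"))
}
```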
H
I think that sounds like a really great option, because I think that conveys a lot of the intended meaning, and it clarifies what that actually would encode.
A
Well, now we're ahead of time on the schedule, so that's great, I guess. So we're left with this one item and, for better or for worse, Bogdan's not here. Bogdan has a sort of different position in mind on this issue, so it's going to be hard for me to have a debate.
A
The high level, just as an overview for everybody who's new here, is that we have used the term label in the metrics world to mean a dimension value that we aggregate over, and we've used the term attribute on the tracing side of our specs to talk about a key-value that logically applies to a span, and we more or less see these as the same logical concept.
A
But the terminology is confusing when users come to us and learn this for the first time, so we've talked about getting rid of the term label, and that's partly being forced by the trace spec having been frozen; at this point the word attribute is pretty ingrained.
A
There
is
a
lengthy
discussion
in
an
earlier
issue
948
there.
If
anyone
wants
to
read
this,
but
the
I
think
the
the
direction
we're
leaning
here
is
to
try
and
solve
this
by
getting
rid
of
the
word
label.
But
then
all
these
technicalities
arrive
arise
and
it's
unfortunate
that
the
book
is
not
here
because
well,
the
technicalities
have
to
do
with.
A
How
do
you
represent
things
like
map,
valued
attributes
or
list
valued
attributes
which
are
logically
it's
supportable,
but
just
sort
of
ridiculously
expensive?
And
that's
one
concern
and
then
I
guess
there's
this
question
over
type
value
type.
So
if
you,
if
you
give
me
a
label
value
or
a
resource
value
that
has
an
integer
or
a
double
value
or
some
other
obscure,
you
know
non
non-string
value
and
then
I
export
that
through
otlp
and
then
I
export
that
into
prometheus.
What
am
I
going
to
get
like?
H
It seems like maybe there are other examples besides the OTLP-to-Prometheus endpoint, but that seems solvable, and I think it should try to conform to the Prometheus specification, or the OpenMetrics specification, as best we can, just based on the fact that you can represent numbers as strings, and you can represent all of those data models as strings; it's just making sure that it's formed into some sort of similar, defined encoding.
H
I guess, is the proposal to not allow anything but strings in the attributes, as the converse?
E
I have some insight on the list-valued versus map-valued attributes. At Circonus, all of our labels, as you call them (we call them tags), support list values, where a key can have multiple values. So, for example, if you're monitoring an application that is in, I don't know what it would be, say C and Lua, you tag it with language: C...
E
I will say that it wasn't worth it. We support that, but the lack of support throughout the industry for that sort of thing makes list-valued things almost incomprehensible across different platforms and technologies. So if it's not a dictionary, it just tends to fall apart somewhere, and it's not worth supporting.
A
Yeah, this arises from OpenTelemetry having its umbrella covering traces and logs, basically: in those signals it tends to be a little bit more normal to use list values and map values. I've been arguing for the JSON data model the whole time, and we still don't have null values, which bothers me. But...
A
Yeah, there's definitely... we do not have a desire to logically treat the semantics of multi-valued attributes. So we've been saying: if it's a list, then you encode it as, you know, a comma-separated list, because you have to coerce it into a string. But I appreciate your observations on JSON there, Theo.
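The coercion being described is mechanically trivial; a sketch using only the standard library (the delimiter choice, not the mechanics, is the open question):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// A list-valued attribute coerced into a single string label value.
	languages := []string{"c", "lua"}
	label := strings.Join(languages, ",")
	fmt.Println(label) // "c,lua"
}
```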
A
I think I'm inclined to say we should end this meeting, unless someone else has a topic, just because we couldn't really have the conversation about label versus attribute without Bogdan here.
H
Well, I mean, so, partly, right. I think that's useful from a spec standpoint, or even from an internal-to-the-project standpoint, but...
H
...the thing that is really driving it is the end-user perspective, and it's that when you have a new person come to the project and they have to learn the difference between what a label is and what an attribute is, and then they find out that they're really similar and that they have a lot of overlap, it's a lot of mental strain that is not needed. The open-source project itself is already a pretty massive thing to try to learn, and adding to that is not helpful.
H
I think they're pretty related problems. The way I see it, there is some cognitive overhead to having two terms that you have to learn, the label and the attribute, but we have a clear definition of what a label is and what an attribute is. If we consolidate down to just using one term, then you have to know that, depending on the context in which you're using an attribute, the values that you're allowed to use are different, and I'm not sure if that is more or less cognitive overhead.
I
To me that seems like a path forward, at least. Alternatively, you have to create a separate concept for the name of things that can be shared across, you know, tracing and metrics, and then separately say: well, a label is only allowed to be a string, and an attribute is allowed to be this other set of types.
A
After all, we apply resources at the source: you have to put those resources into your telemetry, and then at some point later you're going to have to export that data to a traditional system that has no such concept as a resource. It's at that moment that you're going to have to take your resource value and slap it onto your label, into your label position in some other protocol, and it's natural to have resources there.
A
So now you're saying, well, resources can only be strings, because we might want to turn them into metric labels, and I don't think we want to say that. I tend not to worry about the sort of problem where somebody puts up a map-valued or list-valued resource and then configures a resource pipeline in some OTel collector that says, take that resource and apply it as a metric label, and now you have a metric label that's a map value. I'm not worried about that case.
A
It doesn't bother me; it shouldn't go through, it should come through as some sort of error, or some sort of "unrepresentable" string, as I suggested in one of my comments. It's just not a big deal to me, sorry. But what's really a big deal is that we need to be able to talk about putting resources into label position.
C
So, on a different topic, if we want to pause on this, Josh: this is John. I wanted, at some point, to dig in more to the SDK specification, because I firmly believe that the Java implementation is so far different from what is being specified that we need to do a rewrite.
C
I have been doing a clean-room rewrite of the Java SDK in my spare time (I mean "spare time"; I still get paid for it), based on your specification, which is actually super useful, both in vetting the specification as being something that's consumable, and also just in, you know, seeing whether you can work from it and whether it's viable outside of the Go language that it was based on. So far I would say it's super awesome, but holy moly, I need the controller...
A
To support both push and pull. And I realized that the code in the OTel Go SDK just can't do both of those at the same time, but I don't think it's going to require many changes to have the pull controller generalized into a push controller; essentially, you can combine these behaviors actually pretty well. So I appreciate your question, and I think you're right to say that we haven't written that down, and, as this issue points out, we actually aren't finished with designing it.
A
I think this is supportable; I was planning to look at that issue. I think we should be able to have one SDK support both a Prometheus exporter and an OTLP exporter at the same time.
A
And maybe there's some configuration parameter that will be needed to help clarify this. Right now the pull controller in Go has a threshold, a timing value, that says: if you've recently computed a snapshot of the current state of things, don't recompute one just a second later; so there's a minimum. But otherwise, today, the pull controller will wait until you come along, so if you don't scrape that target for an hour, it's going to not do any collection for that whole hour.
A
So the way to hybridize these is to have the pull controller have a timer attached to it: we're going to pull whenever requested, but we're also going to attach a timer that makes sure there's a minimum frequency at which we collect, and then trigger the exporters that are push-based at that moment. I guess then they have their own process... they have their own processor, then, that...
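A rough sketch, and not the OTel Go SDK's actual controller, of the hybrid being described: collection is served on demand for pull, rate-limited by a minimum interval, while a background timer guarantees collection (and push export) even when nobody scrapes.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// controller is a hypothetical hybrid: Collect serves pull (scrape) requests
// but is rate-limited by minInterval, and a background ticker guarantees a
// minimum collection frequency that also drives the push exporters.
type controller struct {
	mu          sync.Mutex
	lastCollect time.Time
	minInterval time.Duration
	pushExport  func()
}

// Collect snapshots instrument state unless a snapshot was taken very recently.
func (c *controller) Collect() {
	c.mu.Lock()
	defer c.mu.Unlock()
	if time.Since(c.lastCollect) < c.minInterval {
		return // reuse the recent snapshot
	}
	c.lastCollect = time.Now()
	// ... gather instrument state here ...
}

// Run ensures collection and push export happen at least every maxIdle,
// even if no scraper pulls during that window.
func (c *controller) Run(maxIdle time.Duration, stop <-chan struct{}) {
	t := time.NewTicker(maxIdle)
	defer t.Stop()
	for {
		select {
		case <-t.C:
			c.Collect()
			c.pushExport()
		case <-stop:
			return
		}
	}
}

func main() {
	c := &controller{minInterval: time.Second, pushExport: func() { fmt.Println("push") }}
	stop := make(chan struct{})
	go c.Run(10*time.Millisecond, stop)
	time.Sleep(35 * time.Millisecond)
	close(stop)
}
```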
A
They shouldn't need their own processor in this case, because they're both going to export the same cumulative state, and the way the Go processor, the basic processor, is designed, it can actually (and this is not required, I don't think it's a requirement) keep two state values when you're doing that conversion from delta to cumulative, so that you could have two exporters attached: one doing cumulative export and one doing stateless.
C
I know, I know, but this is super useful, and I'd like to spend more time, probably, chatting about it, or reading your extremely well-written specification, which was incredibly useful. It was very easy to write code based on it, based on what there is; the rest of it I had to kind of make up, which is fine. Well, that's really...
A
So, since you gave me the opportunity to say it: there are two open spec SDK PRs that I've been working on. You've commented on them, John, but I'm looking for others before we get those merged. Tyler pointed out a terminology problem which I intend to fix in place in these open PRs: we're going to get rid of the term "export kind" and use "aggregation temporality", which is much more established. I don't know why there was a conflict; that was a mistake. Thank you all. Cool, thanks, Josh, have a great week, see you later.