From YouTube: 2020-10-09 meeting
C
Cool, glad to see so many people. I think we should start with the first item on the list here. I know the group, but I don't know which one of you would like to speak, between Wesley, Yang, Gavin, whoever's on that group. Let's talk about statsd, yeah.
B
I think I will go, because I don't see Ying here; he's actually the expert, but if he's not here, then I'm hopefully the expert anyway, so yeah. We wanted to ask a couple of questions about statsd and how we should convert it in the receiver. Right now the implementation in the receiver is very basic; it still needs to be expanded quite a lot. In statsd there are gauges, and the gauges have three different ways they can be sent.
B
You can directly send the value, as I understand it, or you can basically do an increment of the value: as it shows there, minus 10 or plus 4, so you can just send a diff for the value. Right now we have actually only implemented directly sending the value. Anyway, the question was: what OpenTelemetry type do we convert this to? Is there any sort of issue with that?
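For reference, the three gauge forms being described can be sketched as a tiny parser. This is an illustrative sketch of the common (unofficial) statsd wire format, not the receiver's actual code; the function name is made up.

```python
def parse_gauge(line):
    """Parse a statsd gauge line like 'temp:10|g' or 'temp:+4|g'.

    Returns (name, value, is_delta): is_delta is True when the client
    sent a relative adjustment (+4 / -10) rather than a direct value.
    """
    name, rest = line.split(":", 1)
    value_str, type_code = rest.split("|", 1)
    if type_code != "g":
        raise ValueError("not a gauge: " + line)
    # A leading '+' or '-' marks a delta; a bare number sets the value.
    is_delta = value_str[0] in "+-"
    return name, float(value_str), is_delta

# The three ways a gauge can be sent:
parse_gauge("temp:10|g")   # direct set        -> ('temp', 10.0, False)
parse_gauge("temp:+4|g")   # relative increase -> ('temp', 4.0, True)
parse_gauge("temp:-10|g")  # relative decrease -> ('temp', -10.0, True)
```

Note that a negative direct set cannot be expressed in this syntax, since a leading minus is read as a decrement; that quirk comes up again later in the discussion.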
B
Unfortunately,
the
guy
who
originally
wrote
the
code
is
no
longer
with
us.
He
left
aws
and
he
no
longer
works
on
open
telemetry.
So
new
folks,
like
myself,
are
having
to
try
to
pick
it
up
and
we're
very
clueless.
B
Unfortunately,
we're
super
new
at
this,
but
yeah
it
looked
like
there's
like
an
open,
telemetry
type
that
does
the
kind
of
the
increment
that's
sending
the
diff,
but
we
weren't
sure
about.
Also.
Can
you
also
sometimes
send
like
directly
then
set
the
value
anyway?
Yeah.
That's
the
first
question.
C
Well, I'd be happy to answer with what I think. I have to say that there's no real official statsd specification, so this information that you put here, I mean, perhaps we can take a look at this link. It's the statsd org on GitHub, but I'm not sure who owns that org, how active it is these days, or in any sense whether this should be considered a definitive spec. It's not quite that.
C
So unfortunately, I think this becomes a question of how different vendors and back-end systems are going to behave when you have some uncertainty about metric kind: it looks like it's a gauge, because you said "g", but you're trying to use an interface that is very strongly suggestive of a counter.
C
I would prefer to see those turned into up-down counters, since that's what they're there for. I'm curious, I know Michael's on the call, who could talk about what Datadog thinks about this in particular. Actually, I would like to know.
A
Yeah, I actually agree; we discussed this a little bit. Datadog only allows setting the value directly, not increment and decrement; we expect increment and decrement to be how you interact with a counter. So this is more of a statsd thing than a dogstatsd thing.
C
And I know that this could create trouble in some systems. I've spent some time looking at the Stackdriver protocol recently, and if you have a system that's very strict about metric kinds and you're trying to write a point, you're saying:
C
I think this is a counter, and the database thinks it's a gauge. Things get really, really confusing when you have this ambiguity over metric kind. But at least I would say, as far as the current specs we have, we can either just disregard it (like Michael said, I don't think this is widely known behavior), or we could turn those into up-down counter events if they want.
B
Okay, so wait, I want to be specific.
B
If you're setting it directly and it's actually a real gauge, not actually a counter, then is that still going to be an up-down counter type, or is it going to be a different OpenTelemetry type?
C
Well, the OpenTelemetry protocol doesn't have these types. Those are instrument types in the API, but we have documented the transformation from instrument types into protocol types. So forgive me, I'm being a little bit loose with terminology.
C
When we talk about the protocol, we have turned these things which are called counters into what are called sums in OTLP, and if it's a gauge, the traditional treatment of a gauge is what we think of as last value. So that's what these are by the time we talk about the protocol.
B
Okay, so if I'm understanding you correctly, and sorry, we'll have to go into the code and really check, because my understanding is too basic right now, but you're saying there won't be a problem. Basically, depending upon what messages we see, we can correctly do the right thing when we convert to OpenTelemetry, and there won't be some weird thing where the type will actually be changing, like if you're sending me one type of value and then suddenly the statsd implementation that the client has starts sending the other type of value, and the type of the metric changes.
C
The thing about breaking was more of a vendor concern. I know Lightstep's system has a little bit of trouble with metrics changing type, but I shouldn't have said that at all. There really is no ambiguity in how to handle these relative values, these things that look like gauges but are really increments or decrements.
C
I don't think there is, and the way I would summarize it is that statsd messages are fundamentally delta temporality; we're going to turn them into sums. These increments and decrements are going to become sums, but they're deltas. If the gauge is just a gauge, the name in the protocol is actually gauge; we don't have that word in the instruments, but we do have that word in the protocol. So for your first example, this is the classic gauge, and they become gauges in the protocol; the plus and the minus can become OTLP non-monotonic sums.
C
Here's where the trick comes in: if you see this plus 4 and you want to do something like this, the moment you see a plus 4, should I assume it's never, ever going to be negative, or not? If I call it monotonic just because it doesn't have a minus sign, I might get myself into trouble later. So that's where there's some ambiguity, but it's sort of like, I don't know what else to do.
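The mapping just described can be sketched as follows: direct gauge sets become protocol gauges (last value), while plus/minus adjustments become non-monotonic sums with delta temporality. The dictionary shape here is an illustrative stand-in, not the actual OTLP message structure.

```python
def convert(name, value, is_delta):
    """Illustrative statsd-gauge to OTLP-like point mapping."""
    if is_delta:
        return {
            "name": name,
            "type": "Sum",
            "is_monotonic": False,   # a '+' today may be a '-' tomorrow
            "temporality": "delta",  # statsd messages are fundamentally deltas
            "value": value,
        }
    return {"name": name, "type": "Gauge", "value": value}

convert("temp", 10.0, False)["type"]  # 'Gauge'
convert("temp", 4.0, True)["type"]    # 'Sum'
```

Marking the sum non-monotonic sidesteps the trap mentioned above: a stream that has only shown positive increments so far is not safe to call monotonic.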
A
Okay, I will jump in, because I'm curious about something that we discussed exactly about this: is there aggregation in the OpenTelemetry processor before the instrument gets to the exporter?
A
Yeah, taking the simple example, the question is: if we get a bunch of plus and minus values in the receiver from statsd, and they get turned into an up-down counter, do we have to do any aggregation in the receiver, or will it just work?
C
Oh right, that is actually a question that I believe is coming immediately next in this document. Oh okay, I know that's coming, good. So, Leslie, does it feel like we've at least answered the first of these topics? Yeah, I think so, hopefully. Cool.
C
I can kind of, well, I've read ahead, so I sort of know what's coming, but yeah. Let's talk about statsd sets.
B
Okay, yes, so we had a discussion with Michael yesterday about this. There's the question of where you do some aggregation and processing: is it in the receiver or the processor? I think you've said it's better to do everything in the processor, because then it's generic, but we then realized that if you do that, I think there's no way you can support statsd sets, which we think might be okay.
B
We think it might be okay to just forget about those. But because sets are a special type that have no relationship to any OpenTelemetry type, you'd have to do the aggregation and processing, that is, you'd have to process the set over a flush interval in the receiver and then convert it to an OpenTelemetry gauge in order to do anything with it at all. Or you can just say: we don't implement sets, sorry.
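The receiver-side handling being described could look something like this: a statsd set counts distinct members seen during a flush interval, and the cardinality is emitted as a gauge at flush time. The class and method names are illustrative assumptions, not the collector's API.

```python
class SetAggregator:
    """Aggregate statsd set members over one flush interval."""

    def __init__(self):
        self.values = {}  # metric name -> set of observed members

    def record(self, name, member):
        self.values.setdefault(name, set()).add(member)

    def flush(self):
        """Emit one gauge point per set (its cardinality), then reset."""
        points = {name: len(members) for name, members in self.values.items()}
        self.values.clear()
        return points

agg = SetAggregator()
for uid in ["a", "b", "a", "c"]:
    agg.record("unique_users", uid)
agg.flush()  # {'unique_users': 3}
```

This illustrates why sets are awkward to push downstream: the distinct-member state has to live somewhere before anything OTLP-shaped can be produced.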
C
Yes, I feel like I have opinions on this, but I don't actually know how widely used these are or how important it is to support them. I mean, we do have the notion of label sets built into OpenTelemetry, so in some ways you could model these sets as just boolean variables and then count how many there are.
C
Actually, I don't really understand what sets are doing, whether there's a cardinality-count question or whether there's some sort of, I don't quite get it, I'm being honest at this point.
C
Okay, good, so at least my understanding is what I think it is. There is some discussion, I don't want to find it right now, but search the document, in our API spec, talking about how for observer instruments there is this notion of a snapshot being taken for one particular callback execution, which therefore lets you generate one coherent set at one moment in time, and this is the closest I've gotten to thinking or talking about a set in the OpenTelemetry sense.
C
But if you have one of these snapshots, i.e. it came from an observer instrument, meaning it was executed by a callback, then at least that callback, semantically, was given the opportunity to run through its entire set and output one label set per item in its set. Therefore you could use a value observer instrument and just output a boolean variable with one label set per value, and then we have something like a count reducer; there's some sort of way.
B
I thought histograms were fine, you can convert them. Okay, there's no OpenTelemetry type you can convert them to, I thought.
F
They can be converted into gauges, I think. I will have a separate call with Josh tomorrow, so I will invite you, Wesley. Okay.
C
Yeah, just as public information, I do have a call set up with this team to talk about this in more depth. Usually, when I think of histograms and statsd, they come in as raw points, and so we've been kind of circling around a conversation about OTLP support for raw data points, how you can almost represent them as just individual values in your repeated value array, except they don't have independent timestamps right now in OTLP.
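The alternative to forwarding every raw point individually is to aggregate raw statsd timer/histogram values into an explicit-boundary histogram at flush time. This is a sketch of that idea with upper-inclusive ("le"-style) buckets; the boundary values are arbitrary and the function name is made up.

```python
import bisect

def to_histogram(points, boundaries):
    """Return (bucket_counts, count, sum) for raw points.

    bucket_counts has len(boundaries) + 1 entries; the last bucket
    holds everything above the largest boundary. Boundaries are
    upper-inclusive, so a point equal to a boundary falls in that bucket.
    """
    counts = [0] * (len(boundaries) + 1)
    for p in points:
        # bisect_left gives the first boundary >= p, i.e. 'le' semantics.
        counts[bisect.bisect_left(boundaries, p)] += 1
    return counts, len(points), sum(points)

to_histogram([1, 5, 5, 12, 300], [10, 100])  # ([3, 1, 1], 5, 323)
```

Collapsing N raw points into one histogram point per flush interval is what avoids the per-point pipeline cost mentioned in the next remarks.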
C
So this is another one of these areas where, well, we're talking about how ideally you'd have a separate processor that can do this aggregation inside the receiver, but not dedicated to the statsd code, like a separate processor stage. But if so, you're going to end up converting every histogram data point from statsd into an individual OTLP data point and then passing it to another pipeline stage, which could be relatively expensive, I think.
C
So there's probably going to be a built-in motivation to combine these two pieces of code. I'm not sure of that, though. I do have a strong interest in whether we can use the OTel Go SDK as a pipeline stage inside the collector. I've said that a couple of times; maybe at tomorrow's meeting we can talk about that idea in depth.
C
Thank you. Anyone else who wants to talk about statsd and OTLP, maybe let me know, and we can try and get you invited. Next on our agenda, there is a feedback request from Amman. Are you on the call today? Yep, cool, hi, hello. Would you like to let us know what's going on here?
D
Yeah, so what I'm looking for is the ability to add data point labels to metrics from a collector instance. What I propose here is a couple of potential solutions for that. I know that there exists the resource attribute processor, which allows you to add resource attributes, but currently there's no way to add data point labels; in my example I showed what exactly I mean by that.
D
One thing I noticed, actually, specifically for my use case: I'm using the Prometheus remote write exporter, and that doesn't have the ability to convert resource attributes into Prometheus labels. All it does right now is take the data point labels and convert them to Prometheus labels. So I put that down as one of the potential solutions, that that is some missing functionality.
G
I was suggesting that, but now that you bring up the idea of this Prometheus remote write exporter, you know, it's not so bespoke, just for your back end, and the idea of adding this logic there really doesn't feel right. It could potentially belong in other places too, if there are other sort of canned exporters that you might want to use.
C
Yeah, I personally have been studying the Prometheus server quite a bit lately, and I'm starting to understand that whole system in greater depth. One of the things that is a major part of the Prometheus server is that, after you've pulled some metric data, you have this relabeling step, and I'm starting to see the OTel collector as being at least partially a stand-in for the Prometheus server here.
C
So you should be able to apply relabeling of various sorts, which is part of what that system does. I'm not saying we should try to do everything Prometheus does; that's why Prometheus will continue to exist. But my question for this, and I haven't read through the whole issue yet, would be: are there conditional expressions?
C
Can I say: I first want to match this label with a regular expression, or a key-value conjunction, or something like that, before I apply some relabeling? And then, I know Prometheus has two stages of relabeling, and I'm wondering if that can be expressed here as well. One of them is at the target phase, and one of them is the global phase. It's a pretty complicated configuration, but this looks like the kind of stuff we want, I would agree.
C
It raises a sort of separate question about whether we should be copying those resources into the Prometheus remote write. I think this is what I mean: one of the things that Prometheus would do in this situation is let you configure it, so you can decide which labels you want to record from the resources and which ones are sort of lost forever, and that's what these relabeling rules do. But I personally am not so afraid of the explosion of resource label information; I'd like to have more resources, as long as the cardinality is not too high. So I don't see why we don't just apply all the resources for the Prometheus remote write, and would that make things better for you? And stepping back a bit, there's been a persistent question here: what's the difference between OpenTelemetry and OpenMetrics, which can be rephrased as, what's the difference between what we're trying to do, which is push metric data, and what Prometheus does, which is pull metric data?
C
So one of the key differences I'm starting to see is not about cumulative versus delta, or about whether I call you or you call me; it's this question of where the resources come from. The resources come from the instrumented process itself when we push data, but the resources come from Prometheus when you pull data. That's what the service discovery does: it figures out what your resources are, and then it contacts you based on the result of service discovery. So absolutely we should be attaching resources, because that's what you have to do in a push-based model.
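Folding resource attributes into data-point labels, as argued for above, could be sketched like this. The function name and the collision policy (point labels win) are illustrative assumptions, not the collector's actual exporter code.

```python
def resource_to_labels(resource_attrs, point_labels):
    """Merge resource attributes into a data point's label set.

    Existing point labels win on collision, so instrumentation-level
    labels are never silently overwritten by resource attributes.
    """
    merged = dict(resource_attrs)
    merged.update(point_labels)
    return merged

resource_to_labels(
    {"service.name": "checkout", "host.name": "ip-10-0-0-1"},
    {"http.status_code": "200"},
)
# all three keys end up on the exported point
```

In a push model this merge (or a configurable subset of it, in the spirit of Prometheus relabeling rules) is the only chance the backend gets to see the resource information.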
D
Yeah, so I guess overall, that was one of my potential solutions. If you can attach resource attributes as labels which get exported to Prometheus, then that would satisfy my use case, because all I have to do is use the resource processor to add resource attributes. So I guess if anybody has comments, if they could leave them on the GitHub issue, that would be appreciated, because currently I'm in the process of trying to figure out which solution to proceed with here.
C
Cool, yeah, it's a good reminder for us to follow issues in the collector repo. I triage through the ones in the spec repo, and often in the Go repo, but these are in the collector repo, and that's the problem with the metrics project: we're spread across all the repos right now, so I forget to go looking in the collector repo. Cool, I will add this to my list, and thanks for such a well-written-up issue.
J
Yeah, so hi, my name is Chris Wildman. This is the first contribution I've made to OpenTelemetry, so bear with me here. Joshua had made an issue that we are missing the semantic conventions for messaging in metrics, and so this is my stab at trying to do that. I'm looking to get feedback on it.
C
Great. Anybody else who would like to talk about this right now? I think we should all catch up on this issue as well.
C
One of the things that I wonder when I see this type of issue, just at the high level: are these semantic conventions ones that also apply to tracing, or are they definitely metric-specific? Because a lot of the semantic conventions we make should hopefully be cross-signal, so that we don't need to call these out specifically as metric semantic conventions.
J
Totally, and I mean, honestly, all these labels are pulled directly from the span tracing messaging conventions, but I only dropped the labels, or attributes, that were high cardinality, and I did think a bit about how to generalize this transformation, because I did kind of think it exists. Yeah.
C
Okay, cool. Do you want to say more on that?
C
I see, all right, that sounds well reasoned then, and even better. I was going to say something on that same topic, basically that we're going to keep running into this problem of having a number of semantic conventions that are defined for tracing, and then, oh, we want to use the same set for metrics; let's not copy an entire document and scratch out three of them because they're high cardinality. And I was just also making that connection with Prometheus, just to show you what I mean.
C
Well, you get your Kubernetes pod, these are your resources, they come from the service discovery, and you're not going to want to pass through every single one of these. Ideally they're not high cardinality, but maybe that's the problem: seven or eight of these are, and it's like we almost want to combine all of our documents into "these are all the semantic conventions" and then just shade the ones that are probably high cardinality, so that you just know these are not going to be used for metrics by default.
G
Joshua, I made a suggestion in the OpenTelemetry specification, in the spec SIG, for us to make a single place where we define all these terms, so that we wouldn't have to recreate them, you know, copy and paste them. It got punted until after GA, but I wonder if it would make sense for us to just kind of do this refactoring as we are putting the metrics work together.
C
Let's see. There are too many Joshes and Joshuas now, that's all I have to say.
K
Oh, thanks, Joshua. This is Joshua Galbraith from New Relic, and this is kind of going along with Chris's PR. We also put up PRs for database semantic conventions. This one is one that Justin and Yuki and I from New Relic worked on together over the past week, and we wanted to put it up here for feedback from others on our approach, and on whether anything seems like it might be missing here.
C
Cool, well, I'm going to acknowledge that I've fallen behind on reviews, so I will do all these things after the meeting. Others should as well.
C
All right, this is a lot of work. We've got work to do, everybody; let's review these PRs. And we also have function-as-a-service. All right.
K
Yeah, Michael's on the call; he's the author of this PR and can probably answer questions better than I can. Michael works on our serverless team here at New Relic.
L
Yeah, hey everyone. I actually had a similar question to Chris earlier. I basically leveraged, as consistently as possible, the same labels, or attributes, that are available for trace semantics in the function-as-a-service metrics semantics, and I realized that I wasn't really defining them in the metrics specification.
L
I was just sort of linking to the trace semantics, and I wasn't sure what our expectation was there: whether we wanted to redefine them for a metrics use case, or whether we wanted a central location.
L
That is, something that just defines, say, for example, a function trigger type, and then both the trace semantics and the metric semantics reference that central definition. But yeah, I ran into a similar situation to Chris, where I wasn't sure what the best approach was there.
C
Yeah, I mean, I just don't like seeing us duplicate this stuff; it's more work and it's also more to read, so I'm in favor of a restructuring. I think there are going to be people that still don't want that to block GA, which is fine, but besides, GA for metrics is not that close, so let's not worry about it, and we should talk about that as well. Cool, I'm less likely to have useful feedback on these.
C
I don't actually use functions-as-a-service myself right now. Cool, and then, oh my gosh, we're drowning in work.
K
So this was an older PR that I felt was possibly related, or kind of overlapped in scope, with the one that I opened.
K
Yeah, so in general, I'm looking for help on this one, in terms of a PR to help close these two issues, and I'm looking for folks who have a lot more familiarity with RPC or gRPC. So if there are any folks from Google who want to pitch in on this one, I'm happy to help coordinate.
M
Yeah, so I can get us in touch with the gRPC team, or we can just route this through Aaron. Probably Aaron is best, if you're okay to take this on, Aaron.
N
Yeah, I think there are also some RPC conventions for trace as well, like we mentioned for all the other ones. I don't know how similar those can be.
M
Yep, I don't know what those are right now. So do you mind picking this up, is that okay, Aaron? Like, collaborating with Josh.
C
That's what we're after, and I think in the past there's been some confusion, because people like me will say: isn't there this nice idea that we could take your span events and turn them into metric events automatically, and not have to specify so many conventions twice for RPCs, because you've already got trace conventions? And although that sounds like a nice idea, what we found in reality is that there's either a performance question that gets in the way, or a sampling question that gets in the way, like if you're going to be sampling.
K
The help here, I think, should kind of put us at more or less parity with what exists currently on the trace side for semantic conventions, if we can get these reviewed and through in the next week or so.
C
Okay, but it shouldn't be that controversial, I would say, or so it sounds. All right, we are close to the bottom of our agenda. I think I may have put a lot of these links in; these are just links to open PRs at this point. Aaron has been doing great work on this one.
N
Yeah, I think we've got two approvals, and I just basically updated some more comments from Tigran, who didn't approve; hopefully he's good to approve if he looks back through it now. I'm just updating some of the network stuff, but I think more or less there are just some small, nitty-gritty descriptions that are left. But yeah, if people could take a look, that'd be great.
O
Yeah, I just wanted to second that. I took a look, and I think it's right there on the edge. If you've already taken a look at this and you've left a comment, please go back and take another look. It's been sitting for a little while, and it's not deserving of that; it's a really great PR. So please go back and plus-one it if you haven't yet.
C
I put a link next in this list here, to one that I filed like 10 minutes before the meeting, so you shouldn't have read it yet, but it was one I promised to file soon in a recent meeting, so you can see it here. I didn't fill in a lot of detail, because I want this to be a discussion topic. This is a sort of step towards bridging with Prometheus, and I think it's a pretty critical part of what Prometheus gives the user.
C
So we are interested in this, at least I am, so take a look, and if you have thoughts or questions, maybe write them up. I think I gave this three minutes of thought before I wrote it, but I've been wanting to write something for a while, so we'll see.
C
I know from talking to people here at Lightstep that it's commonly something people monitor, so it's definitely something people will miss. If you're asked how you could do the metrics reporting that you have without a Prometheus server, this is one of the things that's going to be missing. So, the other ongoing item that's been recurring in this meeting for weeks and weeks and weeks is a question about histograms.
C
I personally don't have anything new on this topic, other than that I still keep thinking about it, and I have now read through your comments here, but I haven't formulated what I feel like I should say to them. If anybody else has feelings or thoughts, I'm interested, especially if they're new feelings and thoughts.
K
Yuki, I think, is on the call; I don't know if he has further ideas to discuss here beyond what's been commented.
A
Yeah, I think the simple answer was no, but I can connect you with somebody here who can talk more intelligently about this. The reference that you have here, that you found, is our updated version that Joshua asked for, so that is the reference that we should be using, and I can connect you with somebody here to discuss.
F
Is this talking about the recommendation of DDSketch? I'm just curious, because we are looking at putting histograms into statsd as well, and I'm just checking whether this is the one we should follow, if this is the standard. I had a good discussion with Michael yesterday.
C
At least one thing I want to make very clear, and I hope that we all remember, is that there are two questions being addressed in this discussion. One is: can we have variable-width, or variable-boundary, bucket histograms that adjust to the data? And the other is: can we compress these really well, so that they're relatively inexpensive?
C
I think those are the two questions, because even if you're using DDSketch or whatever, you've answered the first question, but you haven't necessarily answered the second question unless we all agree on an encoding, and any encoding that we all agree on is going to require a bunch of vendors and open-source systems to all implement it. So having a bunch of encodings is going to be unacceptable.
C
I like the idea of mergeable histograms, which sort of lets you punt on the encoding question; as long as I have a way that I can merge my histograms, I'm less concerned about how the compression looks. The problem is that as soon as you look at the way Prometheus systems work, and all the systems surrounding them, we've got this kind of assumption that your histograms are going to be lined up using exact boundaries, and that there's not a merge function.
C
You just have to know your boundaries up front. I stated last week how I started to have this appreciation. I was looking again at the circllhist paper, which Yuki also linked to in this conversation, and through a sort of back channel I learned a little bit more about circllhist.
C
So we need to remember also that there are patents to watch out for in this space, and so not only do we have a Prometheus compatibility question and a compression question, we have an intellectual property question. I don't know what to do about this one, but it is, I think, our only big, big, big question that remains here, so I think we should leave it open and keep talking about it.
H
Yeah, I think I get your opinion. The good case we want is that we have a small set of common formats, which are linear, exponential, and, I'm also proposing, a hybrid of linear and exponential. Those are the three common things everybody understands, and then there's the custom one, where for every bucket you have an explicit boundary. If you can't do the standard three, you fall into the custom category.
C
One of the things that I find irritating, and I don't know what to do about, is that the OpenMetrics, or the Prometheus, spec for histogram boundaries is based on this "le" label, less than or equal, which means that you specify the upper bound inclusive and the lower bound exclusive, and, stemming from our history with OpenCensus, we've got the opposite in our spec. It just means that the boundary conditions don't line up.
C
The
worst
case
for
the
sort
of
error
ratio
of
this
histogram
and
that's
unfortunate-
and
I
don't
know
if
we
should
do
something
about
that
either.
But
I'm
inclined
to
agree
with
the
position
you
just
took
uk
about
basically
just
well.
Let's,
let's
call
it
here.
Compression
is
a
nice
to
have,
but
we
don't
need
special
encodings.
We
can
get
pretty
good
support
with
you
know
this
exponential
bucket
or
log
linear,
some
of
us,
some
people
call
it.
C
You
know,
and
the
explicit
is
always
a
good
fallback,
although
it
doesn't
compress
very
well
and
any
one
of
our
representations
can
losslessly
be
put
into
explicit
boundaries.
Therefore,
we
could
get
to
ga.
I
think.
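The lossless fallback just described can be sketched in one line: any of the shared bucketing schemes (here, exponential with a fixed base) expands into explicit boundaries, so a backend that only understands explicit buckets can still ingest the data. The function name and parameters are illustrative.

```python
def exponential_boundaries(base, start, n):
    """Explicit boundaries for n exponential buckets: start * base**i."""
    return [start * base**i for i in range(n)]

exponential_boundaries(2.0, 1.0, 4)  # [1.0, 2.0, 4.0, 8.0]
```

The reverse direction is what doesn't work in general: arbitrary explicit boundaries usually cannot be re-expressed as one of the compact parametric schemes, which is why explicit is the fallback rather than the preferred form.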
H
Yeah, I'm not too concerned about the less-than-or-equal versus greater-than-or-equal; we could choose either. It's just that you have to wire it to say: yes, it's the "le" version. If we have the "ge" version, bad luck, there will be loss of fidelity during the transformation, but if your buckets are dense enough, your users may not even notice it. That's great, yeah. Thank you.
C
Yeah, yes, so I guess we should keep moving on the agenda and look at the rest of the items, but this is one where, everyone, please keep thinking about it. There are a lot of factors at play, and I don't know what's best.
C
Okay, on that note, okeydoke, there are a couple of minor issues, and since John Watson does not appear to be on the call, I propose that we may not want to talk about all of them, but just the one about batch observers.
C
Honestly, let's not talk about that one, because John's not here, but you might take a position. Batch observers are a sort of callback that can fire multiple metrics, which is really practically useful, because you call ReadMemStats or something, and you have 16 metrics and you'd like to fire them all.
C
So there was a question there about duplicate instrument registration, and it definitely feels like we're off in the weeds talking about it. So if you care about the weeds of duplicate instrumentation, please take a look. It is way too much conversation for something as minor as this, but go for it, and I won't bother you anymore with it.
C
I think now would be a good time for us to have a free, open discussion. Anybody want to talk about this one, labels versus attributes? It was briefly brought up at the spec SIG this week. It's been filed in this issue.
C
It's been filed for three-plus weeks, but this question has been around, as far as I'm aware, from the beginning of OpenTelemetry, starting with confusion generally over the word tags, the word attribute, and the word label, and it doesn't go away. And yet, I think the sad thing is, the user is not any better off for having us now clearly define labels and attributes as separate things. So I like the word label. Does anybody here think we should address this in OpenTelemetry as a group, or want to talk about it?
O
To go on the record: I actually prefer labels, but honestly, labels or attributes are fine; it's just that having one is ideal. I don't know if we want to talk too much about it. I think Bogdan and some of the other people from the specification SIG would probably be needed to have a meaningful conversation about this, because I think there are positions on the other side that aren't being represented here.
J
I think it's especially important when we're talking about consolidating the references to the labels or attributes that are shared between the two. Yeah, we're going to need one name if we're going to point at them all in one place.
C
I'd like to highlight one thing about that issue: it's also got a trace-spec label attached to it, which means it's one of the blocking P1 issues for freezing the trace portion of the specification. But yeah, any feedback that can help us arrive at a resolution for this: this is, like, top of the agenda for Tuesday's spec meeting.
C
C
That's the truth to this, and so maybe that's a better way of phrasing it. And why do I say I like the word label? I don't know; I've been working with it for a long time, and it's shorter than attributes. So those are pretty silly reasons, I guess. Is anyone going to fight that? I think we might end up with the word attribute if we don't fight it. So that's.
O
C
G
I have a clarification question, just so that I'm clear: when we're talking about consolidating down to one term, maybe down to the term attributes, are we also going to change the semantics so that metrics could have attribute values that are not strings?
C
So we have the same key-value type everywhere, and so any notion of there being a restriction on metric labels is purely, like, in the API. And what you see at the point of export: I'm looking at a metric event that has labels that have to be strings, but the resources are there, and they don't have to be strings, and I'm going to squash them together.
C
So now I'm taking boolean- and integer-valued resources and I'm combining them with string-only-valued labels, and it doesn't make any sense. I have to handle those multi-typed values anyway, so I don't think it makes a semantic difference one way or another. It's just a performance consideration, and I don't know that it really matters.
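The squash-them-together step described above can be sketched as follows. Since the merged attribute set mixes value types no matter what, a string-only restriction on metric labels buys the consumer nothing. This is a hypothetical helper, not an OpenTelemetry API:

```python
def merge_attributes(resource_attrs: dict, metric_labels: dict) -> dict:
    """Combine typed resource attributes with string-valued metric labels.

    The resource side may carry booleans, ints, or floats; the metric
    labels are strings only. The consumer of the merged set still has to
    handle every value type, so the string restriction on labels does not
    simplify anything downstream.
    """
    merged = dict(resource_attrs)   # typed values (bool, int, float, str)
    merged.update(metric_labels)    # string values; labels win on key conflict
    return merged

combined = merge_attributes(
    {"host.cpu_count": 8, "container.privileged": False},
    {"http.method": "GET", "http.status_code": "200"},
)
# combined now mixes int, bool, and str values in one attribute set.
```

The attribute keys here are made-up examples; the point is only that the merged map is inherently multi-typed at the point of export.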
C
Yeah, this topic bores me, so I don't like to get hung up on that, like: oh, it can only be a string, or whatever. That doesn't change the semantics; therefore, I don't think it belongs in the API spec. But whatever, that's my position. There's one more link here: a dictionary of common attribute/label definitions.
G
I
G
Where the link is, I would love for us to revisit this. But also, as we have these PRs that are related to semantic conventions, I don't think it makes sense to bog down those PRs with a big refactor of the rest of the specs. So maybe leaving them just as they are written, with a bunch of duplication, is fine, and then we can do some DRY work later to clean them up.
C
J
Since we have a lot of PRs around semantic conventions right now, the one thing I would ask real quick is: are we going to use the YAML format? Like, we should be consistent on these PRs if we're going to.
G
I was surprised to see it in yours, and actually I didn't bring it up, but I should have, because I think that some changes will need to be made to the generator that is powered by that YAML, to be able to generate the tables like we want in these semantic conventions, and then also to generate constants in code.
N
So I also tried the same thing for my PR, but there's no column for, like, instrument type and stuff like that, so I kind of just gave up on it. Yeah.
G
It's very tracing-specific right now, but I vote yes. I think that should be another issue and another PR, probably, to fix that generator and then rework these semantic conventions.
C
All right, thank you. Good to know that that is not working. Anyway, keep up the good work, everyone. Thank you, see you next week, the earlier time swaps, and happy long weekend, hopefully, if you're fortunate to get one.