From YouTube: 2021-11-02 meeting
D
Yeah, so I can take this on. I attended the client-side SIG, which is a new SIG, and we're working on semantic conventions for client-side telemetry. The first thing we're looking at is updating the resource spec.
D
What we want to do is be able to identify telemetry coming from client-side devices. There's already a device namespace on the resource, and we were wondering if just the presence of the device is enough to say: yes, this is coming from a client side, or if we should add some other flag or field for this.
A
Any history on this, by the way?
D
So the device namespace on the resource — is that only going to be present for client side?
D
Okay, okay, so the device...
D
Yeah, so I'm just wondering: is the device currently present in any instrumentation? I guess — is it already being added?
H
And then we also had a follow-up question — apologies, we're a bit late — about how to best represent web browsers. Similarly, there's the device part of the resource spec that's there for mobile phones and computers and things like that, but I don't think there's any sort of defined semantic conventions for how we would capture — or rather, how we would write — a resource for something that's just captured in a web browser, say on a mobile phone or a desktop.
B
Yeah, so I'm reviewing the log data model — we want to try to stabilize it — and one of the fields we have there is the trace flags. We have trace id and span id, which are very reasonable: if you're emitting a log record in the context of a trace — you actually have a trace, you have a span — you record the trace id and span id, and the log record is now associated with that span and that trace. That's fine, that's great!
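A record like the one being described might be sketched as follows — plain Python with illustrative field names loosely following the log data model discussion, not the actual OTLP schema or SDK API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LogRecord:
    """Minimal sketch of a log record carrying trace context.

    trace_id / span_id associate the record with a span; trace_flags
    carries the W3C sampled bit. Field names are illustrative only.
    """
    body: str
    trace_id: Optional[str] = None   # 32 hex chars when present
    span_id: Optional[str] = None    # 16 hex chars when present
    trace_flags: int = 0             # bit 0 = sampled

# A record emitted inside a sampled span's context:
record = LogRecord(
    body="user logged in",
    trace_id="4bf92f3577b34da6a3ce929d0e0e4736",
    span_id="00f067aa0ba902b7",
    trace_flags=0x01,  # sampled bit set
)
assert record.trace_flags & 0x01  # the associated trace was sampled
```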
J
The flags use — so the reason for this was: you might have log generation with traces that are not sampled, so you might get log records with injected trace information, but you'd never see the spans, because the span itself was not sampled. This was a way to indicate that the log record was associated with the trace, but that the span was never going to be visible anywhere. — Okay, okay, that's...
B
The trace flags say that the bit is set to one when the trace is definitely selected to be present. The fact that the bit is set to zero — the flag is set to zero — tells you that maybe it should be present, maybe not. Am I wrong in my understanding of how the sampled flag works? And if that's the case, if it's a "maybe," how is that useful? I don't quite understand.
B
No — if it's a one, it's definite: it means it's definitely selected, and everybody that follows from that should keep it; nobody should drop the further spans coming from that. That's great, but it doesn't really help me as somebody who reads the log record and now needs to know whether I can establish a connection to that trace or not. How do I know?
K
As for how I use it — I think this is actually specified, and Google Cloud Logging uses it. There's a magic flag where you set a sampled bit to one; it's never set to zero — you just don't set it if you're not sampled, effectively. This is part of Google Cloud Logging, and I think it's used to optimize what queries run when you first hit a page.
K
So if there's a sampled trace available, it will immediately try to link those traces; if it doesn't know whether it's linked, I think it delays something. It's like a silly optimization, but I know it's part of the Google logging spec, that's for sure. I don't know if that's actually useful enough for you to keep it, but I know it's there. Effectively, in Google Cloud Logging we actually want the sampled bit, if it's available, to be ingested for that little optimization.
B
What I'm hearing is that the sampled flag actually has three states in Google Cloud Logging, but that's not in the W3C definition and it's not in OpenTelemetry. In OpenTelemetry we have two states. One says: this trace is definitely selected for sampling; it should be present, all the spans should be present, you can be sure that it is present. The second state, which is zero, says: I don't know.
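For reference, the bit being debated lives in the last field of the W3C `traceparent` header. A minimal parser, assuming a well-formed version-00 header, might look like this sketch:

```python
def parse_traceparent(header: str) -> dict:
    """Parse a well-formed W3C version-00 traceparent header.

    Format: version-traceid-spanid-flags, e.g.
    00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
    Bit 0 of the flags byte is the sampled flag: 1 means the trace was
    definitely selected; 0 means "I don't know" (maybe not sampled).
    No validation is done here -- this is illustrative only.
    """
    version, trace_id, span_id, flags = header.split("-")
    sampled = bool(int(flags, 16) & 0x01)
    return {"trace_id": trace_id, "span_id": span_id, "sampled": sampled}

ctx = parse_traceparent(
    "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
print(ctx["sampled"])  # True -- definitely selected for sampling
```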
I
We're debating an unspecified feature of the W3C — it's totally ambiguous. There's this concept written up called multi-tenancy. No, no, it's not explained: the idea that one operation, one participant in a trace, could decide to sample but send its span to a different location than the others. It's just a little bit ambiguous.
I
So I think what Bogdan said is as true as we can say: we'll set it when we're recording it, and we won't set it otherwise. I did update your issue — there's this new proposal about how OTel can solve this specific problem of being more specific about whether you are sampled or not in a multi-tenant arrangement.
I
But I don't think it's what you actually asked. My conclusion here is that the span should include the trace flags as well, not that the logs should question this field.
B
Okay, hold on a second with the span. If I understand correctly what you and Bogdan are saying: if every span that leaves the OpenTelemetry SDK is going to have the sampled flag set to one, it means nobody who processes the traces after that can make a decision to drop the trace, because sampled flag equal to one is an indication that you should not drop any more — this one was selected for sampling, it should be present. That means the only place where a sampling decision can happen is in the OpenTelemetry SDK.
B
Is that correct? I'm not debating whether it's right or wrong — do I understand it correctly? Is that what we want to do?
I
What you're describing is called a head sampling decision — the one that you make when creating a context, not the one that you make on the way to storage. So I agree that there's some ambiguity, and the way I've understood it is that we will set the sampled flag based on the initial decision. Then, if someone does drop it, it is a decision that was made after the fact, and we have no way to retroactively change that flag.
B
When you say "someone," it could be the collector, right, as an intermediary — but the collector cannot make that decision anymore, because everything that it receives from the SDK already has those sampled flags set to one. The collector has to honor that; it cannot make a different decision there.
L
I don't think it does. I think it can make a different decision if and only if it has a reasonable expectation that it has all of the spans in a trace — or at least, that's when it should be able to make that decision. If it's seeing all the spans in the trace, it can decide: I'm going to drop everything from this trace, and nobody downstream from me will know it ever existed.
L
The same thing could happen at a storage system, right? The storage system could decide: I know I'm going to get all these spans for this trace, and I'm just going to drop them all, because its trace id starts with three — I don't like trace ids that start with three, they go away. So I think there are components that could decide.
I
Yeah, that also comes from our consistent-sampling work — the partially-sampled business that we've been allowing. So if a child span changes probability, then its parent is going to have the sampled flag set and it will not — and that would be the same as if a collector had dropped that span as a result of, say, tail sampling. Okay.
K
...issue here, but with metrics we actually rely on the sampled flag — for exemplars. That's how we do exemplar sampling: by default, we don't sample metric exemplars unless that sampled flag is set to one, because we kind of just assume head sampling is going to be the default here, or that there'll be some kind of head sampler that makes a decision, right? If that changes, our default for exemplars — we'll have to rethink that in metrics as well.
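The default being described here can be sketched as a simple gate — a hypothetical helper, not the actual SDK exemplar-reservoir API:

```python
SAMPLED_FLAG = 0x01  # bit 0 of the W3C trace flags

def maybe_record_exemplar(exemplars: list, value: float,
                          trace_flags: int,
                          trace_id: str, span_id: str) -> None:
    """Keep an exemplar only when the current span context is sampled.

    Mirrors the default discussed: exemplars are recorded only when the
    sampled bit is set, on the assumption that a head sampler made the
    decision. Purely illustrative.
    """
    if trace_flags & SAMPLED_FLAG:
        exemplars.append({"value": value,
                          "trace_id": trace_id,
                          "span_id": span_id})

exemplars = []
maybe_record_exemplar(exemplars, 12.5, 0x01, "trace-1", "span-1")  # kept
maybe_record_exemplar(exemplars, 99.0, 0x00, "trace-2", "span-2")  # dropped
print(len(exemplars))  # 1
```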
K
I think this is a concern across all three signals. Okay — your underlying question, though, of whether or not trace "sampled" in logs makes sense, or what's the use case: all I can tell you is that I know Cloud Logging has it and uses it, and it means the same thing as it does in OpenTelemetry.
B
That's... Josh, I'm just saying that what you guys just described means you should never observe a trace with the sampled flag equal to zero, because they only send things with the sampled flag equal to one, and if you decided not to send it, you just don't send it, right? You never send something with the sampled flag equal to zero. That's what I'm hearing.
B
Okay — you're saying at the source, let's say the OpenTelemetry SDK, we may emit the logs but not the traces in the context of which the log record was generated, and those logs will have the sampled flag set to zero. We know that we didn't emit the trace — it was not chosen for sampling, we dropped the trace — but we decided for some reason we still want to send the logs. So we sent the logs, but recorded in the trace flags that the sampled bit was equal to zero.
K
For example, if you're talking about audit logs, right — you're never going to drop your audit logs, they always come in, and you'll have a trace id. So you can correlate audit logs if you need to, even if you don't have a trace that's sampled. So it's still useful — it's just less useful than if you have a trace, but it's still useful to have it there. Whether or not you need that sampled flag to denote, you know, how to form your UI...
E
I want to confirm one thing: W3C, by the way they define a header that is put with the request, will never be able to cover cases where delayed sampling happens, or out-of-process sampling happens. So I think we should stop saying that W3C does not define that — it is not in the scope of that entity to define this behavior.
I
So one proposal is that we stop using the sampled flag and start using trace state if we'd like to be more correct, and that is part of this probability-sampling specification. But it just happens that way — it's not intentional.
I
I've posted a PR that's now undrafted — it's ready for review — and I will say that this is a pretty big spec change. It's a pretty complicated specification, and I think no matter what I do, someone's going to have trouble reading it. So my request is that, if you're interested, you try to read it and tell me where you stumble, or where I've gone astray in documenting this in a way that everyone can understand. I did try to separate normative from non-normative content.
I
Following this other discussion we've had about how you write a specification, I've also written up a little bit of a probabilistic test spec at the very end. It says you should be able to satisfy some basic statistics with this, and how to go about that test — which I think is everything we needed. So please have a look and see what you think.
C
Perfect, thank you so much — looking forward to the reviews. I do agree that it's a complex one, but let's see; the feedback will probably help with that. Okay, the next one is mine. Basically it's a small PR that has been sitting for a little while; it has enough approvals. It's basically saying that unset and empty environment variable values are the same.
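The behavior being standardized can be sketched like this — a hypothetical helper with a made-up variable name, not the SDK's actual configuration code:

```python
import os

def get_env(name: str, default: str) -> str:
    """Treat an unset variable and an empty-string variable the same:
    both fall back to the default, per the PR being discussed here."""
    value = os.environ.get(name)
    if value is None or value == "":
        return default
    return value

# "OTEL_EXAMPLE" is a hypothetical variable name for illustration.
os.environ["OTEL_EXAMPLE"] = ""             # set, but empty...
print(get_env("OTEL_EXAMPLE", "fallback"))  # fallback -- same as unset
```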
C
It has enough approvals, but something like this can actually be dangerous, so to speak. Still, it has been there for some days with no more complaints or changes requested, so this is your last opportunity. As I said, it has more than enough approvals.
C
I will go and merge it by the end of the day, unless somebody opposes that — so please take a look. Okay, the next one is Joshua's: a reminder of the state of OTel.
K
Yeah, so first I want to say thank you to everybody who has helped, because this has been an enormous effort to pull this document together. I think we're starting to get it pretty shored up. What I'd like to do — we're missing a sampling roadmap in the document, so for those of you involved with sampling, it'd be great to get that written: just, you know, boxes in place for a Gantt chart.
K
I'm hopefully going to bring this to the maintainers meeting on Monday to go through it and get approval; we're going to do an offline approval for folks who can't make that meeting. But I want to get some online discussion as well, which will happen over the next week. Once this thing is approved, we'll look at where the document itself can be released.
K
For, you know, anyone to read — it's already public, whatever — but I'd like to get the pieces and content into locations that people can consume, and that's the next step there. I just wanted to call that out and give people a heads-up. Please take a look at the specification work in there if you haven't yet, since it is directly relevant to this SIG, and make comments. Thanks, everybody.
C
Josh, by the way, the sampling portion is on me — I'm drafting something very small, and then I will pass it to Jim McD, who has been working on that, so hopefully later today we will have something. Sorry for the delay. — Awesome, yeah.
K
This is directly called out in the semantic conventions, actually — which are still experimental — but effectively we call out that we prefer using labels over different metric names. I think what you're seeing — especially since this is from the collector, in some contexts — is what happens when we adapt something from statsd, or from, say, JMX previously, because of the way those technologies work.
K
There was no such thing as labels in early versions, so you actually end up with different metric names, and we're adapting those into OpenTelemetry. When we look at Prometheus, when we look at — I think — DogStatsD, or modern statsd, and when we look at OpenTelemetry, we have labels. So if you look at the semantic conventions for metrics, there's a notion that we should be using labels to denote differences. Specifically around usage metrics: we defined usage such that there should be a label for each type of usage, and you should be able to add all those labels up to reach one, where usage is a percentage or whatever. Anyway, we have this defined in our semantic conventions.
G
So it's certainly a question about labels, but the nuance that may be different is that we basically have two data points: one represents a total, and the other represents a subtotal. So we might label the subtotals, but we would not want to label the total, and we would not want to add the total to the subtotals, because then it would not be a meaningful number anymore. That's the point of clarification I was looking for here: whether or not that's in the data model of the specification.
G
I don't know, but I just want to make sure we're clear on what we're talking about here.
B
Let me rephrase that. Let's say we define the semantic convention for a particular metric with a few attributes that can be recorded, or should be recorded, for that particular metric. The question now is: do you have to record all those attributes, or can you omit some of them? And if you omit, let's say, one of the attributes — what's the expectation, what are you going to record there?
B
So this one says that if you omit a particular attribute that's defined in the semantic conventions — you're emitting that metric and you do not provide that particular dimension, that particular attribute — then you have to record as the value the aggregate, the total across all the values of the dimension that you are omitting. That's what this says.
K
Yeah, you're right that this isn't covered in the specification, besides that you're supposed to infer the total from adding those labels together — that's basically what it says. Yes.
K
Well, I like the idea of having optional labels, but I think the most important question in that is whether or not you can have a metric data stream that has one attribute and then doesn't have any attribute — like, in the same metric data stream — and how much we want to promote that in the ecosystem. Or should we try to avoid it? I know there are some metric backends that have issues with that, I believe, or it does wonky things with automatic dashboarding.
K
If you have one set of attributes for some of your data streams and another set for others, that can do some weird things depending on what system you're looking at. But generally: is that something our data model allows? Do we want to encourage it? I think that's an open question — like, if it's in the same...
E
...data with two labels in one loop — and I think the answer should be no: in the same data stream we should be consistent. We either send two labels, or one label, or zero labels, all the time, right? Is that something we can agree on?
E
At the source level you have to choose only one, because that guarantees that you can do a simple removal of that label by summing all of the points — summing all of the points in that data stream. If you sum all the points and remove that label, you get a total, correct?
E
Whereas if you send me the total as well, the sum is not going to be so easy to do, because I have to do a sum where this label is set, or where this label is not empty, or whatever — I have to complicate it. So I think even Prometheus would have trouble with this, and they recommend for a source to always choose one: for CPU, for example, you always send the state, or you never send it.
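The "simple removal by summing" being described can be sketched like this — assuming a sum-aggregated metric where every point carries the `state` label and no pre-aggregated total is mixed into the stream:

```python
from collections import defaultdict

# One collection interval of a sum metric: (attributes, value) pairs.
# Every point carries the 'state' label; no total point is mixed in.
points = [
    ({"state": "user"},   30.0),
    ({"state": "system"}, 15.0),
    ({"state": "idle"},   55.0),
]

def drop_label(points, label):
    """Remove a label by summing the points that differ only in it."""
    merged = defaultdict(float)
    for attrs, value in points:
        key = tuple(sorted((k, v) for k, v in attrs.items() if k != label))
        merged[key] += value
    return dict(merged)

print(drop_label(points, "state"))  # {(): 100.0} -- the total falls out
```

Had an unlabeled total point (100.0) been mixed into the same stream, this naive sum would double-count to 200.0, which is exactly the problem raised above.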
B
Okay, but we're saying that for a particular source you still have the choice — you can make that choice. You can decide that you are following the semantic conventions, but for this particular source you're not including the state label, the state dimension — and we're saying that it is okay to do so. Is that true?
K
...do here, but the most important thing is that it should be consistent. You should consistently choose labels from the same meter that you're reporting metrics on. So if you're reporting a particular metric and you pick a label set from that same meter, you should have a consistent set of labels, or attributes, on that metric. That, I think, is something we should absolutely call out in our semantic conventions.
K
I don't know if we need to require it in the data model — I kind of don't want to require it in the data model — but I think we should just fundamentally have that as a semantic convention, because it leads to better behavior in backends and more consistency for users. If we don't keep that, I think it's bad.
G
I also want to think about this in a very general sense: you could have two different applications, right — maybe one written in Ruby and one written in Go — and they're trying to collect process metrics, and it really depends somewhat on the libraries available to them how difficult it is to get this information. In one case, they may only really be able to get a total.
G
In another case, they may only be able to get some of the subcomponents. And maybe not in this specific case, but in the scope of all possible metrics we may want to collect, there are probably lots of instances where different libraries and different languages are going to have access to different information, and yet it all can be represented as process CPU time. So this seems like a pretty important question to solve in a very general sense, and not just in terms of, like...
I
I'd like to say this: I believe this is specified in the data model. We also talk about what to do when removing attributes, and the behavior when removing an attribute is clarified as saying you should apply the default aggregation for the type that you're producing. This is specified so that a collector can do correct attribute removal, and if there's a semantic convention up here that says how to remove attributes...
I
...then it implies the correct behavior for this question. It says that if it's a sum, you should add the points up in the series, so the total should be computed by summing; whereas if this were a question about gauges, the total should be computed by averaging the gauges, or by computing a gauge distribution. That is clear in the data model, as far as I know, but we might have to revisit the way it's stated.
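The distinction being made — sums add up when an attribute is removed, gauges do not — can be illustrated in a few lines. This is a sketch of the idea only; the data model's actual wording governs:

```python
def aggregate_away(values, kind):
    """Default aggregation when an attribute is removed, per the point
    being made above: sum-typed points are added, while a gauge "total"
    is better expressed as an average (a gauge distribution is another
    option, not shown here).
    """
    if kind == "sum":
        return sum(values)
    if kind == "gauge":
        return sum(values) / len(values)
    raise ValueError(f"unknown kind: {kind}")

# Counter-style points from three series being merged:
print(aggregate_away([30.0, 15.0, 55.0], "sum"))    # 100.0
# Gauge-style points (e.g. temperatures) -- adding them is meaningless:
print(aggregate_away([30.0, 15.0, 55.0], "gauge"))  # ~33.3
```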
M
My understanding is aligned with what Josh mentioned, and I gave two concrete examples here. I think the first example is not allowed: if you report the same data in two different places in the same metric stream, this is a bug, and many backends would be totally confused. The second case is allowed, and I think we shouldn't stop it, because it is totally possible that you start with only one dimension and later you change the SDK to report more data.
M
As long as you can still merge them and the data is not duplicated, I think you're fine. Whether it's the recommendation or not is debatable — I think in general nobody would want to see that, but in practice you will see it, and it's hard for us to just push back saying this is not allowed.
B
So yeah, this is good, Josh. Can we say that if you emit a metric, the behavior should be as if it was aggregated as described in the data model? Maybe call that out, right?
K
Well, it is called out in the semantic conventions — I don't remember where, but I'm pretty sure there's a naming-convention guide around how, when you choose labels, things should naturally aggregate when you remove them. That is definitely called out in the semantic convention naming conventions. I don't know if it's also called out in the data model, but I'm fine adding what Riley writes here.
K
What I was trying to suggest is: if we see scenario number two — which will happen, although we don't want it to happen that often — I think that's totally fine. What I'm suggesting is that I'd like to go a bit further with the semantic conventions and say, from our semantic-convention standpoint, if you're instrumenting, all of metric A would come from the same meter — and anything that meter produces has the same set of labels — and everything for metric B would come from a different meter.
K
We actually haven't fixed the data model to have instrumentation library mean a different stream yet — but yes, it will be.
E
Ours are translating directly from MySQL, from Spanner metrics, or from other sources that are not OTLP, and we're not going via meters. So I think there has to be a comment at the data model level, and I don't like the specification referring only to meters and instruments. I think we should...
K
We can call it instrumentation library — I was just using "meter" for everyone's convenience. But yeah, for the collector it would be an instrumentation library: for a given instrumentation library you're recording against, you need to be consistent. But if you report against multiple different instrumentation libraries, I think it's okay to choose a different set — when we have optional attributes, it's okay to have a different set, as long as each instrumentation library is consistent about which ones it has chosen.
K
So, for example, if the collector has a receiver for MySQL, it should use the same set of attributes for every metric — for every stream in a particular metric id — that it uses in that receiver, if it's using the same instrumentation library, which it should be, right? — Correct, that's what I'm suggesting.
M
I have a question, Josh. If you start with a MySQL instrumentation library with three dimensions, and later we realize there's another dimension which is super important and we want to add it — do you suggest that we bump the major version of the instrumentation library, or that we just change the name to something totally different?
K
I think we have to bump the major version, or do something with our schema, because, again, depending on how you add a label — if you add a label that doesn't aggregate away properly, you have effectively broken your metrics API. Adding labels is very, very touchy with metrics; we need to be very careful about it. We should consider adding labels a breaking change for our users right now, until we figure out some more nuance to what that looks like.
I
Yeah, I agree with that statement. I think there is a future — in the distant future — where we can have a collector pipeline stage that automatically does the right thing here, but the right thing is not exactly easy to implement until we talk about late-arriving data and other hard challenges; if you don't have late-arriving data, this transformation is pretty easy to apply.
E
Even if you have — that's another topic, probably. Josh, I think there needs to be a comment — and probably this is related to our data model — that, for example, whenever we send a metric in the data model, we should include all the data points for that metric that we measured at that specific moment in time.
E
So essentially what I want: in the second example here, the point with the value of 100 and the point with the value of 80 should always be in the same metric stream — in the same Metric proto message — and not in separate proto messages.
E
What do you mean by that? So metrics are aggregated — whatever you happen to record up to the point when you start collecting or reporting. So there are two problems here: one is the reporting from the user; the second is the reporting from the library, through the pipeline, to the collector. I'm referring to the second one, not the first: whatever you reported up to t0, we will aggregate, and we'll report it all in the same message. That's what I'm saying.
K
All right — what are our action items, real quick? I think action item number one is that we need to clarify the semantic conventions around what optional attributes mean, because I have a discussion point later on the HTTP semantic conventions — we tried to implement those in Java, and I think we need to work on that spec a bit, and it's absolutely related to that bug right there. So I think we need to work on defining semantic conventions a little bit better. In terms of action items, there's a bit on what "optional" means.
K
There's a bit on this notion of: when you select attributes, what does that mean? Who wants to take ownership? I'm willing to take on some of this, but only after we get the OpenTelemetry roadmap out. Who has time to take on writing that work? Is that something you could do, Dan?
K
I like that — allowing languages to choose their own representation. You know, it needs to be a stable API, right, but I think each language should have the freedom to define their own stable API that's optimal for their implementation.
M
I think this is the SDK, and this is allowed for GA.
K
This is a good one. In the Java implementation there's actually a callback: we don't actually register a reader, we register a reader factory, and then there's a callback that passes a collect method to the reader factory to construct the actual reader that the SDK gets. This is one of those complications I think we discussed a lot in the SIG.
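The pattern being described — registering a factory that receives the provider's collect hook, rather than a pre-built reader — might be sketched like this. All names here are hypothetical; the Java SDK's real interfaces differ:

```python
from typing import Callable, List

class MetricReader:
    """A reader constructed *with* the provider's collect callable.

    This resolves the chicken-and-egg between pull (collect) and push
    (shutdown): the provider hands its collect method to the factory,
    and the resulting reader can pull on demand.
    """
    def __init__(self, collect: Callable[[], List[dict]]):
        self._collect = collect
    def read(self) -> List[dict]:     # pull side, driven by the reader
        return self._collect()
    def shutdown(self) -> None:       # push side, driven by the provider
        pass

class MeterProvider:
    def __init__(self):
        self._readers: List[MetricReader] = []
        self._metrics: List[dict] = [{"name": "demo.counter", "value": 1}]
    def register_reader_factory(
            self,
            factory: Callable[[Callable[[], List[dict]]], MetricReader]):
        # The factory receives the collect method and returns the reader.
        reader = factory(self._collect)
        self._readers.append(reader)
        return reader
    def _collect(self) -> List[dict]:
        return list(self._metrics)
    def shutdown(self) -> None:       # push-based shutdown fan-out
        for r in self._readers:
            r.shutdown()

provider = MeterProvider()
reader = provider.register_reader_factory(MetricReader)
print(reader.read())  # [{'name': 'demo.counter', 'value': 1}]
```

Note the bi-directional coupling discussed next still exists (provider holds readers for shutdown; readers hold the provider's collect), but the factory hides it from the user.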
K
I think there was even a document that identified what that looks like. It sounds like we need to clarify this a bit more, but they're totally right that there's a doubly-linked ownership: the meter provider owns a metric reader, with shutdown and force-flush, and there's also a collect call that the metric reader can make to the meter provider.
K
So they're not happy with the bi-directional ownership, but I don't know how we achieve both a push-based shutdown model and the pull-based collect model, because we're actually mixing push and pull. If anyone has an idea there, great, but I feel like we're caught between a rock and a hard place here, because of pull-based collectors — or readers — and push-based shutdown.
M
My take is that the control flow actually goes forward and backward for the pull case, but that is not the ownership. I think when people talk about ownership, it's not clear what type they mean — memory ownership or control ownership; there are many different ways of interpreting ownership. I guess the problem here is when people mention that a reader can be registered with multiple meter providers.
M
Do we allow a reader to be registered with multiple providers? We haven't mentioned that in the spec. Similarly, in the tracing spec we never mentioned whether a processor can be registered on multiple tracer providers — it is vague. But in the implementations, I think most of the language SIGs decided not to allow a reader to be shared — or, for tracing, not to allow a processor to be shared — although in some languages, I think, because that is not required...
E
Yeah, but you do that at runtime — it's not the API that disallows it.
M
I think we're running over time, so I'll assign this to myself, and I think this is allowed for GA. We can clarify some of the wording, but changing the flow — I think the control flow is something that we debated a lot and agreed on, so maybe that is after GA.
K
I don't think we're ever going to change the control flow. What I would respond with here is that we're going to clarify it, yeah, and maybe do that factory thing, but I wouldn't say we're going to change the notion that we can push a flush and close, and then collect backwards. That diagram at the end is not going to change.
E
Because if the user can delete the registered callback, calling the callback will fail. So you also need to clarify the memory — the object lifetime — in languages that are not GC'd.