From YouTube: 2022-09-27 meeting
Description
cncf-opentelemetry@cncf.io's Personal Meeting Room
D
Okay, let's start. Okay, thank you so much for joining. Let's go through the items. First one: metrics semconv-compatible generation. Please review it again.
B
Yes, I'm here. Yeah, so this is a PR in the build-tools that James opened. We have been lacking, for a while now, the ability to do our metric semantic conventions via the tooling — so it's just a markdown file. James worked on adding support for this in the semantic convention generator, and it had approvals from some folks, but there was more discussion and he made some, let's say, major changes.
B
The way the result is — I have reviewed it and I think it looks good, but we need the eyes of the people that reviewed before his changes, to make sure that everything is all right. That's basically it. I think this is important for the instrumentation initiative that is going on now, because this will drastically improve the experience of generating semantic conventions for metrics.

B
Yeah, I don't know if I can do that — probably I can't — but I agree that that should be the case. Yes.
D
Sweet, thank you so much for that. Okay — and this is an important project for instrumentation stability, so it's especially important for this.
B
If you scroll below, there is a list of issues that this issue tracks.
D
Sweet, thank you so much for that. Okay, next one: Tigran — decide if original and lowerCamelCase JSON keys should be supported. Please.
A
Yeah, so this is an issue with the JSON encoding of OTLP. By default, protobuf JSON implementations support two ways of naming the fields: one is to use the original names as they are declared in the proto files, and the other is to use the equivalent lowerCamelCase field names, where the names are converted automatically. Exporters are allowed to use either of the names, and parsers are required to accept either of these names in the payload.
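The two naming schemes A describes differ only by the mechanical snake_case-to-lowerCamelCase conversion that protobuf's JSON mapping applies. A minimal sketch of a lenient parser that accepts either form — plain Python for illustration, not the Collector's actual Go implementation:

```python
def to_lower_camel(name: str) -> str:
    """Convert a proto field name like 'dropped_attributes_count'
    to its lowerCamelCase JSON form 'droppedAttributesCount'."""
    first, *rest = name.split("_")
    return first + "".join(part.capitalize() for part in rest)

def get_field(json_obj: dict, proto_name: str):
    """Lenient lookup: accept the original proto field name or its
    lowerCamelCase equivalent, as protobuf JSON parsers must."""
    if proto_name in json_obj:
        return json_obj[proto_name]
    return json_obj.get(to_lower_camel(proto_name))
```

For example, `get_field({"traceId": "abc"}, "trace_id")` and `get_field({"trace_id": "abc"}, "trace_id")` both resolve to the same value, which is exactly the dual-acceptance behavior being debated.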
A
So this is the default behavior of protobuf implementations: the default that was chosen is lowerCamelCase, but optionally the exporters can also use the original names if they want to. This issue is about making a decision on whether in OTLP we want to support this behavior.
A
Or do we only need to support one way of field naming? Initially there was a very brief discussion — I think only Daniel expressed the opinion that we should support both — but now there is a PR that attempts to make sure this is explicitly listed in the specification. I think there is now a concern that this is an unnecessary complication, and there is no need to support both naming conventions. So this is the question I have for you: if you have any opinion about this, I'd like to hear it. Do you think we should support both? Do you think we should support one? Any arguments in favor of one or the other approach?
E
Just to clarify, in my opinion — I don't have a strong opinion one way or the other, I was just wondering if there's any benefit in restricting this. If the behavior of all the existing implementations is one thing, then why would we specify anything different than that? If there's no advantage to it, then I don't see a reason to do it. If there's even one single advantage, then fine — I don't have a strong opinion at all.
A
One small advantage is when you're re-implementing it from scratch, like we did in the Collector. In the Collector we don't use the protobuf implementation — we have our own implementation of JSON OTLP — and there you have to support both, which is fairly simple to do, no big deal, but there is a possibility of bugs there, like typos or something like that.

A
So I guess that small advantage is there: if you are doing the implementation, it's a nice thing to do — less chance of having bugs there.
E
Yeah, I was just going to say the same thing that Anthony did. We can always put it in our spec and say "you must do this", but drifting too far from the protobuf spec just makes it more annoying for any potential producers. If I have a database that wants to export OTLP for self-monitoring or something like that, staying as close as possible to the expected protobuf spec and precedent [is preferable].
A
So you're saying that what we gain in the simplification of the receiver implementation, we lose in complication from the exporter's perspective, because now they maybe can't use the built-in protobuf marshaller — they are forced to implement their own, maybe because what we require is impossible for some people.
F
Yeah, I don't know if it's additional complication, because the protobuf spec says that producers may provide an option to use the original name. So I think that anybody who has that enabled would have the option to disable it — potentially, if it doesn't conflict with other uses they have that require it to be enabled for some reason — but I think it would be non-obvious behavior, right? It's going to lead to systems that don't work, for reasons that the user won't immediately be able to understand.
E
So if I have that feature enabled and I just don't see any traces in my collector, my back end or whatever, then you can always read the documentation and figure out why that is. But it's just one more thing that you have to do.
E
I guess the question at the core is: is OTLP a protobuf-based protocol, or is it a subset of protobuf?
E
Also, I think some of the enum handling we restricted beyond the default. I would say I myself have been fairly consistent about arguing against deviations at each step along the way, and from what I remember, I think Anthony has as well.
A
So I don't know — maybe you can also comment on the issue. I know you already did, Daniel; maybe Anthony, you can as well.
F
[I'll] comment on the new PR, okay. I think Yuri's not here, but there also doesn't seem to be anybody else stepping up in this meeting saying we should deviate, which is probably a signal.
E
I would be interested in hearing from anyone — potentially at a vendor — who has implemented a custom receiver and not used an existing library, where this would be a problem.
A
Whoever has arguments in favor of or against one of the approaches, please comment on the PR. Jake, I think it would be good if you do as well. — Yeah, I've just approved; it looks good to me.
D
I think we're good. Thank you — perfect, thank you so much for raising that. Okay, next item: are changes to semantic conventions blocked, and for how long?
G
Yeah, that's me — Andrej here, hi. So I raised a couple of issues in the specification repository, sorry, and I got the feedback from Reiley that I should probably pause for now. If you scroll down to the comments — yeah, this one is probably from a private account, but the other ones are from Reiley Yang — it's related to this thing, and I wonder: does it mean we cannot introduce any changes to semantic conventions, for metrics or whatever else, and for how long?
H
First, the feedback we provided is not from an individual.
H
We had a quick meeting, so it's a group decision. The second point is that the current situation is a little bit concerning: this has been running for more than two years and we haven't shipped anything stable, so there's some fundamental issue we want to address. Before that, taking more PRs just increases the current experimental surface with nothing to show. So instead we're taking a systematic approach, looking at how we can get the first thing stable and then leverage the success we have.
H
There are other work streams to follow before that. My personal feeling is that these PRs are not going anywhere — we can continue to spend the energy from the community, having people review and getting them merged as experimental, but later we'll have to revisit everything. There's another work stream talking about how Elastic Common Schema can be aligned with OpenTelemetry. So all these things are very likely to reset whatever we have, especially for metrics. Even if you look at metrics today, it's not following the way the rest of the semantic conventions community is working.
H
We don't have a formal way of writing the YAMLs and generating from them. So my suggestion is that we save the energy here and solve the fundamental issue before we move forward; otherwise it's more like wasting energy and redoing 90% of the work. But meanwhile, I think some of the discussion — like the PRs mention things about the operating-system or process-level metrics — is really helpful, so I would suggest marking those PRs as draft and continuing to get feedback from others, but not merging them for now. As long as we have clarity that we think this is the right model, we can decide whether we should write that in the YAML format after we clean up the thing, or — since ECS already had that definition — whether we should just take it from ECS.
I
So what's likely to break is choosing what resource to attach your telemetry to — we're having discussions about what that means and what that looks like — and then any kind of attribute that you define that could be shared with a metric signal. There's event-based data and there's aggregated data, and time-series databases struggle with high cardinality, so we need to limit ourselves.
I
If an attribute is designed to be in an event store, it can have high cardinality; if it's designed to be in a time-series database, we have to be more careful about that cardinality. So calling out the attributes that you expect to be shared is important. We have some actual bugs from the Java instrumentation, where some very high-cardinality attributes in traces were shared with metrics and actually caused problems — storage issues and query-time performance bugs.
I
So those are the things that are likely to change, that I would be very careful of now. The rest of it will be about what allowable changes can come in the future.
I
So it's more about the evolution of telemetry, but those first two things are the concern: if your PR touches either of those two things — what resource to attach a metric to, what resource to attach a trace to — or if it touches any kind of label that you would expect to be shared between a metric and something else, where the cardinality is not explicitly reined in, then you should expect to have to come back and revisit it.
A
So I think the original question is still valid, right? What Andrej is asking is: are we blocked for an unknown period of time — months, years? I think that's a valid question. And as for what you said, Josh: I think we need to have specific guidance about what we do accept now versus what we think should not be accepted because it's subject to changes in the future. So maybe we should do that — let's have that guidance and apply it to these PRs. Maybe some of the PRs are fine.
A
They can be accepted, and some of them — because they are highly likely to change as a result of what we decide in the semantic conventions working group — we block for that reason. I do agree that we should not be blocking all of the evolution of semantic conventions in OpenTelemetry; that's probably not the right thing to do. Unless we know clearly that it will be unblocked in, let's say, two or three weeks — that would be fine, but we don't, right? So we can't just block it indefinitely.
A
I agree with that, Josh.
C
Can we make a distinction between semantic-convention metric names, semantic-convention attribute names, and semantic-convention resource attributes? Because I feel that Andrej — sorry if I get your name wrong — is proposing metrics: no new cardinality, literally just missing metrics. I think system uptime is really important; I think process uptime is really important. I know ECS has a different semantic convention, and we can support both — we're just writing conventions for what these names mean when they're used as metrics — and I feel like we're falling behind. Process memory utilization, too, seems really uncontroversial.
C
It's not going to increase cardinality. I just think all these things are reasonable and they shouldn't be stuck in this other debate about inflationary cardinality, which is about metric labels. And for the most part, I believe our resource conventions are about adding attributes to resources, which really should not — I want to stop repeating this — we should not be experiencing cardinality explosion because of resource attributes. Take Prometheus, which is like the golden model that we have right now.
C
Those resource attributes are things you join against: taking your existing cardinality, you find your target_info using instance and job — there are your resource attributes — and you can join them using a group_left expression in PromQL. There's no cardinality from new attributes. I just want to stop repeating that, and then we can move forward with what Josh and Reiley were talking about for the attribute names.
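The join C describes — attaching target_info's resource attributes to a series via the shared instance and job labels — would be a PromQL expression along the lines of `my_metric * on (instance, job) group_left (k8s_pod_name) target_info` (hypothetical metric and attribute names). Its effect on labels can be sketched in plain Python:

```python
def join_target_info(series_labels: dict, target_info_labels: dict) -> dict:
    """Attach resource attributes from a target_info series to a metric
    series sharing the same (instance, job) identity, mimicking a PromQL
    `on (instance, job) group_left` join. No new time series are created,
    so the metric's cardinality is unchanged."""
    join_keys = ("instance", "job")
    if all(series_labels.get(k) == target_info_labels.get(k) for k in join_keys):
        merged = dict(series_labels)
        # copy over every resource attribute except the join keys themselves
        for k, v in target_info_labels.items():
            if k not in join_keys:
                merged.setdefault(k, v)
        return merged
    return series_labels  # no matching target_info: leave the series as-is
```

This is only an illustration of the semantics, not Prometheus's actual join implementation; the point it demonstrates is C's claim that new resource attributes arrive via the join, not as extra series.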
I
Yeah — okay, sorry, go ahead. On this specific issue: I agree that these metrics are okay to add. There is one meta-concern that Tigran and I were discussing in the pending design doc, around how to handle identifying resource attributes, which is: where do you attach the metric?
I
Should a process metric be attached to a process resource, or should it be attached to a host resource? Today, all of our process metrics are attached to a host resource, and I don't expect that to change — we actually consider them kind of orthogonal to each other, and that is just a decision we've made.
I
If I'm getting process metrics — I'm asking the host, running something like the ps command and pulling process metrics — do I attach those metrics to a host resource, where the process ID is a label on a particular metric stream? Or do I attach them to a process resource, where the process ID is considered part of that resource identity and I don't have the process ID as a label on the metric?
I
Same — yeah, I don't think so. But in any case, what we're trying to do is be consistent and come up with a consistent set of rules for everyone to follow. If you're following the existing convention — all of these metrics are attached to a host resource, and the process ID is part of the time series as a label — then I think everything's fine here, because that's the approach we're taking with the initial proposal.
I
However, I do expect us to bikeshed that specific nuance over and over, around in circles, for a very long time, and I don't want you to be blocked by it. Because semantically — because in practice you could model it either way, and they're transformable between each other — it kind of doesn't matter. So just stick with what we have now, we'll describe why, and we can deal with the fallout later — that's basically what I'm suggesting — as long as you know there could be breaking changes coming.
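I's point that the two modelings are transformable between each other can be made concrete. A hypothetical sketch — dict-shaped data for illustration, not any SDK's actual types — converting between "host resource with a process.pid metric label" and "process resource with process.pid in the identity":

```python
def pid_label_to_process_resource(resource: dict, metric_labels: dict):
    """Move process.pid from the metric's labels into the resource,
    turning a host-attached stream into a process-attached one."""
    labels = dict(metric_labels)
    pid = labels.pop("process.pid")
    return {**resource, "process.pid": pid}, labels

def process_resource_to_pid_label(resource: dict, metric_labels: dict):
    """Inverse transform: demote process.pid from the resource
    identity back to a plain metric label."""
    res = dict(resource)
    pid = res.pop("process.pid")
    return res, {**metric_labels, "process.pid": pid}
```

Because the two functions are exact inverses, a back end can pick either representation and recover the other — which is why the choice is "just a default", as the discussion says.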
A
Can we have that — maybe a one-page guidance of what you just said, right, Josh? I don't know — maybe you're in the best position to write that down.
C
Can I give an example of the problem that you're describing, just to make sure we all agree on it? You could have an SDK — a Go SDK, I'm thinking of — and it's able to read /proc on Linux and identify its own process CPU usage, its own host parameters like how much memory is on this host and so on, and it could report its own metrics on that stuff.
C
We could come up with a totally different set of semantic conventions for how a host-metrics receiver on the Collector, running on the same node, is able to report exactly the same information using different resources and different metrics, right? And I don't see that as a real problem, because I might want to run an environment with no collector, where my SDK reports some metrics, and I might want to run an environment with a collector and no SDK support — and both should be valid, and both should have meaning.
I
Interesting. So yeah — but the semantic conventions are a model, right? How do I report these kinds of metrics in this scenario? I don't want to dive into it now; that's what we want that working group to be defined as, and I think I'm going to schedule that first meeting. Unfortunately, I was going to do it this week, but I had a bunch of personal stuff come up, so I'm going to schedule it for next week — the first meeting.
I
This is going to be the initial topic of discussion. We have a design doc Tigran and I are working on, which we want to give people time to review before we actually have that first working group meeting, where we'll talk through, I think, the most important first things to discuss. But basically, the goal I think we've come down on is: if something is an identity—
I
Sorry — if we define resources with good identity semantics, it actually shouldn't matter in practice whether I attach to a process identity or an SDK identity or whatever, because we can transform them in back ends easily and figure out that kind of identity transformation. We're just picking a default for how we want to do semantic conventions modeling. So there will be a default for how we model in OTel, but in practice—
A
We need to produce some sort of guidance which says what sort of semantic conventions we agree to accept for now, and what sort we agree to postpone because they are very likely to be changed in the future — and we need that guidance quickly, right? Not in three months; we need it in a couple of weeks.
H
Yeah — the working group kickoff is next week; Josh is going to schedule that meeting, so we'll have a meeting in two or three weeks, and we should use that meeting to address this issue. Meanwhile, I'm wondering — we still need the guidance, because I feel this is a very urgent thing that we need to address as soon as possible. So why don't we produce some intermediate guidance before we have that meeting in two or three weeks?
A
Yeah, the point that I'm trying to make is that we don't want to block people indefinitely — I think that's not the right thing to do. That's all I wanted to communicate: we want to unblock you guys one way or another. Either we say it's acceptable or we say it's unacceptable, but we do that reasonably quickly, in a matter of, let's say, a couple of weeks — not blocking indefinitely. That's all I wanted to say, yeah.
G
Yes, thank you very much, Tigran. Thank you, jmacd. This all sounds good; I'm not sure what meeting you are talking about with Joshua — it's not scheduled.
I
It'll show up on the calendar and we'll advertise it in Slack. So if you're on Slack, in the specification working group channel, we'll advertise the time there. It's either going to be on a Monday or on a Friday. What time zone are you in, by the way? Because there is a Doodle you can still fill out.
G
Yeah, I think I see it. I'm in Europe, in Warsaw, so it's Central European Time.
H
One thought for Josh: I feel there's a topic we should discuss in the very first, or maybe the second, meeting. There's the Elastic Common Schema, and we're looking at whether we could align some effort here. For things that are very common, like system uptime — if you look at ECS, they already have that defined, for many years, and it's being used. So there's a meta question there.
I
Yes, that's a great question, yeah — I'll add that to the docket.
D
And by the way, out of curiosity — according to the discussion we had, maybe this one is not blocked, correct? But anyway, just an idea, because this is attached to the entire system. Anyway, let's continue discussing offline. Okay, next one: Bruno — possibility to release spec version 1.13.1. Yeah, Bruno?
J
So let me relay the request from the Eclipse MicroProfile folks. We submitted a request to add the property that would enable and disable the SDK — that was a requirement that that group came to — and it took a while to merge. Now it's merged and it's solved, but the spec was released recently [without it], and we need it to be released before the upcoming MicroProfile release, which will happen around October/November.
J
So this is to ask you guys if we can release 1.13.1 with whatever we have up until now, or something like that.
D
So that's the plan. If there was something to delay that, then we could consider doing a 1.13.1, I think.
A
No, I understand that, but you're suggesting a patch release, and patch releases are for bug fixes. This is not a bug — this is a new environment variable; it's a new feature.
D
Yes, thanks very much. Okay, in that case I can probably prepare the release on Monday — next Monday — and let's see how that goes. So basically we will have like a live review next Tuesday in the call; hopefully that works. Sounds good, sweet. Okay, two more, from Josh MacDonald. Please go ahead.
C
Right, so this first one is something I'd like to get into the next release, frankly, so I want to bring attention to it. There's a meta comment on this PR here — the one we're talking about, it's been reopened, #2838 — and the observation I have is that I think this review strung on for too long. It made it so that reviewers lost context and couldn't really see what they were reviewing. This happened to me as well.
C
The original PR listed some options that I believe we discussed in this meeting two months ago. By the time I got to really scrutinizing this PR, I had forgotten that there were options, and I was only looking at it from the frame of mind of "is this a correct change to the specification?" — which looked good — but I kept being confused by the options, I'll just admit it. But here's what I saw when I came to catch up with this PR.
C
It does have some approvals. I think people are approving this because they don't care and they want to get it done, but there are still options, and I'd like us to take the best one. When I finally got to understanding what I just said, I started to feel in line with the other two commenters: that the other option might be better. They also don't want to make work for David.
C
But honestly, this PR is so close to the other option that it doesn't take much to change, and I want to discuss whether everyone else experienced what I did — that they aren't quite sure what the other option was. If so, we should hopefully ask David to do it one more time; I even looked at forking his PR — copying it myself and drafting it — because it didn't look too hard.
C
The idea is — if you now scroll down to the bottom of this, Carlos — I made some ASCII art there. The point is that the top diagram is what's in David's PR currently, and the bottom diagram is how I see the alternative.
C
Bogdan was on record saying he liked the alternative, and Aaron Clawson, a Go maintainer, is also on record as saying that. Once I finally understood it, I actually think that the second option — the alternative option — is actually pretty good. It leaves this complication about the bridge out of the main SDK, and the meter provider is not talking about bridges.
C
That's how I see it, and I still think it leaves us with a cleaner spec, and it doesn't make a big change for David. I just want to call that out. I think we should review the alternative, and I also think we should move quickly, because this is blocking a bunch of stuff with OTel Go and the Collector. Would anyone like to speak on that?
K
I have a question, just to make sure we understand each other on the alternative: you are saying that the metric reader has an interface where it exports this export data?
C
I would call it a processor if it was in the Collector. I want to call it an append-only processor, because I don't want it to muck around with data that I've already exposed — I just want it to say "here, you can add scopes as you wish". I would even be willing to say it's not the responsibility of anyone to validate that there are no conflicts; let the recipient deal with that. I would be happier—
C
I think we're trying to simplify our specification right now, and the reason why option one — the PR in front of us — is a little more complicated than I think it has to be, is that we're requiring error handling, and it is going to create a harder implementation. Basically, this seems to be the slightly easier path. Okay, but let's—
I
Let's talk about the user experience here. I mean, come back to what this is targeted at, which is, say, getting rid of OpenCensus — a deprecation path for OpenCensus users, right? If you're requiring every single metric reader to be overridden, there's not a great automatic path to do that. That's really ugly to set up, or impractical in some cases — for auto-instrumentation, for example — to actually do in practice. Secondarily, the notion of the resource being shared is an important aspect.
I
Resource is not exposed today in an easy way — it's somewhat hard to get access to the resource. So that means some sort of weird interdependency with this overriding metric reader, because we don't give the metric reader the resource. Are you going to change that, so this thing can have access to it? I think the notion of sharing resource is the important part of this bridge.
C
When you append scopes, you don't even get to say the resource — you just give me more scopes. And okay, there's some enforcement mechanism, you're saying: if the raw data is passing through, do they have an opportunity to change the resource? I would still expect a single resource to come through to the exporter; the exporter does not expect to see multiple resources.

C
I feel like we're describing an unexpected corner case, essentially.
C
To me, the implementation of metric reader is the detail here — every SDK can do it differently. The reader was provided as a configuration mechanism, and we added a single method, called collect, to get the data. I don't see why you can't chain them to add data without it.
I
We haven't specified this to function correctly, and the implementations vary based on language, so it will actually require more specification work to put it into the reader — to make sure that certain things are upheld. And the most important thing, I think, is the user experience of someone trying to use one of these bridges.
C
I want to talk about user experience separately from the question about whether metric reader is the wrong interface, because Java created a loophole — one we've discussed in the prior PR — that makes absolutely no sense to me, and I don't want to go complicating our specification because Java exposed resource to the reader in the wrong way. But let's pause that conversation for a moment, because I'm okay letting Java do whatever it wants as long as it meets the specification — so I don't want to write specification to correct what Java did.
C
The usability issue could be corrected with a helper method — something like saying "this SDK setup will have the following metric producers" (I want to call them processors, since that's effectively what they are): here are some processors, they will be added to every pipeline, which means every reader will automatically have processors attached that inject data, as a metric producer would. That's the preference I have.
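C's preferred shape — producers registered once and attached to every reader's pipeline, contributing append-only data at collection time — might be sketched like this (hypothetical names and shapes for illustration, not the spec's final API):

```python
class MetricProducer:
    """Anything that can contribute extra metrics at collection time,
    e.g. an OpenCensus bridge. Append-only: it never mutates SDK data."""
    def produce(self):
        return []

class MetricReader:
    def __init__(self):
        self.producers = []
        self.sdk_collect = lambda: []  # wired up by the SDK

    def collect(self):
        # SDK metrics plus whatever each registered producer appends.
        data = list(self.sdk_collect())
        for producer in self.producers:
            data.extend(producer.produce())
        return data

def setup_sdk(readers, producers):
    """Attach every producer to every reader's pipeline, so each reader
    automatically sees bridge data without per-reader overriding."""
    for reader in readers:
        reader.producers.extend(producers)
    return readers
```

The design point this illustrates is that a user adds the bridge in one place (setup) rather than overriding each reader, which is the usability concern I raised earlier in the discussion.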
I
What I'm saying, Josh, is that we've never specified this type — the way that metrics are collected. For traces, we specified what a read/write span looks like, what a span looks like; for metrics, we never specified any type in the specification. It is up to the implementations to do whatever the hell they want. So if we wanted to go with that route, that means we need to fully specify what these types look like in some fashion, or be a little bit more prescriptive here, to answer that in the spec.
C
I see — I guess I see what you're saying. I didn't think it was missing from the spec, and I mean, I guess I agree this points to a weakness in our resource specification.
I
That's fine! So if we go look at the implementations of metric reader and metric producer, and the types that are passed around — okay, maybe Go allows passing the resource separately from the data, but that's not true in every language, and we have stable releases of this, because we didn't specify that it has to be passed separately.
I
So the way Java looks — and I guess Jack can correct me if I'm wrong here — there's a flattening of the data model into a single kind of entity. There's this resource-metric that has a resource, a scope and a metric attached to it, and you get a collection of these things.
E
Okay, yeah. So that's why I'm a little confused: that is very similar to what we do in Go as well, and so I wonder — I just worry that in this conversation we're talking past each other about what the structure is going to look like in the—
I
Well, what I'm suggesting is — let's talk about requirements for what this thing needs to do. The goal is, let's say I have an OpenCensus metric system. Requirement number one is: all of the metrics that come from it should attach to the same resource as OpenTelemetry.
I
Goal number two is: we want this to be easy to configure — an addendum to what OpenTelemetry metrics provide. The idea being: if we want to deprecate OpenCensus and move to OpenTelemetry, I add this bridge, and then I can slowly remove my OpenCensus metrics instrumentation and replace it with OpenTelemetry without losing metrics, so I can go through piecemeal and change—
I
—my instrumentation, and everything should work. Now, to actually make that work in the specification, with metric producer, meter provider, all that kind of stuff — what we proposed for the OpenCensus bridge for tracing, which was already approved, was this notion of: first, add the OpenTelemetry bridge to my application, which means I now use the OpenTelemetry exporters with OpenCensus data. We have this thing where context is actually shared between the two libraries, and traces—
I
—you know, go from one to OpenTelemetry processing. For OpenCensus metrics we can't do that, for a variety of reasons: basically, the view specification and the instrument specification are significantly different, to the point where it's impossible. So what we're suggesting here is that we get the best possible benefit: okay, we use all of your OpenTelemetry configuration for exporting; we just have to use the actual aggregate metric store from OpenCensus, because of that breakage in the model. But the idea being, from a user standpoint: I add this bridge; all of my configuration for OpenTelemetry doesn't change, but I might see some breaking changes going from the old OpenCensus exporters to the OpenTelemetry exporters, because of some changes in functionality and some changes in resource — and that's a one-time breaking change that I make. Then, after that, I start slowly re-instrumenting my metrics, and this is the migration path. This is what we're looking for from this particular feature — not to lose sight of that.
I
You
know
this
isn't
just
about
open
census,
but
it
is
primarily
about
it
and
and
that
path
right.
So
how
do
I
take
this
other
storage
of
memory
and
attach
it
to
open
Telemetry
such
that
the
export
path,
Remains
the
Same,
and
so
this
whole,
like,
oh
just
you
know,
add
it
in
a
thousand
bajillion
places,
I'm
very
much
against,
because
it
goes
against
that
open
census.
Migration
philosophy
that
we
proposed
right
and
I
don't
see
the
benefit
to
users
either.
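[Editor's note] The migration path just described can be sketched in Go with entirely hypothetical toy types; this is not the real OpenTelemetry API, just an illustration of "add the bridge, share one export path, re-instrument piecemeal, then remove the bridge":

```go
package main

import "fmt"

// Metric is a minimal stand-in for an exported data point.
type Metric struct {
	Scope string // instrumentation scope that produced it
	Name  string
}

// Producer is any source of metrics: the SDK's own meters, or a bridge.
type Producer interface {
	Produce() []Metric
}

// otelMeter stands in for native OpenTelemetry instrumentation.
type otelMeter struct{}

func (otelMeter) Produce() []Metric {
	return []Metric{{Scope: "otel", Name: "http.server.duration"}}
}

// censusBridge stands in for the OpenCensus bridge: it reads metrics
// already aggregated by OpenCensus and re-emits them in OTel form.
type censusBridge struct{}

func (censusBridge) Produce() []Metric {
	return []Metric{{Scope: "opencensus", Name: "grpc.io/client/roundtrip_latency"}}
}

// Export collects from every producer and sends everything through the
// one shared, OTel-configured export path, under one Resource.
func Export(producers ...Producer) []Metric {
	var all []Metric
	for _, p := range producers {
		all = append(all, p.Produce()...)
	}
	return all
}

func main() {
	// Step 1 of the migration: add the bridge next to existing meters.
	// Both sets of metrics now share the same exporter and Resource.
	for _, m := range Export(otelMeter{}, censusBridge{}) {
		fmt.Printf("%s: %s\n", m.Scope, m.Name)
	}
	// Step 2 (later): re-instrument piecemeal, then drop censusBridge{}.
}
```

The point of the sketch is that removing OpenCensus later is a one-line change to the producer list; the export configuration never changes again.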
E
So, Josh, I would be surprised if anyone on the call disagreed with a lot of those points you just made. I think fundamentally everyone agrees that all the data from OpenCensus needs to show up under the same Resource. I think the user story of having that come in via a migration path that is forward-facing is also agreed upon.
E
I saw this PR as the ability to do that, but if you see something that is contrary to that, I definitely want to see it. This is what I'm saying: I don't think I understand how this PR is contrary to being able to produce that, so I want to understand that a little better.
C
Go ahead. I was trying to help get this resolved, and I've stated my opinions. I'm glad to hear a strong opposition, and I think at this point we actually still have a debate even if I stepped back; Bogdan had opinions about option two. I'm interested in getting this solved. I will approve the current PR; 2838 is fine with me. I just think it's a little harder to implement, Josh. I do hear the usability issue. I did say something like a metric processor that gets stapled onto every export pipeline; I think that solves the usability issue. I want to step back and let others speak, but we should resolve this.
K
So, I am the one who opposes a bit. Josh, can you explain to me, or maybe I'm not understanding: you are okay with this current proposal, which you said does not make a weirdly named MetricProducer, but you are not able to produce the entire metrics? I mean, you are producing only the scope and the metrics, not the Resource.
I
So, Bogdan, I would have agreed with you, except the direction we took with metrics is that there is an odd, inverse relationship between reader and provider, where the reader can determine default aggregation with the provider, and therefore this MetricProducer thing, what I would call a bridge MetricProducer, cannot have that interaction, or it's not obvious whether that interaction is possible.
C
That's actually how I implemented it in my prototype: each meter produces a single scope and a list of metrics, and the SDK's job is to call all the meters and get one scope from each. My original concept for this MetricProducer was like the PR in front of us: they're all just called producers, and you get one more scope from each of them.
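[Editor's note] A rough Go sketch of the collection model just described, using hypothetical types rather than the real SDK: collection yields one ScopeMetrics per SDK meter, plus one per registered producer.

```go
package main

import "fmt"

// ScopeMetrics is one instrumentation scope plus its metrics.
type ScopeMetrics struct {
	Scope   string
	Metrics []string
}

// meter is the SDK's own per-scope metric storage.
type meter struct {
	scope string
	names []string
}

func (m meter) collect() ScopeMetrics {
	return ScopeMetrics{Scope: m.scope, Metrics: m.names}
}

// Producer yields one additional scope of externally aggregated metrics,
// for example from an OpenCensus bridge.
type Producer interface {
	Produce() ScopeMetrics
}

// Reader drives collection: one ScopeMetrics per SDK meter, plus one per
// registered producer, all in a single batch.
type Reader struct {
	meters    []meter
	producers []Producer
}

func (r Reader) Collect() []ScopeMetrics {
	var out []ScopeMetrics
	for _, m := range r.meters {
		out = append(out, m.collect())
	}
	for _, p := range r.producers {
		out = append(out, p.Produce())
	}
	return out
}

type bridge struct{}

func (bridge) Produce() ScopeMetrics {
	return ScopeMetrics{Scope: "opencensus-bridge", Metrics: []string{"oc.latency"}}
}

func main() {
	r := Reader{
		meters:    []meter{{scope: "app", names: []string{"requests"}}},
		producers: []Producer{bridge{}},
	}
	for _, sm := range r.Collect() {
		fmt.Println(sm.Scope, sm.Metrics)
	}
}
```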
I
So where do I hook this additional set of metrics that will attach to every metric reader from that SDK? I have to do it either at the meter provider, or at something that knows when all metric readers are registered, which is still the meter provider, right. So no matter what, I'm going to have a call to the meter provider that will be aware of this thing, even if I register it in every single metric reader.
K
I see what you're saying. Okay, okay, I see what you are saying: you want to achieve another thing, which is, if I register multiple metric readers to the same provider, you want all of them to see the bridge metrics without having users think about there being a bridge. I mean, you want to decouple the registration of the readers from the bridge.
I
Yeah, this is that story for OpenCensus migration: first you take this bridge hit, where you're going to have a breaking change where suddenly you're using OTel resources and exporters, but both sets of metrics show up. Then I slowly remove my OpenCensus instrumentation and get OpenTelemetry metrics, and so my metrics slowly shift from one scope to another across the bridge. But the idea is I'm using the OTel exporter specification now, and no OpenCensus things whatsoever for export.
K
Then, if we do this model that David proposed, I start now to understand the motivations and kind of like it, with only one exception: I think MetricProducer is kind of a wrong name. So what we are saying is that the data coming from this is a scope plus the metrics associated with that scope, correct?
K
That's the output of this interface, which to me looks very similar to the meter SDK, like the meter itself. So I think if we come up with an interface, the SDK meter should implement the same interface, and from the perspective of the provider we should have just a list of these interfaces, some of which are SDK meters and some of which are bridges, but all of them are treated the same way.
K
I think here is where I probably have a shortcut in my mind, which is: we are making these two look alike, like a meter producer, but not all the things that apply to one apply to the other. That's why, for me, I want to see it a different way, because I don't want to connect them with the views at all. Views are part of this metric reader combination with the providers. That's why I said what I said.
K
If I see this as a producer, this combination of those two, and I have a way to see this as a producer, then I completely decouple the views from these. Now, if we are attaching it to the provider, my problem is, and here I have a shortcut in my mind: when does a view apply or not apply to these metrics? What can I configure and what can I not configure for this? This is where I have trouble understanding. Does it make sense to you?
I
Yeah, but the thing is, in our specification we've coupled the two, right? I can't produce a metric export pipeline without attaching it to a meter provider. So if you give me a solution to that, then I agree with you, but unfortunately we have coupled those two; they're the same thing. Maybe we decouple them; then propose something that decouples them. I haven't seen a proposal that effectively decouples them in a way that I think is actually any different from what's being brought by David here.
I
We've discussed it, but when we think about that, think about it in the context of how I configure the metrics SDK and what it looks like to a user. If you give me a syntax, in Go, Java, or JavaScript, of what it would look like, that effectively achieves the thing we want: I add the bridge, and all of my exporters are shared.
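[Editor's note] One hypothetical answer to that ask, sketched in Go with made-up types (not any real SDK API): the bridge is registered once on the provider, and every reader, including ones added later, sees its metrics without per-reader wiring.

```go
package main

import "fmt"

type Metrics []string

// Producer is the bridge-facing interface.
type Producer interface{ Produce() Metrics }

type Reader struct{ name string }

// MeterProvider owns both readers and bridge producers, so the two are
// registered independently of each other.
type MeterProvider struct {
	readers   []*Reader
	producers []Producer
}

func (p *MeterProvider) RegisterReader(r *Reader)    { p.readers = append(p.readers, r) }
func (p *MeterProvider) RegisterProducer(b Producer) { p.producers = append(p.producers, b) }

// CollectAll shows that each reader's batch includes bridge metrics
// without the reader knowing the bridge exists.
func (p *MeterProvider) CollectAll() map[string]Metrics {
	out := map[string]Metrics{}
	for _, r := range p.readers {
		var m Metrics
		m = append(m, "otel.requests") // the SDK's own metrics
		for _, b := range p.producers {
			m = append(m, b.Produce()...)
		}
		out[r.name] = m
	}
	return out
}

type censusBridge struct{}

func (censusBridge) Produce() Metrics { return Metrics{"oc.latency"} }

func main() {
	p := &MeterProvider{}
	p.RegisterReader(&Reader{name: "prometheus"})
	p.RegisterReader(&Reader{name: "otlp"})
	p.RegisterProducer(censusBridge{}) // one line; decoupled from readers
	for name, m := range p.CollectAll() {
		fmt.Println(name, m)
	}
}
```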
I
Because, (a) I think we can clearly document it, (b) I think that's a one-time confusion, and (c) it helps motivate deprecating: we want to get people off of OpenCensus, but we want to give them a very easy path to do so, and, you know, fundamentally they're going to be controlling their views in OpenCensus while they're migrating.
I
To avoid confusion, okay, we went back and forth a whole bunch; the summary is that the only time you're going to use this interface is if you can't get access to events or the appropriate callback, if you can't use our raw instruments, which is the case for OpenCensus today.
I
So that's why I'm not as worried about those confusing users, because I think in all cases there's a technical limitation where you're going to have to use the in-process aggregation defined by the underlying system until that instrumentation is rewritten, and in almost all cases I can imagine, that underlying system owns the aggregation. So, okay, anyway.