From YouTube: 2021-06-29 meeting
C
Is it okay? Okay, please add yourself to the agenda. Thank you; it's added to the agenda, and in one minute we'll start.
C
Okay, let's start. Thank you for joining. We have only a pair of items on the agenda. We usually spend about five minutes triaging the new issues, but I was taking a look at the new issues and everything is triaged so far, so we can go directly to the items. First of all, John Watson's bi-weekly reminder.
D
Yeah, as I keep telling people every other week, and no one shows up except me: there is an Asia-Pacific-friendly spec meeting on Tuesday afternoons every other week, and I will definitely say that the Asia-Pacific people are feeling like they have no voice, and that the community is fairly actively hostile to them, because no one shows up to this meeting.
C
Yes, somebody from the Ruby community jumped in and left some notes, so we definitely need to follow up on that one. So please, please do so.
F
Please, yeah. I was trying to upgrade our libraries today to the new 1.0 RC, and I noticed in the project status that development on metrics has been paused. One of the reasons they're citing there, and that's why I'm bringing it up here, is that the specification is still not finalized for metrics. I'm just trying to understand, long term, what's happening; we've been diligently trying to keep our libraries up to date since last June.
B
Hi, so I have an answer. Awesome. I've been working with the OTel metrics SIG on specs and stuff, and I was the primary author of the existing prototype, and we learned a lot from it. It has some issues, mainly complications caused by old names, as well as some stuff that just never needed to be done and was a bit exploratory. I volunteered to produce another API prototype that matches the current OTEPs, as Riley has been guiding everyone in the other repositories.
B
I have a rough draft right now and I'd like to show it to you; I'm going to put a link in the notes here in a minute. It is just an API sketch at this point. It's not connected to the existing SDK, but my hypothesis is that the existing SDK is fairly mature and doesn't really need to change at all to accommodate the new API.
B
Rather than forcing you to change it a bunch. So if you've finished the change to the 1.0 RC, hang tight; there's something new coming, is what I have to say. Excellent, thanks, Josh. You're welcome.
G
A bit more context: as of the 1.0 RC we've split the versioning for tracing and metrics, so you should be able to continue to update the 1.0 packages without them having any dependence on the metrics. So if you want to stay at metrics 0.21, as Josh is advising, then you should still be able to update traces to 1.0 RC2 or the final release.
G
When we get there, we've tried to ensure that there are no dependencies flowing from the stable packages to the unstable metrics packages. I think we should update the note in the project status fairly soon to indicate that we're back to development on metrics, but that is still our current state: we're finishing up traces, and metrics is waiting for the API to give us the stability to build against.
B
I'll just add that, no, I'm not going to force anything; no one's going to say that whatever Josh does is what we're going to accept. I will make a proposal that I hope the Go community likes. The previous API proposals were entirely about learning the data model, getting to understand what OpenCensus compatibility meant, and getting to understand Prometheus compatibility, and I just don't want anyone to look at that as if it's what's being proposed today.
H
Sorry, so what Anthony just said about the versioning, that the tracing and the metrics versions will be split: will that be true for every language implementation?
B
That's just because of the way the Go package manager works; it was going to work for the Go project. I would say that other languages might need to choose different approaches, based on how dependency injection works or how release management works in those languages.
H
Yeah, it's just weird that the different language implementations will use different versioning strategies. I would encourage every language implementation to do the same, because to me it's still very, very weird to have a 1.0 of something which is not stable.
G
I
G
Every
language
should
have
a
versioning
policy
in
their
repository
that
outlines
how
they're
handling
that
you
can
look
at
what
we've
got
in
the
gosig
that
describes
how
we've
split
up
the
modules
to
enable
us
to
do
the
the
decomposition
of
stable
packages
versus
unstable
packages
indicating
those
with
the
semver
version.
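As a rough sketch of how a semver-based policy like the one described above can distinguish stable from unstable modules (the rule shown, treating 0.x majors and pre-release suffixes as unstable, is a generic semver reading, not the Go SIG's exact policy):

```go
package main

import (
	"fmt"
	"strings"
)

// isStableVersion reports whether a semver string denotes a stable
// release: major version >= 1 and no pre-release suffix like "-RC2".
func isStableVersion(v string) bool {
	v = strings.TrimPrefix(v, "v")
	// A pre-release suffix (e.g. "1.0.0-RC2") means unstable.
	if strings.Contains(v, "-") {
		return false
	}
	major := strings.SplitN(v, ".", 2)[0]
	return major != "0"
}

func main() {
	for _, v := range []string{"v1.0.0", "v1.0.0-RC2", "v0.21.0"} {
		fmt.Printf("%s stable: %v\n", v, isStableVersion(v))
	}
}
```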
C
Yeah, the ones that are using 1.0 should have some labels saying unstable, or alpha, or beta, or something, even if they have 1.0. If not, then that's probably an error and we should fix it, but I don't know which language is doing this bad thing; I don't think any of them is. It's just, you know, 1.0 plus the label.
E
The question is, within the API packages and within the SDK packages, it's ideal that they all have the same version number, so you understand which packages work together with each other, because OpenTelemetry doesn't mix and match within the API and SDK. It's not "I'm going to use this part of the API at this version with this part of the SDK at this other version."
E
The end user might have different versions of the API spread around, but there's just a certain point where we need a certain amount of cohesiveness. And since OpenTelemetry is going to continue to add more and more signal types over time (there's metrics, there's logs, there'll be interest in adding profiling, and so on and so forth), there's always going to be something that's experimental.
E
We have to let coherency take some amount of space. Does that make sense?
H
Yeah, I got the original intention too, but to me, what you just said, that the versions should match so that users will know which versions are compatible with which: that's what a bill of materials is for. You can have another way to reach it, and to me that's more secure, or better, because it is a solution for this exact problem.
D
Yeah, sorry, I assume you're speaking about Java, and I would say: if you would like to come in and help and contribute and, you know, make the project better, we would love contributions. Right now we simply don't have the resources to do a very complicated versioning scheme inside the project, and we chose the thing that the maintainers were able to do and move forward with, without it causing an enormous amount of pain. So I apologize if it's confusing, but we only have so much bandwidth.
J
Oh, sorry, I want to call out: in Java it's complicated because there are multiple dependency management systems and they don't behave similarly. We actually had a bug that was introduced only by Gradle and not by Maven, for the same BOM. It's wonderful. So I just want to call out that, specifically in Java, it's really complicated because there are subtle differences in dependency management.
G
Yeah, I think in Go we were only able to accomplish it because Go modules have a particular way of working with versioning, and we actually had one of the AWS interns this year build us some custom tooling for managing the release and tagging of multiple modules, so that it was actually a tractable thing for us as maintainers to keep these versions separate. That's part of what led, as well, to the somewhat delayed release of a 1.0 RC for Go.
G
I think we could have been in a place with tracing where we were able to say: okay, if we're going to push it all out, we'll push it all out; a couple of months ago tracing was relatively stable. But keeping metrics cleanly separated, being able to version them separately and release them separately, was important to us, and so we took the extra time to get that done. That's the choice that we made as the Go SIG; other SIGs are obviously free to make other choices.
I
One other thing, Jonathan: Ted brought up a great point. Correct me if I'm wrong, but I think there's a part of the specification, it might be an OTEP, outlining the whole project's versioning policy as a whole, and guidelines for each SIG, and that's where each SIG made a lot of the determinations that they did. So it might be helpful to review that, just to get a little bit more context for the whole project's goals there.
E
Yeah, you know, we're still trying to get over this major hump of getting metrics and logs stable, and I kind of feel like, once we reach that point with metrics...
E
It seems like it's on track to be stable by end of year, generally speaking; not fully implemented in every language. But I kind of feel like once we're at that stage, some of these issues will turn out to be somewhat temporary. We will continue to have experimental features in OpenTelemetry.
E
The main thing that the SIGs are focusing on with experimental versus stable is trying to ensure that they have a process for developing experimental features in a way that doesn't destabilize the stable stuff. There's a fair amount of architectural effort that had to go in to ensure that, while we're working on something like metrics, or profiling, something new, we're not degrading the support guarantees we've already put in for tracing. So that's kind of where people are at, but once we've, you know, gotten over the hump of these major signals being in there and stable...
J
I guess I'll start by saying that over the last two meetings we merged the metrics data model and the metrics API, and I think that's totally the right thing to do at this point. Josh MacDonald has been awesome on all things histogram, and the new histograms: OpenHistogram is making progress, and OpenTelemetry needs to make a really important decision there. So please, please, please provide input.
B
I was going to talk about the histogram question in the next hour, but: there is a new issue up called the "histogram referendum." It's basically saying that we completed an OTEP where we did a lot of technical research into histograms, and came up with essentially two kind-of-good answers that are incompatible with each other: the OpenHistogram proposal or an exponential proposal. And then, coming in from the side, HDR Histogram has some interest in being included in the proposal.
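For context on what an "exponential" proposal means, here is a minimal sketch of base-2 exponential bucketing (a generic illustration of the idea, not the exact scheme from the OTEP under discussion):

```go
package main

import (
	"fmt"
	"math"
)

// bucketIndex maps a positive value to a bucket whose boundaries are
// powers of 2^(2^-scale). At scale 0 the buckets are plain powers of two;
// each +1 in scale doubles the resolution.
func bucketIndex(value float64, scale int) int {
	return int(math.Floor(math.Log2(value) * math.Exp2(float64(scale))))
}

func main() {
	// At scale 0, values in [4, 8) land in bucket 2, [64, 128) in bucket 6.
	fmt.Println(bucketIndex(5, 0), bucketIndex(100, 0))
	// At scale 1, each scale-0 bucket is split in two.
	fmt.Println(bucketIndex(5, 1))
}
```

The appeal of such a scheme is that merging two histograms only ever requires halving the finer one's scale, which is part of why it competes with fixed-layout formats.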
B
It's a little bit more complicated than that, and at the moment there's a great big discussion happening. I think it's a little too early to call this discussion, but there is a pretty big decision to make and I'm not sure how to make it. I don't think we should dictate here; I don't feel confident doing that. The community should decide.
B
My hypothesis is that it's too expensive for the whole community, vendors included, to implement a bunch of different histogram formats. Moreover, if we do, we just end up with all kinds of inaccuracies and loss of fidelity that's going to be really hard to recapture. So it'll be good if we can choose one format and stick to it, but there's going to be a contentious outcome no matter what. I'll put a link in there if nobody has already. Why doesn't Riley talk about PR 1730?
I
Maybe before Riley does, can I jump in about the two meetings merging? Is there a plan to remove one of those meetings, or is the plan still to just keep two metrics meetings a week?
J
From my standpoint, I think we should drop down to one, or zero, when we no longer need as many meetings. Just given the amount of contentious topics and the speed at which we need to move, I think two meetings is a little helpful for walking through topics, as long as each meeting is useful. If you start to find the meetings not useful, escalate; please, let's get rid of one of these.
A
Yeah, so on the SDK spec, I put a link on the PR. The View PR is taking a much longer time. I think one challenge is that we have several stakeholders, and it's a little bit hard to make sure everyone shows up in the meeting. Now that folks are coming back from vacation, or finishing moving, things like that, I expect we'll be able to catch up quickly. But with that, I think the timeline...
A
These are the only couple of issues that we need to hammer out in the SDK spec before we release an experimental version, and I think the scope is accurate. I believe there are not a lot of open topics, so once we finish the View, I think the remaining issues are the exporter and the aggregator. For the aggregator, my current plan is just a set of flags, instead of allowing people to customize aggregators.
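As an illustration of the "set of flags" idea, a fixed enumeration of aggregations could replace pluggable custom aggregators (every type and name below is hypothetical, not the actual SDK spec):

```go
package main

import "fmt"

// AggregationFlag is a hypothetical flag choosing one of a fixed set
// of aggregations, rather than accepting arbitrary custom aggregators.
type AggregationFlag int

const (
	AggregationSum AggregationFlag = iota
	AggregationLastValue
	AggregationHistogram
)

// aggregatorName resolves a flag to the aggregation the SDK would apply.
func aggregatorName(f AggregationFlag) string {
	switch f {
	case AggregationSum:
		return "sum"
	case AggregationLastValue:
		return "last-value"
	case AggregationHistogram:
		return "histogram"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(aggregatorName(AggregationHistogram))
}
```

The trade-off being discussed is exactly this: a closed set of flags is easier to specify and keep stable than an open aggregator interface.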
C
Great, thank you so much for that. Do you want to give some more updates? Any more questions on this? Anybody? I guess not, so we can move to the next item. Yeah, I did a small PR there; well, if you don't mind, it's a PR trying to add an option to limit the length of attribute values, for both traces and metrics.
C
It has been sitting there for quite a long time and it has enough approvals, but since it touches the metrics part as well, I would like to get one more approval from somebody involved in the metrics part. So please, please check that out, because, as I said, this PR has been waiting forever, so we want to merge it as soon as possible, or review it, or do something about it.
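For illustration, limiting attribute value length usually amounts to something like the following (a sketch, not the actual PR's code; it truncates by runes so multi-byte characters aren't split):

```go
package main

import "fmt"

// truncate returns s cut to at most limit runes; a negative limit
// means "no limit", mirroring how such options are often defaulted.
func truncate(s string, limit int) string {
	if limit < 0 {
		return s
	}
	r := []rune(s)
	if len(r) <= limit {
		return s
	}
	return string(r[:limit])
}

func main() {
	fmt.Println(truncate("hello world", 5)) // prints "hello"
	fmt.Println(truncate("short", 100))     // unchanged
}
```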
J
I want to have a really focused discussion on this, and I don't think there's enough time for people to read up on the issues and think it through, so I just wanted to give everyone a heads-up; I want to talk about it next week, effectively. I think we have an issue with metric identity, high-cardinality databases, and resource semantic conventions, and the most egregious one is the process resource semantic conventions, where the command-line args show up as a resource attribute.
J
So I basically call out: I think that if we frame this as a use case of which resource labels should go into a Prometheus database, it simplifies the whole question of what we should do to solve that use case, as opposed to thinking generically about, you know, should resources have identifying and descriptive terms. I don't care.
J
What I want is: here's the use case that I think motivates all of this discussion, and I wanted people to read up on it, and get a handle on the problem and how it's different between metrics and logs. Fundamentally, what I'm positing is that I think there's a different use case for resources for metrics than for the other signals, and we should support both, and we should do so in a way that's not hella confusing to people. That's hard to get right, and that's the TL;DR. So please think about it.
J
Please read the description, and hopefully we can have a really good discussion next week. Is it okay if it's just kind of confusing? Sorry. It's okay if, you know, a lot of people get it right and a few people get it wrong and they're kind of confused; but if everybody has to pay attention to it and it's confusing everybody, I think that's bad, right?
J
I threw together a spectrum of design space to look into, and if we can narrow it down to one that we would like to investigate, I'm willing to dive into it. The one I'm leaning towards is, effectively: when we define a semantic convention...
J
...for general users, anything that they throw on falls into one of those two categories kind of implicitly, and we leverage it in that fashion, in a way that they would most likely expect. So basically there's this thing that we reserve for us as instrumentation authors, that we use to do the right thing on the user's behalf; and when the user touches it, they get the same consistent behavior on everything, and they kind of don't need to interact with this bit.
B
I think I understood you, based on my understanding of this question. I look forward to this discussion; I will add my thoughts into the topic there. I totally expect a giant blog. I'm excited. I just want to make sure that the Prometheus use case is really well supported in our data model, which is that you have certain identifying attributes that are written in somehow, and then everything else gets attached somewhere downstream; and as long as you make sure your identifying attributes are clear...
B
...you can do that. And I'm hoping that we come up with not just a descriptive and non-descriptive distinction, but one that says what is identifying, sort of a third category. The nice thing about having schemas is that we could potentially just write this into the system without changing our data structures at all: the schema tells you whether something is descriptive or identifying, so that you don't actually have to add another bit onto every single resource attribute, every single key-value attribute, and so on. I'll take it offline.
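A toy sketch of the schema idea: the classification lives in a side table keyed by attribute name, so the attributes themselves need no extra bit (the categories and most keys here are invented for illustration):

```go
package main

import "fmt"

// Category is a hypothetical classification a schema could assign.
type Category int

const (
	Descriptive Category = iota
	Identifying
)

// schema maps attribute keys to categories out-of-band, leaving the
// key-value data structures themselves unchanged.
var schema = map[string]Category{
	"service.name":          Identifying,
	"service.instance.id":   Identifying,
	"process.command_args":  Descriptive,
}

// classify looks up a key, defaulting unknown keys to Descriptive.
func classify(key string) Category {
	if c, ok := schema[key]; ok {
		return c
	}
	return Descriptive
}

func main() {
	fmt.Println(classify("service.name") == Identifying)
}
```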
E
But when we're writing instrumentation, this is just a big ball of configuration, and I'm a little concerned that we need to approach it holistically, because that's...
B
Along those lines, Ted: you know, Riley's been proposing these terms, which came from OpenCensus. So we've got the term "view" and we've got the term "hint," and they're both about configuration coming from different locations. And I think what's going to play out is that the problem Josh has referred to has got to get solved using views and hints in the metric space. Maybe that's a bigger concept; I've actually seen people talking about configuring sampling, and it starts to look a whole lot like a view to me.
E
You just hit a wall when you're starting to write instrumentation for libraries; not what end users are doing, but for libraries, where you're putting in tracing and then you're going to put in metrics, and a lot of the metrics you want are the same thing you're recording with tracing. The real straw that broke the camel's back for me is duration: people going in there and adding start and end timestamps so they can get a duration, an HTTP request duration metric. So it's starting to get a little crazy-making, and the next question everyone's going to have is: what are the labels that get put on all these things?
E
And how do I configure those labels? And, like Josh is saying, some of those labels also come from resources. So having some coherent, consistent way that we can configure these things and lay it all out: we need to provide that for people to be able to actually write and maintain instrumentation; otherwise...
E
The positive side of doing this work, I believe, is that we'll end up with a way to express this in something like a DSL that you could actually put into a configuration file, and that's something that can be reused to do some of the stuff Josh is mentioning around being able to construct these label sets, or adjust these label sets, dynamically in the collector, so you're not having to reconfigure and redeploy your apps...
E
...all the time, just to manage your label sets. So there's a lot of good stuff there, but it is going to take some design brainpower, so we're...
J
Sorry, go ahead. I want to call out Tyler's point, because I think Tyler is the voice of the user here. To the extent we provide this configuration, that's awesome; we should also try to avoid requiring users to have to provide it for just default, good usage, right? So there's this thing: I would love it if traces automatically became metrics, and the conventions we choose for attributes somehow just worked well out of the box, without any need for configuration by default. If we can't do that, I think we've already kind of failed.
J
I agree we need the configuration as well. I'm a fan of, you know, good behavior over configuration; I don't want to call it convention over configuration, because I think that turned into "no configuration" as a story, but I'm a fan of that. I just want to make sure the default works well out of the box, because I think it'll take us a while to nail configuration. Totally.
E
So this dovetails... sorry, Bogdan, I know you want to speak; I just want to clarify this. This dovetails with the instrumentation proposal, which is about precisely this. If we're giving our end users these higher-level constructs to write instrumentation, that thing can basically say: give me this data. If you're making an HTTP client span, you've got to give me this data, and I will do all of that for you.
E
I will set up all the defaults and emit all of the default stuff if you don't configure me, but I will also provide a builder so that you can configure me; and this appears to be probably the best way to do it. Just to add some metrics color: I don't actually think we need hints on the metric API.
E
I
think
we
could
eliminate
that
and
simplify
the
metrics
api,
because
at
the
end
of
the
day,
it's
these
instrumentation
objects
that
are
going
to
have
to
do
this
work
and
what
once
they're
doing
that
work?
I
don't
know
that
you
need
this
extra
layer
of
hints
coming
in
through
metrics
api,
because
they're
just
this
is
actually
just
configuration.
B
However, there seems to be a data-model distinction between attributes that are descriptive and attributes that are not descriptive, and it's great to put them into our resource conventions for all the ones that we're specifying. But if a user comes in with a new type of attribute, it may or may not be descriptive, and I think we're sort of trying to talk about whether there needs to be an API, or whether it's called a hint that's kind of a configuration at compile time, or whether it's something else. But it does seem important to me that we have it in the data model.
B
There is a presentation question for ordinary attributes too, and before we get too far, I want to talk about this. There's a proposal to do sampling, and we need this additional probability score on our data, and one way to do additional scoring on all of our data is to add a non-descriptive attribute that says "here's my sampling whatever." So there's a new category of attribute.

That category is definitely not meant to be used as a time-series distinguisher, and absolutely should be dropped before you aggregate, but it is still there to tell you something useful that you might want to know about. It's not describing the data; it's describing how the data was collected. That's what's emerging, and the reason it's emerging is that it's too expensive for us to add a whole new field, a whole new list of secondary non-descriptive attributes, to our data structures, because it's too many bytes; it's just more stuff.
B
So, right: I'm talking about something the SDK might produce, and then the consumer is supposed to know that certain types of attributes are descriptive and certain types are not. I think what Ted was getting at just now is that there will come a time when somebody who's writing...
J
Yeah, so I don't think we can declare semantic conventions stable until we figure this out, and I want to get semantic conventions marked as stable so we can start releasing stable instrumentation. The more I dove into what it takes to mark a resource convention as stable: you have to understand when you can change it, and if changing a resource semantic convention changes your metric identity...
J
Well, that means every freaking resource semantic convention is locked down to the most limited level of change that metrics allows, because metrics is the most limiting consumer of resource attributes. So what I'm trying to do is figure out... the meta-level discussion here is:
J
We
can
write
a
nice
what
what
are
allowed
changes
to
semantic
conventions
today,
if
you
were
to
say
what
are
allowed
changes
to
resource
semantic
conventions
effectively,
it's
almost
nothing
because
of
this
identity
problem
on
metrics
and
that
that's
that's
why
I'm
diving
into
it!
That's
why
I
think
it's
a
high
priority
to
figure
this
out,
but
if
we
have
a
different
way
of
figuring
out
semantic
conventions
on
resources,
that's
fine!
It's
just!
J
If
we
constantly
are
flipping
around
cardinality
on
metrics
and
metric
streams
and
labels
like,
I
think
our
instrumentation
will
only
be
as
good
as
how
stable
it
is
to
users
and
we
don't.
We
don't
have
any
way
of
not
exposing
that
kind
of
crap
to
them
right
now.
So.
J
I,
I
think,
that's
acceptable.
You
mean
for
like
looking
at
attributes
and
saying
like
this
is
primary,
and
this
is
not
yes,
I
think
that's
an
option
on
the
table.
Yeah,
that's
one
way
we
could
go
okay,.
K
I would like to have a deeper discussion on this. Do you think this is the next thing that we need to discuss during this meeting?
J
I don't think we'll accomplish anything this week besides thinking through some possible solutions and saying "here's the one that we'd like to approach." My goal with the bug is: I listed, I think, three distinct avenues of search for finding a solution, and I'd like for us to agree, as a community, on which direction we head in, in terms of how to find a solution, which one we're most comfortable with. And again, I don't even expect us to do that today.
J
I think we've had enough discussion here that people understand the problem, and I just want us to think about it and then come back next week. So I think there needs to be another, possibly 10-, 15-, 20-minute discussion of: okay, we've thought about this a bunch, here's the solution space that we're all comfortable with, and we can discuss it on the bug.
C
By the way, next Tuesday, probably some people will want to make it to that specific specification call. So, just for your information, Josh, maybe we need to...
K
Yes, I might not be here, but yeah. So probably we should keep it for two weeks from now, just to make sure everyone is here.
J
I think we can also make progress on the bug. So, by next week, if people can get their thoughts down, I'll try to collect what I think is the consensus from the comments and say "here's the space that we're going to investigate," and get some thumbs up or thumbs down by next week. Like I said, I think we need to make progress on this to get semantic conventions marked stable for resources, and I don't want metrics to be in the way of good semantic conventions for trace; that's the other...
J
My other thought, anyway. So, if that sounds like a good step forward, let's call the discussion now, because we've spent a lot of time on it. Sorry. Please comment on the bug, comment on ideas. I have three avenues of investigation; if you think of another avenue to explore, add it, and I'll try to collect consensus and mark down in the bug which avenue we, as a community, want to explore next. So, next Tuesday?
C
Perfect, thank you so much for that. Okay, I guess we can go to the next items. Ted, you have a pair of announcements.
E
Yeah, so, kicking off some new work streams: the sampling SIG. I've been threatening to start this for a while, but there are finally some resources available to make it happen. The time I picked is Thursdays at 8 a.m. Pacific; not this Thursday, but next Thursday. I'll be pinging everyone who said they were interested in sampling.
E
The two main things to look at here are Josh's sampling proposal, and Jaeger's adaptive sampling concept, which they need supported on some level in OpenTelemetry so that they can move over to OpenTelemetry from the Jaeger clients. Those are the two high-priority things we want to look at for starters in the sampling group, but if you're interested in sampling, just FYI, that's coming.
B
Every span, to us, is also a metric, and as long as we know there was no sampling, or we know exactly how much sampling there was, we can create metrics as soon as we receive a span. That may seem like a small feature, but it's a pretty big, important part of our product, and it has to do with essentially a logs use case, where you're looking at spans, you're looking at their events, and you're saying...
B
...that is a view of your spans that produces not just a stream of spans, but also a metric corresponding to that view, because we can count every span, probabilistically speaking. So that's the content in OTEP 148, and that's what we as vendors are trying to achieve. So these two things definitely fit together: the stuff that AWS is doing and what I'm trying to propose.
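The "count every span, probabilistically speaking" idea boils down to weighting each sampled span by the inverse of its sampling probability (a generic Horvitz-Thompson-style sketch, not the exact OTEP 148 mechanics):

```go
package main

import "fmt"

// adjustedCount estimates how many original spans a sampled span
// represents: 1/p for sampling probability p, and 1 when unsampled (p=1).
func adjustedCount(p float64) float64 {
	return 1 / p
}

// estimateTotal sums adjusted counts over the spans we received to
// estimate the true span count before sampling.
func estimateTotal(probs []float64) float64 {
	total := 0.0
	for _, p := range probs {
		total += adjustedCount(p)
	}
	return total
}

func main() {
	// Two spans kept at 25% sampling plus one unsampled span
	// estimate 4 + 4 + 1 = 9 original spans.
	fmt.Println(estimateTotal([]float64{0.25, 0.25, 1}))
}
```

This is why knowing "exactly how much sampling there was" matters: without the probability attached to each span, the receiver cannot recover an unbiased count.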
A
I have a small question regarding something like the scope; it might require two or three minutes. So the sampling we're talking about here is: you have a lot of spans, or a lot of data points, and some of them might already have been sampled or not, but you still want to do further sampling. There is another kind of sampling I heard about from a lot of device developers.
A
So it probably happens to be a coincidence that every time they sample the temperature, the GPU is trying to flush data to the frame buffer, so it's almost free and the temperature is low; but if the sampling, say every one second, happens to align with when the GPU is doing heavy computation, then the core temperature would be much higher. In order to fix this problem, people normally introduce some statistical chaos or something: they put some random number on the timing, instead of sampling at a fixed every-one-second...
K
I think, if I understand correctly, the solution is just to add jitter on the collection interval: it should not be exactly 60 seconds, or one second; it should be one second plus ten percent jitter, or something like that, and that's it.
B
A potential approach is that you sample that core temperature in an asynchronous callback every second, let's say, and then after one minute you output a single point chosen from within your minute. You can go pretty far with sampling, but I think there might be someone who wants to go with a time-weighted average of the temperature measured.
K
I think the reason this is different is because it's way more under our control, especially because it happens in the SDK, compared with the other one, which is more or less API-related and happens on the critical path. That's why I think Josh's problem is a bit more complicated: because we can play with different solutions on our side, in the SDK, that can solve the problem.
E
I think this group is going to be focused, at least at the beginning, on the kind of sampling issues that tracing traditionally faces, but with the mindset that we're also looking at how this might affect metrics and logs, as far as being able to sample those, or at least not screw them up in the process of sampling traces.
K
That's a very interesting point. Also, talking about sampling: right now we talk about sampling in the context of logs and, sorry, of traces, and Josh's proposal, with the sampling rate propagating through all the traces and so on.
K
It's
very
interesting
if
we
need
to
have
a
kind
of
an
api
or
something
that
says,
is
this
request
or
is
this
context
the
entire
context
sample,
which
means
which
means,
for
example,
we
may
drop?
We
may
want
to
drop
metrics
data
based
on
these
sampling
measurements
or
we
may
want
to
to
drop
logs
based
on
this
thing,
so
it's
more
or
less
a
context,
context,
property
or
a
request
property
rather
than
not
necessarily
just
a
trace
property.
I
mean
I'm
not
saying
that
this
is
should
be,
but
we
can
think
this
way.
E
Yeah, so anyway, the sampling SIG will be starting up. I'm going to try my best for it not to be a thing that all maintainers need to track or attend; just people who are interested in the sampling aspect of OpenTelemetry.
E
We
have
hit
a
matte
coloring
problem.
However,
I
just
want
to
point
out
this
is
like
an
aside,
but
we
have
three
meetings
that
come
in
the
9am
block
and
we
have
three
zooms
and
so
there's
no
zoom
link.
I
can
choose
for
this
meeting
that
does
not
ensure
that
we
bleed
into
the
zoom
call
of
one
of
the
next
groups,
which
we
all
know
tonight.
So
can
you
ask.
E
Yeah,
I
think
we've
hit
hit
the
limit
there.
So
anyway,
it's
just
an
aside.
I'm
gonna
add
another
one
of
those.
C
Oh
okay!
Yes
thank
you
for
that.
There's
the
last
item
from
daniel.
I
think
he
couldn't
attend,
but
it's
about
there's
a
pr
that
he
has
for
making
logs
use
more
types
which
are
like
for
attributes
because
for
for
traces
and
metrics,
it's
restricted
to
you
know
a
few
types
and
he
wants
to
make
it
like
to
be
able
to
use
any
type
for
logs.
C
I
don't
think
we
have
enough
people
who
are
bursted
in
logs
here.
So
I
can
just
say.
Please
take
a
look
at
that.
I'm
not
versatile!
So
far,.
E
Yeah,
just
for
clarity
does
anyone
know
is
this.
Is
this
purely
a
convenience
issue
saying
we
want
the
api
to
be
able
to
take
in
anything
and
then
convert
to
the
existing
set
of
types
that
we
have
in
our
protocol
for
attributes?
I
assume
that's
what
this
is,
but
does
anyone
have
reason
to
think
it's
something
different.