From YouTube: 2021-07-13 meeting
B
Good morning, everybody, or good afternoon for some of us. Please add any important item that you have in mind to the agenda. Meanwhile, please add yourselves to this as well.
B
Okay, let's start. Let's go with the agenda. Well, first of all, there are no new important issues that haven't been triaged, to my understanding; correct me if I am wrong. Because of that, we can just go directly to the agenda items. So, first of all: Josh, resource attributes, please.
D
Yeah, so we talked about this two weeks ago and gave some time to churn and think through the bug. Effectively this started out as a bug about how I know which labels from a resource are important to export into Prometheus. The TL;DR on the issue is: we have this problem in metrics databases, not all of them, but a lot of them, that cardinality causes problems.
D
Additional labels are an issue, and best conventions for, say, Prometheus and others are to limit the labels that you use to the minimum set that you actually need. By contrast, a lot of, say, tracing databases or logging databases say: hey, attributes are your best way to get good search queries, right. So we have this conflict between those two, and to the extent that we can support both of those use cases, we'd like to. So what I did was create an issue.
D
We had a little bit of discussion, mostly Yuri, Josh and I, and there are two feature proposals that I think I want to make based on this. I want to run this by the spec SIG for concerns, thoughts, and alternatives. Effectively, in that bug:
D
We list out a set of requirements, or user stories, that we want to support. In particular, there's this user story from the Prometheus world, and from many other places, where you deploy an application and somewhere in the environment you append resource attributes to it. So this would be: I want to do canary-based evaluations, right. That's an example. I want to deploy a system where I give it an attribute saying "this is my canary", and then I have my existing prod system with an attribute saying "here's what production is doing". When I look at my telemetry, I can evaluate my canary against production, and I can isolate my alerts to whether it's the canary or whether it's prod, right. That's an example of adding resource attributes after the fact and appending them to telemetry. The second thing is: when we define our resource detectors, we want to be holistic with all the possible things that we could put in there, attributes that users might want to query in, say, a log database or a tracing database, right.
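The canary use case described here is about appending resource attributes at deploy time, for example via the spec's OTEL_RESOURCE_ATTRIBUTES environment variable. A minimal sketch of that mechanism in plain Python (the parser and the `deployment.type` key are illustrative, not the SDK's actual implementation):

```python
import os

def parse_resource_attributes(env_value: str) -> dict:
    """Parse a comma-separated key=value list, the format used by
    the OTEL_RESOURCE_ATTRIBUTES environment variable."""
    attrs = {}
    for pair in env_value.split(","):
        if "=" in pair:
            key, _, value = pair.partition("=")
            attrs[key.strip()] = value.strip()
    return attrs

# Canary deployment: the operator appends an attribute in the environment,
# without touching the application itself.
os.environ["OTEL_RESOURCE_ATTRIBUTES"] = "service.name=checkout,deployment.type=canary"

resource = parse_resource_attributes(os.environ["OTEL_RESOURCE_ATTRIBUTES"])
# Telemetry from this process can now be compared against prod by
# filtering on deployment.type.
```

The point is that the attribute arrives from the environment, not from code, which is what makes the later schema URL question interesting.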
D
So we want to be complete and thorough, and if you look at a bunch of the resource semantic conventions, they are. In addition, we want to know which ones are truly identifying, for these limited metric databases. So we want to limit that set and know, for example: if you know a Kubernetes pod ID, you don't also have to know that the thing is in Kubernetes, right. You don't need an attribute that says "this is a Kubernetes pod" if you have an attribute that says "here's the Kubernetes pod ID", because one implies the other. Okay, so this is the notion of finding your limited, primitive key. And then the third requirement is that in an exporter I can know which resource attributes are descriptive and which ones are identifying, for the semantic conventions that are used in that schema. And then, in addition, all of these additional labels that are added at runtime I get in some fashion; they don't get destroyed or ignored in any way, so that I can basically mix the identifying attributes and these appended attributes together and get a set of attributes for my metrics. So the proposal would be two mechanisms, two new features. Okay, and again:
D
It doesn't have to be these two; these are just the two that I think I want to work on, based on that discussion. One would be: inside of semantic conventions and telemetry schema, we have a way of denoting which resource attributes are descriptive and which ones are identifying, and we update the semantic conventions in the specification to denote if something is descriptive, with all others assumed to be identifying. We would additionally have code gen that would produce some library somewhere for the schema URLs, where I get a list of all those descriptive attributes that I can leverage inside of exporters. So codegen, you know, or reading the schema file directly to understand this.
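The generated artifact described here, a per-schema list of descriptive attributes that an exporter consults, might look roughly like the following. This is purely a sketch of the proposal; the table, the helper, and the attribute choices are made up, not an existing OpenTelemetry API:

```python
# Hypothetical output of code generation from the semantic conventions:
# for each schema URL, the resource attribute keys marked "descriptive".
DESCRIPTIVE_ATTRIBUTES = {
    "https://opentelemetry.io/schemas/1.4.0": {
        "os.description",
        "process.command_line",
    },
}

def identifying_attributes(schema_url: str, resource_attrs: dict) -> dict:
    """Drop descriptive attributes so a cardinality-limited metrics
    exporter keeps only the minimal identifying set as labels."""
    descriptive = DESCRIPTIVE_ATTRIBUTES.get(schema_url, set())
    return {k: v for k, v in resource_attrs.items() if k not in descriptive}

attrs = {
    "service.name": "checkout",
    "k8s.pod.uid": "abc-123",
    "os.description": "Linux 5.4",  # descriptive: useful in a trace DB, noise in Prometheus
}
labels = identifying_attributes("https://opentelemetry.io/schemas/1.4.0", attrs)
```

A tracing or logging exporter would simply ignore the table and keep everything, which is how the same resource serves both use cases.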
D
The second: I want to formalize a mechanism where users can do this appending of resource labels. Right now a resource has a schema URL attached to it. Okay, so let's say we have a resource with a schema URL attached to it, and we append some attributes that are not part of that resource's schema, right; a user has appended them in some fashion.
D
How does that work? What does that mean? How does it function? How do we behave? So, basically, expanding the specification to clarify that use case. Basically, this would be looking at resource merges. So those would be the two proposals that we'd put forward to solve this bug.
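The open question about merging user-appended attributes into a resource that already carries a schema URL could be sketched like this. The `Resource` class below is a toy stand-in, not the SDK type; the conflict rule mirrors the spec's Merge operation (updating resource wins), while what the result's schema URL should be when the appended attributes belong to no schema is exactly the open question:

```python
class Resource:
    """Toy stand-in for an SDK resource: attributes plus a schema URL."""
    def __init__(self, attributes, schema_url=""):
        self.attributes = dict(attributes)
        self.schema_url = schema_url

    def merge(self, updating: "Resource") -> "Resource":
        # The updating resource wins on key conflicts, as in the spec's
        # Resource merge rule. Keeping the old schema URL when the
        # updating resource has none is one possible answer to the
        # question raised in the discussion.
        merged = {**self.attributes, **updating.attributes}
        schema = self.schema_url or updating.schema_url
        return Resource(merged, schema)

detected = Resource({"service.name": "checkout"},
                    "https://opentelemetry.io/schemas/1.4.0")
appended = Resource({"deployment.type": "canary"})  # user-appended, no schema URL
result = detected.merge(appended)
```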
C
Pedantic naming issue, but I wonder: we're calling these identifying and descriptive attributes, but we might use these descriptive attributes as indexes in tracing. I wonder if we're really just saying that some of these are our metrics labels and others are not. Is that it?
D
Yeah. I think this would mostly get used by metric systems, because they're the ones with the problem. And the name: I'm not tied to "descriptive" and "identifying"; "minimal identifying" could also be an option. Yeah.
C
Yeah, I just wonder if calling it, you know, a metric label or metric index, or some flag like "used for metrics", might be a better way of identifying them. Anyway, that's just pedantic. I was just thinking: if it does specifically map to "use these ones for metrics and only put these other ones on your traces", that might be a more direct way to speak to the end user.
C
If we think this goes beyond just which ones you put on the metrics, then, you know, I kind of like that, because I don't think we're necessarily discouraging people from using these as indexes in an environment that can handle that.
D
We're just saying... no, and even for metrics: if you're using, say, InfluxDB and you're throwing metrics in there, you can use all of these. It's only for systems where there's an issue with high cardinality and you need to limit labels, which is most of the metric systems out there, but not all of them.
C
Right, yeah, okay. Anyway, not a huge issue; I just wanted to poke at the use case. You're right, though: there are presumably metric systems now, and more in the future, that will be able to deal with bigger label sets. So maybe we shouldn't pull the word "metric" into it.
C
The other thing, which you mentioned: end users are going to be appending more attributes onto these things, and we need to know what they mean in those cases, right. If users are appending more, are they intending these to be descriptive or identifying?
C
And I wonder, if this bubbles all the way up to the API at some level, do we end up having to give the end user the ability to do this? I'd rather not, but you know.
D
Yeah, my initial proposal is: anything the user appends via one of these mechanisms would be treated as kind of sacrosanct; we preserve it as much as we possibly can, because they've gone to the effort of adding it, versus, say, an auto-detection mechanism provided by someone else. So anything the user adds via the environment variable, or whatever other mechanisms SDKs provide, would be considered identifying, or crucial, or core. The real concern is that the resource has a schema URL attached to it.
D
How do we interact with schema URLs and resource attributes at the same time, for these things that fall into neither? We don't have a space for them to not be part of the schema URL. That's more what that issue is about.
C
Right, yeah. I mean, I don't know the details, but if part of what you're doing is loading up a map of everything in your schema, then once you pay the cost of loading that map, it's quick to detect that these things aren't in the map, and you can say: all right, this is some other thing. I guess I do worry a little bit.
C
Maybe a thing to think about here is: if it's so destabilizing to change your label sets, if we're being really restrictive here, then when you're looking at this, remember that we want users to not feel scared to add attributes and resources and things like that. We don't want a situation where you've got multiple devs, and one of them tacks on a new resource because they have it, and then that screws up some operator somewhere.
D
Yeah, I hear what you're saying, but I also want to throw out the inverse: we know that's a use case people want, the ability to annotate things, and so to the extent that we can, we should specify it. So all I'm proposing is that we formalize it in the specification: we identify the holes that we have today for addressing that use case and formalize some specification PRs. Those PRs would go through full review and everything; I just want to make sure this is the right direction.
E
To add to that: one question was, what if we ever have to give this to the user, like the user wants to introduce non-identifying attributes. I think we can hold off. In the future maybe there's a new API that we use for defining attributes that might let us give that to the user, but that can be way in the future.
E
The other thing that's important to remember when we're talking about this particular feature is that there's another way to frame the question. It's not about whether attributes are intrinsically descriptive or identifying, but about where an attribute comes from, especially a resource attribute. The Prometheus system has developed this very robust mechanism for applying attributes from outside the process.
D
Yeah, that was an alternative that was mentioned in the bug, but I didn't see a lot of consensus for it. I would also prefer to basically say our resource detectors will be minimally identifying, and we attach additional labels for querying sometime downstream. But I'm highly biased, based on the systems that I deal with, so I'm kind of curious; that's one where I want to hear from a lot more people before we push a decision down. I just want to call that out.
C
Josh, I kind of suspect it's both. The way we're trying to do things in OpenTelemetry is that it should work while running the minimal set of components; you shouldn't have to run a collector for the system to work. So I think part of that is thinking through, right from the get-go, how we describe some of this stuff to a back end in some way.
C
I do keep coming back, when we look at this, to thinking: if there is some way for an operator to load up what the heck it is they're actually using, somewhere in this pipeline, then that makes things so much safer. And that point in the pipeline becomes very useful, not just for statically setting up which labels I want attached to these metrics, but also for things like migrations from schema to schema, a lot of that stuff that pgren put in.
C
In
order
to
do
that
stuff,
you
have
to
have
some
place
where
the
end
user
said
this
is
the
stuff
I
wanted.
So
if
you
see
like
say,
new
schema
come
in,
translate
it
to
the
old
schema
for
this
stuff
and
and
so
on
and
so
forth.
C
But I don't think we can just rely on that. I don't think we can say: okay, the users are going to have to set that stuff up, or else their label sets are going to be super unstable, or we won't solve this super basic use case unless they set up something like that. I just feel like we probably won't be able to.
C
It seems hard to fully solve this stuff entirely from the supply side, without actually knowing what the end users want. It sounds like what Josh is saying is that in the Prometheus world they've kind of gone that route to some degree: somewhere downstream there's a system that knows exactly what it's actually trying to use, and it's manipulating the data. That might end up just being in different back ends, but I think OpenTelemetry should think about adding that kind of processing to the collector.
D
Okay, so can I rephrase what I think I just heard? Sure? Okay, so:
D
We need to be the jack of all trades, which means users who haven't configured anything crazy should get a good default experience, but we also expect ops to want control, so we shouldn't rely on only one of those. To the extent we can make the default good, we should, but additionally we should have some hook for ops to do the right thing downstream. Yeah. So we want to do both of those things, as opposed to just one or just the other.
C
Yes. A thing we keep coming back to is: we need to provide people with the same set of defaults, but they're also going to want to come in and change those things. That shows up on the front end with configuration, right; there's a way to configure this stuff. One way an end user could deal with it is saying: no, I actually want these particular resources to act as identifiers, as configuration and stuff like that.
C
But the other place that we see this come up is without rebooting the system: the operator wants to be able to apply further configuration without having to get into the configuration of an application and reboot it, and that's another just powerful way of doing this stuff.
C
You know, that's why we have, to some degree, all this pipelining in the collector, right, because it's a really great place to do this kind of work. And if we're not going to have our schemas be totally set in stone, then you're going to have a situation where you roll out a schema update, and that's going to change something, somehow, somewhere, right, because we said they're not locked in stone forever. And the solution to that not breaking things is somewhere down the pipeline.
C
You have a way to convert, in places, to a prior schema, and we're going to supply people with conversion tools whenever we make updates to stable schemas. But as soon as you do that, there's the question of: well, if the user rolled out new stuff, where and when do they still want the old stuff? And so, the moment you start getting into any of this backwards compatibility and schema management or anything...
C
So I think it's inevitable that at some point in the collector we start looking at a way for the end user to do this stuff as processing rules. And then, if back ends at some point come up with a convenient way to export the dashboards people are using... most metrics tools have a dashboard API, and I can imagine us writing tools where it's like: take my dashboard and turn it into a set of processing rules in the collector, to ensure that no matter what changes I make to my apps, I'm not going to bust the data that I know I'm using in this dashboard over here. Something like that would be kind of cool, but anyway, that's getting far afield. I'll leave it there.
D
Yeah, I totally agree with that, and you know, that's half of Prometheus rewrite rules, I think; maybe more than half, maybe 80 percent of them, with the use case I mentioned, which I think we should also support, being the other 20 percent. But okay, so going forward: the two proposals I put up. Do you see those in conflict with anything that we just talked about? Should I go with those two proposals or not? That's kind of what I'm trying to understand, the consensus here.
D
Should I put forward a proposal for each one, so we can talk about the nuances of each one individually, or do you want to see a different direction taken here?
C
I think what you're talking about is first and foremost defaults. There needs to be a way to identify what, out of resources (and this will also show up on attributes, as soon as we start making metric labels out of span attributes, a thing we're talking about every Tuesday afternoon at the 4 p.m. Asia-Pacific spec SIG that people should come to), we're going to end up wanting to say constitutes a default label set, right. So these resources need to go on there.
C
If it's an HTTP client thing that you're counting, these attributes count as the default label set. If you want to call that identifying versus descriptive, that's fine; or "default label set" might be more straightforward, maybe too overfitted. But anyway, you're defining the defaults, and I think that's good. And then the second thing we need to look at is: well, how do you apply...
B
Perfect, thank you so much for that. Okay, so continuing with the agenda: there's a small issue, a PR that was opened, sorry, a long time ago. Please review it. I wanted more eyes on it because it's basically bringing back one of the initially removed environment variables, regarding your OTLP connection being secure or insecure, but it has been waiting there forever. It actually has enough reviews, so please take a look.
B
If not, I will merge it, probably by the end of the day. And the second one is that there is what seems to be a very important issue, opened by Luke Miller, regarding whether or not we should be cleaning up context objects before or after injection. This is for supporting cases where you have to mix, for example, manual and automatic instrumentation, or when you have nested spans.
B
The discussion there is getting very technical. Trask proposed that we first need to accept the fact that we have to support nested spans. I don't know how trivial this is; it doesn't look trivial for a start, but please take a look at that. I think it's important enough to pay attention to this one.
A
By the way, it's cleaning up request objects, not context.
A
The basic thing that was trying to be solved here was a way to detect whether something else, like a piece of instrumentation above you, had already injected the context into the HTTP request. The initial proposal was: let's take a look at what headers are in the request right now and see whether something has already put the headers in there. But that only works...
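The initial proposal, inspecting the outgoing request to see whether propagation headers are already present, can be sketched like this, assuming W3C trace context's `traceparent` header (the helper name is made up; the header values are the examples from the W3C spec):

```python
TRACEPARENT_HEADER = "traceparent"

def inject_if_absent(headers: dict, traceparent_value: str) -> bool:
    """Inject the W3C traceparent header only if no enclosing
    instrumentation has already done so. Returns True if injected."""
    if TRACEPARENT_HEADER in headers:
        return False  # someone above us already instrumented this request
    headers[TRACEPARENT_HEADER] = traceparent_value
    return True

headers = {}
# Outer instrumentation injects first; the inner one detects that and backs off.
first = inject_if_absent(headers, "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
second = inject_if_absent(headers, "00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01")
```

As the transcript goes on to say, this only works in limited situations, for example when the inner instrumentation would be fine treating the outer span as its parent.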
C
Is this because... when we're talking about nested spans, we're saying that on an HTTP client you might have a logical HTTP request, and under that you might have mechanical HTTP requests: first you did a request, then you did a retry, then you got a 300-something redirect, and so you followed it. All of that counts as one logical HTTP request, but underneath you actually did multiple physical HTTP requests.
A
I think it's more that you have two competing pieces of instrumentation that have come in from two different libraries that you weren't aware of; you just pulled in libraries and they had instrumentation built into them. So it's not that one is necessarily logical. It's just: hey, I brought in my Netty instrumentation, coming from the library, and I brought in my, I don't know...
A
My Vert.x instrumentation, which is probably a bad example, because those are probably different. They both are trying to instrument the HTTP client, and you aren't aware of it; you're just using some libraries that you pulled in. So we wanted a way for instrumentation to recognize that something has already injected the context into the request, so we don't need to do it again. I think that was the idea.
C
Right. I mean, it's their bug to deal with, let me put it that way. You might say that, in the case of a Java agent, it's the agent's bug, by continuing to apply instrumentation to a future version of a library that has now started to include its own instrumentation. But even in that particular case, it's an application owner who's going to have to sort it out if we're applying instrumentation when we shouldn't be, right; the operator can't deal with that after the fact.
A
I think the crux of this problem is that when I'm a library author, I'm going to write instrumentation for my library; I'm not going to assume that somebody else has written instrumentation for some other library that I happen to be utilizing, right. So I think the ownership of this bug is actually extremely tricky, and if we're going to leave it up to application owners, they're going to just be continually unhappy.
C
No, I mean, I think we need to think about this in more detail. I keep coming back to the registry, but it's definitely the case that if there's a library we're supplying instrumentation for, and they start natively supplying instrumentation, we want them to tell us that they did that, so that we can adjust what we're doing and say that this instrumentation only applies up to a max version. So that's a human communication path that has to be there, yeah.
A
That's, I think, a narrow case of what I'm talking about, though, which is two different Java libraries, both of which are assuming that they are the instrumentation of record for HTTP client instrumentation, for example. Those teams aren't communicating, and we don't have any control over them; those are two different ones, like the Spring library and the Vert.x library.
A
Spring uses Vert.x; Spring wants to instrument, Vert.x wants to instrument, and you end up using both of them. Whose bug is it? It's everybody's bug. So I think the goal would be: how do we devise an instrumentation paradigm and system so that people don't run into this trouble, and so that it will solve itself?
C
I think we just have to shake out how we avoid this bug; maybe put it that way. How we avoid this bug is an issue of documentation and convention and spreading the word and things like that. But when you're asking whose bug it is, it's, to put it another way, the application owner's problem if they have this bug, right; once the bug exists in their system, that's the person who's going to have to deal with it in that moment.
A
And they're, unfortunately, probably the people with the least information about how to deal with it, right. But last write wins should hopefully... yeah, if I recall, last write wins works in general. I think the issue was that if the second instrumentation wasn't using the first instrumentation's span as its parent, then you might end up with an orphan span, but I think that's an instrumentation bug, yeah.
A
Yeah, I agree with you there. I think that's probably just an issue with the instrumentation: it should always assume that somebody might have created a span, and should just pull it out of the context and use that as the parent, and then I think the problem goes away. You might end up with more than one client span, but that's a different issue than bad data on the wire, yeah.
C
I would think that if last write wins, then, generally speaking, the lowest-level piece of instrumented software is going to be the one that writes last; maybe not always, but generally speaking you should still have a complete graph. Anyway, we're beating this to death in this meeting. I think last write wins should work, and we should verify why it wouldn't, if it wouldn't.
F
If there's a third party providing HTTP instrumentation and they don't respect the flag, it's their problem, but in general, internally we have rules for how different instrumentation libraries compete, and there's a lot to it, like how we follow the order. I guess this sounds like a similar problem.
A
Yeah, unfortunately, we haven't gotten anyone to agree on that mechanism at all. Also, for example, Splunk has customers who explicitly do not want those spans suppressed; they want to see all the details, so we need to make sure that it's something that is configurable. There is a PR out there to try to solve this problem, and we've not been able to get agreement on what the solution is.
F
What I've seen is that the upper-level instrumentation normally gives that flexibility. For example, with gRPC: by default, gRPC would assume that you don't want the underlying HTTP span. However, the gRPC instrumentation library will give you a flag, so you can tell the library: hey, I want to get all the details, don't suppress the underlying library.
C
Or maybe the higher-level stuff shouldn't over-instrument things to the point that you wouldn't want that lower-level data. I kind of wonder if this is a case where actually higher-level libraries should just back off a bit and utilize the data coming out of the lower-level libraries, rather than rewriting that information and then trying to suppress stuff. I could be wrong, but it seems kind of grabby.
F
It depends on the scenario. Some higher-level libraries might use the lower-level transport for some creative stuff. One example I heard from the donation is SignalR, where they create a long connection that can last for a couple of hours, and they do creative stuff: they poll the server, and those details would be surprising to the user, so they don't want to expose that to the user by default.
C
Yeah, I mean, we're really getting into some nuance here. I think that's going to take a while to shake out.
F
Yeah, a great example is an HTTP request where you can send a chunk of data: you might have a big request where you chunk things into smaller granularity. Some users work on the lower level and want to see every chunk, but most users, focusing on the business logic, wouldn't care about that.
A
Sounds like a great topic for the instrumentation SIG to talk about.
B
Up next is something that I am interested in, so sorry for putting it here, but we don't have any more important issues other than probably a metrics update. I have this issue that I have been thinking about: I would like to be able to identify, whenever people are using distros, you know, like a custom Java agent or something, that this telemetry data originated from that specific distro agent or distro service or distro client.
B
One of the interesting things is what to do when it hits a custom collector in turn: should that collector override the resource, or should it be a different label? I will be preparing a PR, just for your attention, but please comment on the issue; if not, I will create a PR later today. But yeah, that one's close to my heart. That's all!
B
Finally, I remember in the past we used to have a metrics update. So, since we have time, maybe the metrics crew could give us an update for the rest of us mortals.
F
Yeah, so the metrics API is already experimental, and so far, based on the feedback and the prototypes, it seems there's no outstanding issue; so far it seems to be doing a great job. If people see any issue, please report it on GitHub, because we have a timeline: once we hit it, we'll change the spec to feature freeze. So if there's anything people think should be a scope change, anything they want to add, we should discuss that as early as possible. Regarding the metrics SDK spec:
F
The good thing is we're seeing good progress on the histogram part, and Josh Suresh also started to help on the exemplar, so that should help us spread the word. Now I'm focusing on the view part, which is part of the SDK spec, and it's entangled with the meter provider and the pipeline.
F
With respect to how we export things, we're trying to solve several problems that people reported last year: how you can have multiple exporters that run on different schedules, how you allow push and pull to run together, and how you allow the data collection cycle to be different from the data exporting cycle. The view PR has been stuck for several weeks, and now it seems we're making progress again. So hopefully we can finish it.
F
There are multiple important things we want to decide, like exponential versus arbitrary buckets, and whether it's base two or base ten. Some of the decisions will have long-term impact, which means that once we decide, it's very hard for us to change direction, so we're giving it serious consideration. I know multiple people are doing prototypes, and jmacd has been coordinating multiple experts to make sure we're making the right choice. On the detailed side, I think there are still some remaining issues.
F
For example, what are the default exporters we should support? We already have some good ideas: we'll support the in-memory exporter for testing, the console exporter for people to understand how things work, and of course Prometheus, and probably a statsd exporter as well. And of course we'll also support the OTLP exporter, which supports all kinds of the metrics data model we have. We need to document that somehow.
F
If I can finish the view, I'll probably go back to the API spec and focus on the hint, to see if we can finish the hint API before feature freeze. Otherwise, I think most likely we'll release the stable version of the spec without the hint; it can be an additive change, and we're not seeing this as a huge blocker, because an instrumentation library can still use documentation to give you the best recommendation.
F
Just for people who haven't been heavily involved in the metrics SIG, I want to explain view versus hint. View is part of the SDK. It gives the application developer the power to do some configuration. For example, we've got all these libraries that are already instrumented using the metrics API, but as a user you want to say: I only want to carry those two attributes as dimensions; I don't want all the other dimensions, because my system cannot afford that high cardinality, or I just don't want to pay for that.
F
So you can change that, or you can say: oh, I have some counter, but I don't want the total sum; what I want is to change that to a histogram, I want to see the distribution. So it gives the user flexibility in how they can morph an existing instrument: instead of using the default result, they want to view it as a different thing. The hint API is part of the API; its target user is the library owner who uses the instrumentation API.
F
We want them to be able to give the user some default configuration, to make it easier, so the user by default will fall into a pit of success. For example, if you're the HTTP library owner, you probably have a lot of things to put into the attributes, like the HTTP verb, the HTTP status code, and many, many things, but you also understand that if a normal user took all the attributes as dimensions, they would probably just shut down the business, because it's too costly.
F
So as a library owner, you want to give them the hint, basically saying: my recommendation is that you take the HTTP verb and status code, but don't take something like the HTTP URL as a dimension. We believe this is a very good addition. This way, the user can just configure the SDK and say: I trust the library owner, I just want the default recommendation. So this is the next thing I'm going to work on after we finish the view PR.
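The hint idea, the library author recommending a default attribute set that is used unless the application overrides it, might look roughly like this (a hypothetical API shape, not the one under discussion):

```python
# The library owner's hint: which attribute keys to use as dimensions by default.
RECOMMENDED_ATTRIBUTES = {"http.method", "http.status_code"}

def default_label_set(attrs: dict, user_override=None) -> dict:
    """Apply the user's view configuration if present; otherwise fall
    back to the library owner's recommended attribute keys."""
    keep = user_override if user_override is not None else RECOMMENDED_ATTRIBUTES
    return {k: v for k, v in attrs.items() if k in keep}

attrs = {"http.method": "GET", "http.status_code": 200, "http.url": "/a?q=1"}
defaults = default_label_set(attrs)                   # hint applied: URL dropped
everything = default_label_set(attrs, user_override=set(attrs))  # user opts in to all
```

The division of labor matches the transcript: the hint carries the library owner's recommendation, while a view remains the user's override.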
E
For everyone else who is listening, about the histogram topic: I will say that it is full of very detailed discussion right now. I would say there is a consensus, which is still holding, that the exponential histogram is simpler than log-linear.
E
There also seems to be a preference for base 2, just because it goes with the exponential approach a little bit better; if we were talking about log-linear, the base-10 proposal is pretty solid there. The last little detail is one where I essentially invited the expert on Prometheus histograms to come look at what we've done.
E
He
shared
a
protocol
proposal
that
that
prometheus
would
be
happy
with,
and
it's
pretty
pretty
close
to
what
was
already
there
from
the
new
relic
proposal
from
months
and
months
ago.
So
there's
like
one
last
detail
involving,
I
guess,
you'd
call
it
precision
near
zero,
which
I
doubt
many
people
actually
care
about.
So
I'd
say
from
that
level
the
discussion
is
winding
down.
E
I
don't
think
we'll
ever
have
these
experts
agreeing
my
goal
was
to
get
non-experts
to
look
at
the
issue
and
and
weigh
in
a
little
bit
so
that
hasn't
really
happened.
There
aren't
that
many
more
discussion
points
left
there,
though,
and
some
point
soon.
I
think
a
decision
has
to
be
made.
E
It's
it's
definitely
moving
in
the
same
direction
as
we
had
otep149
gave
us
a
base
to
exponential
histogram,
and
I
think,
looking
at
that,
unless
something
startling
happens
in
this
debate,
I
don't
think
it's
gonna
change
and
we
will
be
hearing.
I
think
in
the
next
hour
from
dynatrace
has
to
present
their
prototype
of
the
histogram.
So
if
you're
interested
stay
tuned
and
we'll
have
that
in
a
few.