From YouTube: 2021-10-05 meeting
F
On the agenda, I have the first item. May I start? Sure. In the notes I have links here. There is a release candidate for OTLP v0.11. It includes two changes, in addition to some comment changes. The two changes are the exponential histogram, which is pretty big, and the multi-container TracesData, MetricsData, and LogsData types.

F
My proposal is that we release this week with just those two features and wait for the next release to add the new features which relate to optional fields. That's the min/max stuff that is pending right now. So I'm asking that you either agree or don't agree: agree by approving the OTLP release, and disagree by saying something about how we should get min/max in at the same time.

F
Release now, rather than wait. The reason for now is that it takes quite a while to update the collector with any new support that we want to add, and getting the pdata support in for the exponential histogram will probably take a full release cycle, and that way we could actually begin having an exponential histogram in place sooner. That was my motivation, to answer your question. Perfect.

A
Thank you, Josh. So this is obviously a new field, so it's backwards compatible from the protobuf perspective. But do we need to tell anything?

F
Fine, right. So I have thought through this, and I think this is something that we might want to discuss right now. There's a draft PR that I put together before we merged the exponential histogram, which was part of a prototype essentially, and it included a pdata change for the collector.

F
That's one thing I want to just get moving. But as part of that pdata change, I added a converter to convert the exponential histogram into an explicit histogram, and that's tested. What I propose is that we put that into the collector as a helper library, and then we go through each of the receivers and/or the exporters that might need it, mostly exporters. I'm thinking of the OTLP exporter in particular, but also the Prometheus exporter.
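For illustration only, a rough Python sketch of the conversion idea just described, not the collector's actual pdata helper. It assumes the OTLP exponential histogram data model, where bucket j of the positive range covers (base^(offset+j), base^(offset+j+1)] with base = 2^(2^-scale); the function name and inputs here are hypothetical.

def exponential_to_explicit(scale, offset, bucket_counts):
    # Per the OTLP data model, base = 2^(2^-scale).
    base = 2.0 ** (2.0 ** -scale)
    # Bucket j counts values in (base^(offset+j), base^(offset+j+1)],
    # so its upper bound becomes an explicit bucket boundary.
    boundaries = [base ** (offset + j + 1) for j in range(len(bucket_counts))]
    return boundaries, list(bucket_counts)

# Example: scale=3, offset=-4, three populated buckets.
bounds, counts = exponential_to_explicit(3, -4, [1, 4, 2])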
F
I think you're right. I mean, we could go into depth about how, when you convert between exponential and explicit, you could introduce arbitrary errors. I think that's understood. We might want to add another statement in the data model PR, which is the other link here. I don't think we should quite release this prototype until we merge my data model PR, which says what this data type does. So those are the open items on that topic.
G
Hey, I wanted to check. Sorry, my sound was broken and my camera is broken completely. Did we talk about the optional support and min/max as part of that discussion, or not? Just curious.

F
I mentioned it at the beginning. The reason why I want to make an OTLP v0.11 release now and wait for your change with the optional support, as well as the min/max change, is that, well, I'm personally more interested in getting the exponential histogram done, but it's also that the min/max and the optional fields require changing the pdata compiler, I think, and there are some more unknowns there, and I wanted to get my knowns out first. Yeah.

G
Well, I was going to give you our known unknowns, which is: gogo proto does not support optional. It's actually not super difficult to add support. It's just, unfortunately, any change to gogo proto means vendoring gogo proto, because they're not accepting pull requests. But all of the wiring to be able to handle optional is mostly there. You just have to do, like, a quick 20-line change to gogo proto at the right spot. It's just...

G
Then it needs to be thoroughly tested, and there are a lot of open questions for the collector. We had a 20-minute discussion about it tonight during the collector SIG. I'd be really happy to hear from other people, but I would not block exponential histograms on this. And we might want to... I don't know. My suspicion is it's going to be a one to two month process for us to figure out what we're going to do with protocol buffers in practice, just because we have to make some hard decisions.

F
Before we updated the proposed PR, I did verify that if you take the recent Google protobuf compiler and just compile with optional, it does what I expected it to do, which is just to make pointers out of the optional fields and not to touch any of the other fields. That's all I checked, though.

G
Yes, yes, that is exactly what Google does, and if you add that flag, everything builds except for gogo proto. Gogo proto can support that flag if we bump it to the latest plugin compiler API, and we have two choices for how we implement it that I think are reasonable. One is to make things pointers, and one is to put a bit flag in there, like a bit mask with "has". But that is, anyway... for those who are interested, I might come to the Go SIG to talk about this. Where should the discussion happen?
F
Okay, moving on, my second agenda item is a little bit of advocacy. Last week in this space I spoke for five or so minutes about the current proposals that we've got from OpenTelemetry about probability sampling, and the two OTEPs, the bulk of which have already merged; that was 168 and 170. They contained a proposal essentially to use trace state, and trace state is a sort of general-purpose mechanism that the W3C gives us for this.

F
It's kind of what it's meant for, but in this particular application, what we're facing now is a little bit of hesitancy to move forward, because we know that the trace state solution is really not as good or as performant as a solution where we modify the trace parent. So there are pending issues and PRs in the W3C trace context repository, all referring to this, coming from OpenTelemetry, and at this point I think we should lean into it and actually try to get the outcome.

F
That is better for us, and I think we do actually have quite a lot of influence over the W3C trace context group. I'm actually not sure what influence there is on that group outside of OpenTelemetry, and in this case the proposal is just so much better if we go with a trace parent solution; the difference is something like 30 bytes versus two bytes. And so it requires really showing the W3C that we have some consensus across vendors and that we think this is for the good of the entire ecosystem.

F
So if you think that, or you sort of understand what I'm getting at, I'd like you to take a look at these two linked issues... sorry, pull requests, 467 and 468. They are both in sort of a draft or incomplete state, and I think what we need to do is start focusing our politics on these. And so I want to see if anyone else associated with this PR is on the call. I'm looking for Bogdan or...
H
Yeah, so the PR that I put in there is 468,

H
and it was, sort of, at the time, kind of an alternative to Bogdan's proposal, although I don't really see them as directly competing, because it would be entirely possible to use the randomness from his proposal and then add the sampling constant from mine. But essentially what my PR does is add the probability and the randomness bytes that Josh had proposed for the trace state; it adds them as an optional field on the end of the trace parent.

H
If we went with Bogdan's proposal in 467 and enforced, optionally, some randomness within the trace ID, then we would only need to propagate the probability alone, so that would be only an additional two bytes plus a separator, so three characters; or, yeah, one byte encoded as two characters plus a separator character, three total characters.

F
Yes, thank you. That's all exactly what I was getting at as well, and I think, just maybe to reiterate this slightly: these two proposals, 467 and 468, are orthogonal if done correctly, and that means that you can have Daniel's proposal, which would give you a separator plus the four bytes when you don't have randomness in the trace ID; but you could have randomness in the trace ID with this new flag proposed by 467 and then only need the separator and the two bytes.
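For illustration only, a rough Python sketch of the two shapes being compared; the exact encodings are what the linked OTEPs and PRs are still settling, so the strings below are hypothetical.

# Option A: carry sampling probability (p) and randomness (r) in an OpenTelemetry
# tracestate entry. With the "tracestate" header name, list separators and the
# vendor key, this costs on the order of a few tens of bytes per request.
tracestate_entry = "ot=p:8;r:62"  # hypothetical encoding

# Option B: append a small optional field to traceparent itself (the 467/468 direction).
# The trace-id/parent-id values are the W3C spec's own example values.
traceparent = "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
traceparent_extended = traceparent + "-08"  # hypothetical: one extra byte plus a separator

print(len("tracestate: " + tracestate_entry))        # rough size of option A
print(len(traceparent_extended) - len(traceparent))  # 3 extra characters for option B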
F
And so I wanted to revisit that, if anyone can remember any details, but I don't think we should discuss it in person. I think we should just take this to a thread somewhere and see what we can do. I just want to stoke some interest and share where we are, and I think we'll keep moving on this. I will discuss this with Daniel and Bogdan as well.

F
I don't know, I don't have a Slack thread or an issue. I think we probably should keep this in an issue, and I'm not sure; maybe we talk after this meeting.

A
The problem is that the word "label" is actually used as a key in the file, right? It's part of the format. So I started implementing it, and then we found that it's actually still using the word "label", which is undesirable. Now the question is: can we change this, right? It's an OTEP which is approved and merged, but it's not anywhere in the specification.
A
Does that mean that we are okay with changing the OTEP, fixing it because it's actually wrong, or do we consider that, because it's done, it's going to be a breaking change? To add a bit more color to this: there are no schema files so far which are actually using this feature of the schema file, so we're not breaking anything in OpenTelemetry.

G
I commented on the OTEP, or the issue, I guess, but my question is: do we assume that something in an OTEP that's not in the spec is actually the stable thing that we're going to have going forward? Or, I thought an OTEP meant we're going this direction, and until it's specified it's not.

G
I need to change how I review OTEPs if that's not the case, because... anyway, but yeah, I'd say just fix it.

A
So that is probably why I'm kind of reluctant here. But anyway, we're not breaking anything here, so let's just do that. We kind of didn't do it in the right order, but it's not too late. So let's fix that. I will just submit PRs to the specification which add the concept of the schema there, with that correction of the renaming of label to attribute, and we should be good with that.
C
And I have a similar request for this PR here, for Apache RocketMQ. I'm not really familiar with it, but from my spec reviewer perspective it looks reasonable and I approved it. But if anyone is familiar with RocketMQ, it would be great if they could take a look and voice their opinion. That's 1904, and from what I've read...

C
I think we are, in general, looking for subject matter experts for semantic conventions, in particular where the instrumentation SIG is looking into finding some people to volunteer for certain areas.
J
Yeah, thanks. This is a question that's coming out of the log SIG, but I think it impacts possibly all signals. So basically, we have a proposal to establish a semantic convention for describing a file, like the name, the path, etc.

J
I think we have some consensus on the structure of that, but I think the open question is whether it's appropriate for this to be sort of a base attribute, like just "file" at the root of attributes.

J
You know, certainly there would be contexts where a little bit more information is necessary, but I'm not convinced. I'm of the opinion that just establishing the structure is what's most important, and that if we need to sort of nest that elsewhere, we can do that later.

J
But I'm calling for opinions on this because I'd like to wrap this issue up, if possible.
C
Yeah, I think my concern was just that it's not clear from the name of the attribute whether it's the subject being acted on or the log where the data is read from. That's not clear from the context, and so for that we might want to have a dedicated one for the log source or something like that.

C
But I think we can continue the discussion on the issue. Do you have any other input for right now, then, on this?
J
I think perhaps there's an open question; it's not on the issue here, but it sort of is: does it make sense anywhere in our semantic conventions to design for nesting? Like, given that we will describe files in probably many contexts, should we really be worried about the prefix on that, or is that just something that we can insert as the same structure in multiple locations as necessary? Or is there another way to handle that?

J
So if we have, let's say, file name, and we have file path, right, we started defining a structure for describing a file. Right now, in the context of a log source, we might say log.source.file.name, log.source.file.path. But in some other context you might say, you know, some other prefix.
A
Yeah, yeah, I understand what you're saying. We don't do that today, and it's not a unique problem, right? When you're referencing an entity, it's actually unclear, right? So, exactly what was being asked about: we don't do that anywhere. It's not a problem unique to files; anything that is supposed to be an associated entity during the processing, when we emitted a log or a span, we don't really record the nature of this association in any way.

C
All right, I think when we worked on the FaaS (function as a service) semantic conventions, we already had something like that, where one end of the transaction, the operation, describes the invoked function by its resource attributes, so it's faas.name, faas.id and so on, and the other one describes the outgoing call.
C
So it's faas.invoked_name and faas.invoked_provider on the outgoing span, but that's a special case where we added, or to some extent even duplicated, those three attributes for this sake, and then just wrote into the spec that they should be equal. So the outgoing span attribute should be equal to the incoming resource attribute. But we don't have any proper mechanism for that yet. It's a bit similar to the link types, or rather link attributes, that we have.

J
What I'm hearing is there may be a need for some kind of generic mechanism for providing, you know, linking or relational semantic conventions. However, that aside, is there...

A
I don't think it will be sustainable to just introduce new names for the attributes which include something like "file name" in them, just to describe the notion of the relation of whatever is happening with the files to the request. You only have two things, whether it's incoming or outgoing.
G
Okay, can I throw out a crazy idea? We have typed attributes. Does it make sense to specify a file type? I think Dan in this issue accurately denotes the different ways files manifest in file systems and how to map them to attributes. What...

G
What if this attribute is just a substructure with path, stream, and, I forget what the other one was, because I need to look at it again. But what if we specify basically a file type and then a mapping to an actual structured attribute, since we have them, for how files map? So when Dan was asking, does this belong as a template or does it belong as a specific attribute per file: maybe both, right?

G
So I just want to throw it out there as an option, because I don't want to lose the mapping of how files can manifest into ways to express them in OpenTelemetry. I think that's the most valuable part of this issue that we need to keep, and yeah, there are many ways we can go about it.
A
I think what you're saying is what Dan is proposing, right? File is sort of a structured object which can be recorded as a value of an attribute, and the name of the attribute can be something like "log file" or something like "processed file", right? And the attributes can be different, the names of the attributes can be different, but the value, the structure of the value, is going to be something like the name, the path, or whatever else is there, the stream?
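A sketch of the two shapes being discussed, in Python, using hypothetical attribute names for illustration only; neither is an agreed-upon convention.

# Prefix style: the file structure is repeated under a context-specific prefix.
prefix_style = {
    "log.source.file.name": "app.log",
    "log.source.file.path": "/var/log/app.log",
}

# Structured-value style: the attribute name carries the relation and the value is
# the file object itself (note: attribute values in the spec today cannot be maps).
structured_style = {
    "log.file": {"name": "app.log", "path": "/var/log/app.log"},
}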
A
I don't think we allow structured objects as values of attributes at the moment. The only thing that we do that is close to that is arrays, right? And we don't even allow arrays which mix values of different types; even that is not allowed.

C
Yeah, thank you, then. Just one thing I wanted to mention: until now, we only had this idea, or ask, for the source of a log, the log file. So maybe we are good with a specific attribute that is just suitable for logs, and we can rethink and re-address later.

J
Sure, I think that makes sense. And again, if we do end up defining this type that is a structure, then hopefully we've designed it in such a way that it will apply elsewhere and it should be backwards compatible. So we'll go there.
K
Hi, so yeah, I linked a PR here, and the subject of the PR is adding more specificity around the retry for the OTLP protocol. The impetus for this is that there have been issues in at least three language SDKs where people are asking for, you know, the retry behavior that is currently specified, and the language SDKs haven't implemented it, and at least in the Java language SDK...

K
It's not implemented because of a lack of language in the spec. So I'm trying to push that forward, get some language in the spec that would allow Java and the other languages to actually go forward with these implementations.

K
Yeah, so there's no language that prohibits it; I'm just looking for an endorsement. And it's because the maintainers over in the Java SDK, you know, if language were later to be added that stipulates, you know, default parameters for the retry behavior or how the mechanism should work, and the implementation proves to be in conflict with that new language, then it would be a breaking change. And so I think what they're looking for is for the language to come ahead of time, ahead of their implementation.
A
Yeah, yeah, there was never an intent to disallow using the built-in gRPC retries. So I think it's fine. I'm just not sure that we want to go into the exact details that you have in the PR, with specifying the intervals and all the numeric values that you have there. That part I'm not sure about, but I think, yeah, it should actually be completely fine to use the built-in one.

K
Okay, okay, so I don't have any attachment to the default values. I do think that there should be... you know, if we don't want to get into that, then maybe we have language that explicitly says that the spec won't propose default values, and so maybe we stick to the parameters that are configurable, so which parameters and handles you have, and have that in line with the gRPC spec, but not actually, you know, have any recommended default values.
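A minimal sketch of what "use the built-in gRPC retries" can look like from Python, using grpcio's standard service-config channel options; the backoff numbers and status codes here are placeholders, not values proposed by the spec.

import json
import grpc

# Standard gRPC service config with a retry policy; the numbers are illustrative only.
service_config = json.dumps({
    "methodConfig": [{
        "name": [{"service": "opentelemetry.proto.collector.trace.v1.TraceService"}],
        "retryPolicy": {
            "maxAttempts": 5,
            "initialBackoff": "0.1s",
            "maxBackoff": "5s",
            "backoffMultiplier": 2,
            "retryableStatusCodes": ["UNAVAILABLE"],
        },
    }]
})

channel = grpc.insecure_channel(
    "localhost:4317",
    options=[
        ("grpc.enable_retries", 1),
        ("grpc.service_config", service_config),
    ],
)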
L
Yeah, I reviewed that one from Jack. I've seen multiple issues that would pop up. So I guess we can try; it's a complex topic. I don't expect anyone could finish the request in a week. Like, if you have a bunch of data, some of the data is totally broken, which you shouldn't retry, and part of the data is reliable, which you should retry: what do you do? It requires you to design some partial success response, right? And also, when you have retry, you start to have data duplication, and how do you handle that data?

G
I was actually going to suggest exactly what Riley said, that this might be a good working group, maybe even a SIG, around retry, because we ran into the same issue. We started dialing retries back a lot with OpenTelemetry, just to make sure that we weren't sending bad data points repeatedly where we know they fail. So, yeah, I think we take this offline if we can, but we should address it relatively quickly; it's going to hit users soon.

F
That's one example: Lightstep has metrics validation logic that we apply, and it results in rejecting some of the data points and not all of them, and we came up with our own protocol for annotating the success and failure of partial operations, and it's very ambiguous what we should be doing as OpenTelemetry.
F
Lightstep will swallow the points that it can't accept and call them a success, because otherwise they keep coming back, and what we did was come up with a way to count the failures when that happens. But as Riley points out, we can over-count if there are replay attacks, or, you know, replayed requests and so on; it's not very secure.

K
So I think that's definitely a valid concern. My gut reaction is that it's actually kind of conflating two issues, because the OTLP protocol specification does describe the conditions for retryability, and so it's clear about the status codes that are retriable, you know, even for HTTP.

K
500-range status codes are retriable, and 400-range ones are not, and 200 codes are obviously good. And so I think the language would have to go there. And so these, I think, are two issues; they're dependent on each other, and, you know, we can't go forward with a retry solution that's good before both issues are solved, but I think they can be pursued in parallel.
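A tiny Python sketch of the retryability rule as just stated (5xx retriable, 4xx not); the actual spec text enumerates specific codes, so treat this as an illustration of the statement above rather than the spec's rule.

def is_retryable(http_status: int) -> bool:
    # Server-side failures may succeed on retry; client errors will not.
    return 500 <= http_status <= 599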
L
So, for example, if you have a retry mechanism that you implement in Java, and later you don't change any API surface but you change the behavior: let's say for traces, if you miss something, then you will retry immediately based on first in, first out, so you start from the oldest trace; but for metrics, normally people care about alerts, so in case of failure they would want you to retry the latest metrics first and forget about the history, unless you can finish the most recent metrics.

L
If you change that behavior, do you think that's a breaking change for the customer, or do you think it's just an implementation detail that we have freedom to change? My worry is, like, if the wording is not clear, we did something and then we changed the behavior; it might be considered as an even worse breaking change for the customer.
K
Today, if this spec's language were to change its behavioral requirements like you're suggesting, Riley, an SDK would follow suit, wouldn't it? You know, that would be a breaking change on the specification, right? And then, you know, is that really a breaking change for the SDK, or does the SDK just make it clear that, you know, we were in line with the language in version 1.7 of the spec, and now we're in line with version 2.0 of the spec?

L
No, no, my answer is: the current spec seems to be not super clear about this, so it leaves some imagination for individual languages, and it's big enough to allow, I guess, the change to not be considered a breaking change, right? Yeah. So it's more like unspecified, or it's not clear, so you have some room; but once you implement that and give that to the end user, then you might get cornered.
A
And I guess precisely for that same reason I feel a bit uncomfortable with putting the actual numbers for the intervals and that stuff in the specification, because once you do that, there's no reversing it, right? And then the implementations have to follow that, and you can't change it anymore.

K
Okay, so we've got a trail of breadcrumbs leading back. The first thing that we need to do is improve the language around what is actually retriable and when to retry, and then we can... it's...
A
I think, well, and I agree with you, although we do want to have that, I think even before we do that we can at least say what happens if the entire thing is wrong and the retries have to be made, because today it's part of the specification already: we say retry, but how do you retry? I guess that small change that you're looking for, making it clear that it's okay to use gRPC's retrying mechanism, I feel completely fine with that small change. We should; I think it's okay.

A
We don't prohibit that. I don't see any harm if we say that it's okay to use it. The other changes, specifically adding more precise behavior, I think that requires a bit more research, and that probably can be done at the same time as understanding how we report the partial success and how we retry the partially unsuccessful bits, or whatever is there.
M
Yeah, hey, you all. This is another area of the spec that has come up as being maybe a little underspecified, which is error and exception management for exceptions that may be caused by misuse of the OpenTelemetry API.

M
So we do have a section in the spec which gives some general guidelines, which I think are all great, around not bubbling errors up and making sure the API is safe to use, but it doesn't fully specify things in a couple of areas. One: it doesn't really fully specify what we're supposed to do with the exceptions; there is a bit in there around there should be, like, an exception handler of some kind.

M
However, OpenTelemetry is kind of split into two parts. One part is the exceptions which can be handled by the SDK, right, all of the API calls that are then passed through into the SDK.

M
The other place where the spec is a little vague is how far we actually go when it comes to ensuring exceptions are not raised, in typed languages.
M
What happens if you call a method that doesn't exist? What happens if you pass parameters that don't exist to a method that does exist, etc., etc.? Is there some point where we say: okay, that's actually a misuse of the API, and you should just throw? So, before trying to make any decisions about what we should specify there...

M
This is a place where, you know, all the implementations have already done something here, and so I thought a good first step would just be to do a survey of what implementations are currently doing, just so that we could get a sense of what the current landscape is before turning around and trying to propose stricter definitions in the spec. And just to be clear, this came up because the Python SIG was hitting this and trying to decide what they were supposed to do, and they were looking at the spec and feeling like it wasn't giving them enough guidance here.
A
And I think what I would do here is use one principle: if an OpenTelemetry API call does not fail when I don't have the SDK plugged in, then it should not fail after I plug in the SDK, right? If something is not an error without telemetry enabled, it should not be an error with telemetry enabled. Other than that...

A
If you're doing something weird with the API, then you should get that immediate feedback, right? It's fine, I think, to fail in that case; it just should not be a behavior which is altered depending on whether the telemetry is on or off. I think if we follow that, it should be fine in that case, right? Because you'll see it immediately if you're calling a weird API, something like calling a method that doesn't exist; you really want to know that, and swallowing that is probably not the right thing to do.
N
Yes, there's an important difference between calling a method that does not exist and calling a method with bad parameters. The specification says that the API should not raise exceptions if used incorrectly by the user, so I think that covers, or should cover, the case when the user passes wrong parameters there.

N
Because, since the wrong attribute was called, we don't know what we could have returned. So, in order to draw the line: the line can be drawn, in my opinion, at the parameter level, not before, when we call some method of an API object with those parameters.
L
Diego, I have a question. So, would you clarify what it means when we say calling a method that does not exist? For compiled languages there's no such problem: if you call something that does not exist, you won't be able to compile, correct? Okay.

N
That is only true... that's a good point, yes. So that's not true in dynamic languages. In a language like Python, you can actually create, I don't know, a span, and try to call a method named, I don't know, foo or bar or whatever, right, that does not exist, that is not defined in the class of the span.
N
The code will be executed until we reach that line. When we reach that line, something will fail, because the interpreter will say: hey, there's an error here, this attribute foo, I don't find it in the class of the span. Which is fine. It's not the same thing as calling a method that does exist on the span but passing in a bad parameter. So, for example, if that method expected an integer: in dynamic languages, of course, there is no type checking, but the method itself can include some kind of check.
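Two short Python lines to illustrate the distinction being drawn; the stand-in span class below is hypothetical, not the real SDK class.

class _Span:
    # Stand-in for a real span class, for illustration only.
    def set_attribute(self, key, value):
        if not isinstance(value, (str, bool, int, float)):
            raise TypeError("unsupported attribute value type")

span = _Span()
# span.foo()                           -> AttributeError: the method does not exist at all
# span.set_attribute("key", object())  -> TypeError: the method exists, but the parameter is bad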
H
Should the SDK be checking parameters on every public method in these dynamic languages, checking to make sure that every parameter is the correct type, because that's a non-trivial runtime cost? Or should it just fail when it tries to call some string method that doesn't exist on an integer, and the API should wrap the SDK in a try/catch, which could potentially have some unintended side effects? Like, maybe the span never gets sent to the on-end method of the span processor, but did get sent to the start method of the span processor.

H
Like, you end up in some weird undefined behavior there if it fails at some point that wasn't, like, predetermined, so...
N
At least in the case of Python, the prototype that I'm proposing follows the second approach, which is kind of wrapping everything in a try/except, and if an exception is raised, regardless of where it happens, in the API or in the SDK, then a predefined no-op or a predefined return value is returned, of course, right? So we are aiming to make it possible for the entire process to continue, and what I mean by the entire process...
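A minimal sketch of the wrapping idea just described, for illustration only; the decorator and names here are hypothetical and not the actual OpenTelemetry Python prototype.

import functools
import logging

logger = logging.getLogger(__name__)

def fail_safe(default=None):
    """Catch any exception from the wrapped call and return a predefined value instead."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception:  # deliberately swallow so the instrumented app keeps running
                logger.warning("OpenTelemetry call %s failed", func.__name__, exc_info=True)
                return default
        return wrapper
    return decorator

# Hypothetical usage: an API method that delegates to the SDK but never raises,
# returning a predefined no-op span object if anything goes wrong.
# @fail_safe(default=NoOpSpan())
# def start_span(name, **kwargs): ...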
N
I mean, for example, exporting, right, like sending things to the exporter, continues working, but it will start reporting empty telemetry data so that the application won't crash. When that error happens, the telemetry data will just continue to be sent to the exporter, but everything will be empty. That's the approach we're trying to follow.
H
Yeah, I think I'm in support of something like that. The only question that I would have is: say I call an API method with bad parameters, it calls the SDK with those bad parameters, the SDK fails, and the API returns a no-op object. How do I, as the person that called that API, understand not only that there was an error but what the error is? You know, is that exception somehow attached to the no-op span in some way where I can inspect it to see if there is an error there, if I want to?
N
Yes, good question. In fact, the spec right now actually refers to a mechanism that is intended to help debug that kind of problem. So let's say that the scenario I mentioned just happens, right? Python, at least, provides a standard library named warnings.

N
A warning is something that behaves like an exception in every sense of the word, except for the fact that it does not cause the application to crash. So when a warning is raised, you can see in the output of the console that something happened, right? And there is a way to run the Python interpreter with an option that turns warnings into exceptions, so that makes it strict.
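A short illustration of the standard-library mechanism being described; the warning message here is made up.

import warnings

def record_exception_safely():
    # Instead of raising, surface the problem as a warning the user can see (or escalate).
    warnings.warn("OpenTelemetry: invalid attribute value was dropped", UserWarning)

record_exception_safely()
# Normally this just prints a UserWarning and execution continues.
# Running the interpreter with `python -W error app.py` (or calling
# warnings.simplefilter("error")) turns warnings into exceptions, which is the
# "strict mode" mentioned above.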
N
So if the user wants to actually debug something, they can decide to run their application in the strict mode, and it will crash when a warning is raised, and it will give them the same information that the exception would have given them.

N
The spec mentions, and requires this from all implementations, that there is a way to make errors that were being swallowed by the API not be swallowed, so that it can help people debug. That's also a requirement.

H
Yeah, we have, like, the global error handler, which is, I think, what you're referring to, but that doesn't necessarily... it's not easy to tie those issues to a specific API call from the user; like, it's not easy for the user to say, this is what I did that caused that.
N
Yeah, at least in Python it is, because you get the same traceback with a warning that you would get with an exception, so it ends up pointing to that same line where something failed, right? I'm not sure if other languages are designed that way or have a library that works in the same way.

M
One thing that was raised here was the idea of returning a no-op object in the face of bad parameters, for the method calls that return objects, like start span. I'm wondering if that's the correct behavior, and I'm also wondering if that's what languages currently do, or do they return a real object with default parameters, indicating, you know, that the object was created in error. I'm not actually sure what the correct solution is.

M
It does seem to me it would be harder to debug as an operator if these bad calls were creating no-ops that were just, like, swallowed. So I'm curious what people are currently doing.
H
Yeah, and then the other question that I have is: if the API is wrapping all these SDK methods in try/catches and stuff, how defensive is the SDK then expected to be? Like, could the SDK then say, I'm not doing any runtime checks in order to be as performant as possible, or should it still be designed in a somewhat defensive way?

N
Well, at least from our SIG's perspective, I'll say that it will make sense to skip any runtime checks that would be hurtful to performance, if you're going to do this.

M
Checking... so we're at time. I've posted in the meeting notes, as well as in the chat, a link to a doc that's simply a survey of what languages currently do and maybe what their current questions are. I'll pass this around to the maintainers, but if someone from each SIG wouldn't mind at least trying to fill in the basics there, I think it would be a good next step, just to understand what's currently happening in the different languages so we can move forward on some proposals here.