From YouTube: 2022-04-19 meeting
B
Hey everybody, let's start in one or two minutes. In the meantime, please add your names to the agenda and put any items on the agenda list.
B
Okay, I guess we can start. Thank you, everybody, for joining. The first item is probabilistic log sampling. Antoine, do you want to talk about it?
D
Hey guys. I opened a new PR that changes the specification; it starts to encompass how you could apply probabilistic sampling to logs. So what would you like me to do? I can go a little bit over the rationale and what's interesting about it.
D
Sure. So there's actually been a lot of work happening around probabilistic sampling in OpenTelemetry, but toward traces, not logs, right?
D
So on a given universe, if the trace ID is uniformly distributed, you can actually select a subset of it and it's going to be fair, right? So you can express the probability as a percentage, which is very meaningful. You can say: hey, I only want 20% of my traces, we're going to apply this probabilistic sampling, and over time it will come out to 20%. At any discrete time it might be more or it might be less than that, depending on the traffic pattern.
D
The trace ID probabilistic sampler also has some interesting features, like a hash seed that allows you to get the exact same results across different samplers. So if you were to place different collectors in sequence, you would get the same result, because they wouldn't re-sample each other. If you were to start by keeping 20% of the samples, the next sampler is going to be okay with whatever you picked, and it's not going to keep 20% of that 20%. That's explained well in the docs.
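The consistency property described here — samplers in sequence agreeing instead of compounding to "20% of 20%" — can be sketched as follows. This is a minimal illustration, not the Collector's actual implementation; the hash choice and threshold arithmetic are simplifying assumptions:

```python
import hashlib
import os

def keep(trace_id: bytes, percent: float, hash_seed: int) -> bool:
    # Hash the trace ID together with a shared seed; with a uniform hash,
    # the fraction of IDs falling under the threshold approximates `percent`.
    digest = hashlib.sha256(hash_seed.to_bytes(4, "big") + trace_id).digest()
    value = int.from_bytes(digest[:8], "big")
    return value < int(percent / 100.0 * 2**64)

# Two samplers configured with the same seed agree on every trace ID, so a
# second sampler placed downstream keeps exactly what the first one kept.
trace_ids = [os.urandom(16) for _ in range(1000)]
first_hop = [t for t in trace_ids if keep(t, 20.0, hash_seed=42)]
second_hop = [t for t in first_hop if keep(t, 20.0, hash_seed=42)]
assert second_hop == first_hop  # no "20% of 20%" compounding
```

Because the decision is a pure function of trace ID and seed, the same property holds for collectors running in parallel, as described below.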
D
I think that's a very important functionality. You can also use it to place samplers in parallel. So if you have an HA setup with five or ten collectors all collecting traces, you can apply the same hash seed across them and they will all come to the same conclusions.
D
So these are very interesting functionalities which have been implemented for traces. I started working on the exact same idea for logs. There's not supposed to be a whole lot of new research that goes into it; it's more like: how do we take the stuff that worked for traces and apply it to logs? And this is not just me; there was also communication and discussion with other members on that.
D
The idea is to take a look at the log record itself. The log record may have a trace ID, and if it does have one, then it's really useful to sample by that trace ID on the log record, because you would be able to sample the exact same way you would sample traces. That would give you the subset of logs that matches the subset of traces, right? So this is a good functionality.
D
You want to be able to keep that. There's a wrinkle, though: sometimes logs don't have a trace ID. Actually, I would say quite often they might not have one. In that case, what you want is to give people the opportunity to detect that there's no trace ID and designate another field they can use as their trace ID.
D
So in that case there's a configuration attribute you can use, and you can point it at an attribute on the log record, or on the resource as well. I wouldn't recommend the latter, but it's possible, and then you would just sample according to that value instead.
D
If you want to, you can also disable trace ID sampling and just go by that attribute all the time. If there is no trace ID, or if that attribute is not set — either in config, on the log record, or on the resource — then you don't know how to sample this record, because there is no way for you to apply the algorithm to a value, and in that case you have to just let the record go.
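The fallback logic described above — trace ID first, then a configured attribute on the record or resource, otherwise let the record through — might be sketched like this. The function and attribute names are made up for illustration; they are not the PR's actual configuration keys:

```python
import hashlib
from typing import Optional

def sampling_value(record: dict, resource: dict,
                   source_attr: Optional[str],
                   use_trace_id: bool = True) -> Optional[bytes]:
    """Pick the value to sample on, in the precedence discussed: trace ID
    first (unless disabled), then the configured attribute on the log
    record, then the same attribute on the resource."""
    if use_trace_id and record.get("trace_id"):
        return record["trace_id"]
    if source_attr:
        value = record.get("attributes", {}).get(source_attr)
        if value is None:
            value = resource.get(source_attr)
        if value is not None:
            return str(value).encode()
    return None  # nothing to hash

def keep(record: dict, resource: dict, percent: float,
         source_attr: Optional[str] = None) -> bool:
    value = sampling_value(record, resource, source_attr)
    if value is None:
        return True  # cannot apply the algorithm: let the record go
    h = int.from_bytes(hashlib.sha256(value).digest()[:8], "big")
    return h < int(percent / 100.0 * 2**64)
```

Note the pass-through default: a record with no usable sampling value is kept, exactly as Antoine describes.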
E
Yeah, maybe if I may add to what Antoine has just described: I think I suggested bringing this topic to this group to discuss, because I think it's very relevant. There are several differences between traces and logs, as described by Antoine, but I think it's also very important to, let's say, take a step back and understand why we are even doing this.
E
And I think that the answer is largely that one wants to reduce the volume of data, which is what probabilistic sampling is used for. With tracing it's harder, because we have distributed traces, and these traces can be made up of spans that come from different sources. So either you need one place that combines all the spans for a given trace — and we do that, by the way, in the OpenTelemetry Collector with the groupbytrace processor and tail-based sampling, which is getting a little bit complex —
E
— or you can do this trick of consistently using the hash of the trace ID. The trace ID has this nice property of being a uniformly distributed value, so this can be done across several instances of the collector, and they don't need to have any shared state. With logs we don't actually have that problem. Well, we have it whenever we have a trace ID, but then we can use the same trick, again for consistency.
E
If trace sampling is being employed, I think that logs should be sampled consistently as well, so that one will not see logs with trace IDs for traces that do not exist, and the other way around. But just for being able to sample logs by themselves, I think we don't need this trick; we can just use random sampling. This was actually my suggestion on the specification PR. But that is the discussion about the technical capabilities of doing the sampling.
E
I think the question we need to answer in this group is what opinion the specification should have on that, and I believe that minimally we should state that when log sampling and trace sampling are both being employed, they should be consistent; anything else would be a bonus to that, essentially.
D
That's cool! By the way, I appreciate your work on this, because it was just so easy to pick up on. So thank you. I agree with — I'm sorry, I don't even know your name very well — Josh? Yes, thank you, Josh. So I think the trace-ID-based sampling, that's probably easy. The random-based sampling, that's a little more contentious to me, because how would you know how to generate random IDs in the right way?
D
Yeah — are you really getting a good distribution out of it? It's also per collector. So I would like to advocate for an approach where we don't take too big a bite out of this, and we say: there is a way for you to point at an attribute, this attribute is the source of truth, and how you generate that attribute is up to you.
D
How smart you want to be about generating this source of sampling is really up to the implementer and the person who's implementing the service. I would probably leave out doing too much in the collector, because, let's say you have five of them in parallel trying to ingest a massive amount of data: you wouldn't know if you're really going to sample 20% if you use random-based sampling. Does that make sense?
F
I think I understood. In the trace sampling work, we didn't have a source of randomness, and so we created this r-value, which was exactly the randomness that we needed. There's definitely a hope that we could get true randomness into the W3C trace ID, which would definitely change my opinion of this story. But usually, when we don't have a good answer for that,
F
the answer comes back as: create a random number, set it as an attribute, and use that random number, because it's going to be both cheaper and more consistent than hashing. Hashing is really hard to predict, both in its randomness and in its cost: it's expensive, and we can't always be sure it's random. So being explicit makes for a much easier story.
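The "explicit randomness" alternative — drawing the random value once at the producer and carrying it as an attribute, rather than hashing downstream — could look roughly like this. The attribute name is invented for the sketch:

```python
import random

def attach_r_value(record: dict) -> dict:
    # The producer draws the randomness once, explicitly, and ships it with
    # the record; samplers then just compare it to a threshold instead of
    # hashing. ("sampling.random" is a hypothetical attribute name.)
    record.setdefault("attributes", {})["sampling.random"] = random.getrandbits(56)
    return record

def keep(record: dict, percent: float) -> bool:
    r = record["attributes"]["sampling.random"]
    return r < percent / 100.0 * 2**56

records = [attach_r_value({}) for _ in range(2000)]
kept = [r for r in records if keep(r, 25.0)]
```

Because every sampler reads the same stored value, samplers in sequence stay consistent with each other, without paying for a hash per record.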
F
And just to give you a preview: we might have a new trace flag that says this trace ID has real randomness in it, and then we want at least 63 bits of it. Once you're there, you could do trace-ID sampling for logging, which is, I think, a very good answer. But I'm sure other people can come up with other types of sampling that they want to do, based on span ID or based on other attributes.
G
You just mentioned 63 bits. Currently there are only 56 bits. Is there a reason that 63 is required specifically, or is 56 okay for you?
B
Okay, unless you want to discuss something more — I guess people probably need time to actually review the proposal. So Thursday this week, right? There's a call this week.
B
Yeah. Also, Riley, if you can review that — you posted a request for changes there. Some of you can follow up as well.
B
Okay, in that case, let's go to the next item: OTLP JSON and version handling. We talked about that in the past. Tigran, are you around? Yeah, you are here. We talked about this; I think we want to make OTLP JSON stable, and one last question was whether to add a version or not, which is something that Josh proposed.
B
I posted a comment saying that adding — I think Tigran dropped... never mind, he's here.
A
Oh yeah, go ahead, please. I just wrote what you wrote. So, the recent changes that we made with renaming the instrumentation library to instrumentation scope — we had to handle that in a specific way. The goal there was to keep all the new versions interoperable, so that this is not a breaking change. That is the reason why we did it the way we did it. I do not think that having a version number would achieve the same goal.
A
The way that we did this change is we said that the senders should populate the exact same data in the JSON structure under two different field names, and we did it precisely because we wanted the old recipients not to be broken. Having a version number would not be an alternate solution that achieves the same goal here. So I do not think that this could be a justification for having a version number. It's still not quite clear to me.
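The dual-field-name approach being described can be illustrated with a simplified sketch — this shows the pattern, not the exact OTLP JSON layout, and the structure here is heavily abbreviated from the real schema:

```python
def encode_spans(scope: dict, spans: list) -> dict:
    # Serialize the same data under both the old and the new field name, so
    # a recipient built against either schema version can read the payload.
    # (Reflects the instrumentation-library -> scope rename discussed above;
    # the real OTLP JSON message is nested more deeply than this.)
    entry = {"scope": scope, "spans": spans}
    return {
        "scopeSpans": [entry],                   # read by new recipients
        "instrumentationLibrarySpans": [entry],  # read by old recipients
    }

payload = encode_spans({"name": "my-lib"}, [{"name": "GET /users"}])
```

A recipient that only knows one of the two field names still decodes the full data, which is the interoperability property a version number alone would not provide.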
A
What is it that we're achieving by having the version number there? You could say: assume that the version number is a field, and if it's missing, the version number is 1.0, let's say. But then what does the explicitly recorded version number achieve in that case? What does it give you?
H
Yeah. I think, ultimately, over the history of OTLP trying to support JSON, at least on the Lightstep side, the way that we've been able to do that is by having versioned endpoints, which is kind of inconvenient, because the sender actually has to send it to the right place, and they have to know what version of OTLP is coming out of their application.
H
But the biggest problem is just that there have been both backwards-compatible and backwards-incompatible changes to the JSON, and ultimately we use protobuf transcoding: we just kind of hand it some JSON and say, you know, unmarshal this to protobuf, and if it doesn't understand it, it just fails. So having a version number would be a way to try to massage the JSON on the way in and make that a smoother process.
H
But if OTLP JSON is actually stable and that step succeeds reliably, it becomes less important that we have a version. As long as you can turn your JSON into protobuf one way or another, you could then look at properties, I guess, and figure out if there was an additive change or something; you could discover that after the fact, and a version would become less important. There might still be some argument for one.
A
At the same time that we make such a change, we would also add the version as a field, and so you could look at the version field there. If it's missing, that tells you it's an old version; if it's present, then the version number is right there. So we could do that at the same time, if we see that without the version number it's impossible to gracefully handle whatever logic we're introducing with that particular future change — which I think shouldn't be happening.
A
So that's what I'd like to avoid: having a version number against which people would write code that would just be wrong, because you're not supposed to do that. And it would maybe even become a justification for making a breaking change. I wouldn't want to have this mindset that the version number is how you make a breaking change.
H
Yeah, I think that makes sense, and I think having stability for OTLP JSON makes the requirement of a version number less important, probably not important at all. What you're saying is: don't add it until you need it, and right now we don't need it.
H
I think I agree on that, and I guess the only other thing I would add is: I believe that OTLP/HTTP, at least from the collector's perspective, already has a versioned endpoint; we're supposed to send to something like /v1/traces, I believe. I think the biggest problem is that v1, at least on the JSON side, has not been the same each time; it has not effectively been a v1. But once it is stable, sending stuff to v1 should be safe.
H
I think if at any point in time a change comes in where things are not backwards compatible, we would want to change that versioned endpoint as well, and that's probably actually where the version ultimately should go for breaking changes, when OTLP breaks in the future — because in the long haul there probably will be a break at some point.
H
Yeah, that makes sense to me. Does anybody else have any opinions on this while we're talking about it?
F
I recommend a version, because we've seen cases already where we thought something was backwards compatible or forwards compatible, and then were surprised by how an implementation detail — maybe a protobuf library feature, one of those subtleties — came in. And I'm not convinced that they won't happen again. So I'm looking at version 0.10, where we added a staleness flag.
F
If you're not really aware of that staleness flag, it sounds backwards compatible, but you're likely to get the wrong outcome. We had similar issues when we introduced the number data point to replace the integer data point: it looks like a float, so it's treated as a float, even though it's just missing the field that we deleted. So I'm not convinced that we've got everything right, and to be on the side of safety I would recommend versions — but it could be in an agent header or something like that.
A
Sorry — the staleness flag, Josh: it was added at some point, and you're saying you have software which receives this data, you updated that software to work with a newer version of OTLP, but you didn't make the corresponding change in the code to deal with staleness. Is that what you're saying?
A
So I guess one way around that, then, would be to always automatically increment the version number once there is a new release, regardless of whether there are any changes which you consider to be... I mean, we are not allowed to make backwards-incompatible changes at this point, so we shouldn't even be discussing this. I don't quite understand: if we made a change which wasn't backwards compatible, that was a mistake, and having the version number there shouldn't be a way to fix this mistake.
J
One way is: we have a notion of base behaviors, okay? So now every SDK, in my opinion, should have a version of OTLP that it is binding to, and based on that you should be able to discover—
J
Why? I mean, again, first of all, I don't understand the need for the version, besides the explanation that Riley gave, because otherwise acting on the version is almost impossible: you don't have a schema or anything like that between the versions, and you don't know what to do if a version is newer than whatever you were built on, and stuff like that.
A
Yeah, so I actually buy Riley's argument here. I don't want the version for the purposes of having any sort of logic that deals with compatibility issues, but I think what Riley is saying is that the version number itself may be valuable for troubleshooting — understanding what the sender is doing, if the sender is doing something wrong. That I buy. But let's not say that the version number is there for recipients to do any sort of business logic around understanding and interpreting the messages.
A
That's why I'm trying to resist this: people are going to start having a switch statement on version numbers instead of actually doing the right thing. For every change that we made, we actually had a whole comment on the particular section of the message in the protobufs explaining how you deal with the compatibility. We're not going to pile all that stuff into the version field, and it's not going to work nicely in that case. So that's my worry as well. I agree with you.
H
I will say you could solve the versioning from the client side by making the OTLP version a resource attribute that the exporter adds on the way out, because that kind of is how data is actually leaving the application. So it can add the OTLP version.
B
That doesn't sound right to me. Daniel Dyla, or any of the JavaScript maintainers?
J
But this, even if I did it, does not solve Riley's problem, which is: we are making backwards-compatible changes, like adding a new field, and people are asking you—
B
Yeah, but since that's a little bit different thing, we can probably solve it. I think that one of the worries that at least I have seen is that OTLP JSON is not stable, and what Riley is asking for is something that can be added later, you know, maybe even—
J
Yeah. By the way, for doing that stability work, I think we should create a milestone, because there are a couple of issues in the proto repo related to this, and somebody should focus on them — I mean, I'm fine to focus on this, but I don't think we can do it tomorrow. My point being: I think we should look at all the current issues, and if any are related to JSON, fix them or drop them, mark them accordingly, and then we can do the 1.0.
B
That sounds great. Maybe we can organize a Friday three-hour session or something, like we did in the past, for whoever is interested.
J
Also, for a bunch of the SDKs that allow using a different version of OTLP than what the SDK ships with: I think that's kind of a mistake, because there is a conversion that happens there, and we never said that the conversion between internal span data and the protocol is stable.
J
That's good, yeah, that's okay, because that's one of the problems that we have in other languages: the version of OTLP and the exporter are not tied together. And I think one of the things, Daniel — in OpenCensus, we had a version of the exporter as part of the telemetry for that reason. So maybe that's the answer to this question, for the telemetry-like purpose that Riley mentioned.
G
Yeah, that's what I was going to say. If you add the exporter version to the resource attributes or something like that, it prevents backends from having, you know, switch cases, because the version could be different from the OTLP version. But from the exporter version you should be able to look up which version of the proto a given version of the exporter uses.
G
I am curious, though — that definitely doesn't solve it. At least Josh and Matt were saying that they wanted the version for some sort of processing, or so it sounded, and I was interested in what particular problem a version number would solve from that perspective.
H
Oh, my thoughts are: as long as OTLP JSON is stable, it becomes unnecessary. Over the history of OTLP it has not been stable, and having a versioned endpoint is the only way we've been able to ingest it. That was unfortunate, in that the sender needed to know what they were sending, and they needed to actually send it to the right place for that to work out. So having some way to encode—
F
We've also spoken about wanting to have a library for JSON that just literally handles all the incompatibilities that we know about from the past, so that we could stop having this version switch in our own code. So if we see something called instrumentation library — gosh, we know what that is, and we're never going to use that attribute.
B
Yeah, okay. So I will try to create a call this Friday for whoever wants to join; if not, we can follow up asynchronously, so we can actually get this done. Thank you so much, Tigran, for the input — and yeah, no version for now, at least not for this. Okay. So, for the sake of time, let's discuss the next topic briefly — are you here?
F
Let's use this time for your topic, please.
I
Okay, yeah, sure, thank you. So I have the PR on required vs. optional attributes, and I think there are two discussions there that we wanted to bring to this meeting. One of them is optional attribute removal. On one side, backends should not expect to receive optional attributes; they should not require them, right? On the other side, if an instrumentation had started to set such an attribute, then somebody could have started relying on it in their alerts or whatever.
I
So I want to have some discussion and make a decision on whether we consider this change breaking or not.
A
I don't know if you saw my last comment there. I think we need to separate two issues. One is changes happening to the semantic conventions, to the specification itself, and the other is changes happening to implementations, to telemetry producers. Those are two related but separate topics. The particular change that you propose there, about removing the attribute from the specification, from the semantic conventions — I believe it's a breaking change, at least under the current definition of what the specification says is a breaking change or an allowed change in the semantic conventions.
A
Going from approach one to approach two, where you replace a set of attributes with another set of attributes, is not a breaking change, because that was explicitly allowed in the semantic conventions. It says that http server name and a couple of others — I don't remember the exact names — are one possibility, and another three attributes are another possibility. Recipients should already be ready for that change.
A
That is why it is not a breaking change, in my opinion. However, it doesn't mean that you can remove it from the specification, because you may have producers which don't change — producers that continue to follow the prescribed semantic conventions — and we are not allowed to change the semantic conventions out from under them.
I
Right, that makes sense. Then I have two questions. First, can we currently remove these optional attributes from the specification, given that it's an experimental one? And do we need to have a transformation rule in the schema file? Because I hardly understand what we would write there.
A
I don't think we can; I don't think we should. In the schema files there is no way to express this change: schema files only support renaming, and I don't think this is renaming, so there is no way to have this transformation — and that is fine. You are making a change to an experimental document; there is no expectation that the schema files will reflect such changes. That is okay.
I
Okay, perfect. Then about the other part: the sets of attributes are a bit controversial for me. Let's talk about, for example, user agent. Say I'm writing an instrumentation and I added user agent, and then at some point I decided: okay, maybe it's too expensive, maybe it's optional, and I decided to remove it — right after the spec got into a stable state. The spec does not change, but my instrumentation decides to remove this optional attribute completely.
A
You're saying, let's say, that this was a stable document, right? The first question is: are we allowed to remove it from the specification? The answer is no, we are not allowed. The second question is: are implementations which emit this attribute allowed to remove it from their emitted telemetry or not? The answer depends on how your telemetry is marked.
A
We have a definition of stable and unstable implementations, and depending on which one your implementation chose to adopt — whether it declared itself stable or unstable — the answer is going to be different. If it is a stable implementation, then the answer is no.
B
We need more reviews on that PR, by the way, so please review it. Tigran and I have reviewed it — Tigran spent a few cycles there — but we need more eyes.
A
Yeah — and I don't have any strong opinions about which sets of attributes are the right ones, particularly for the HTTP semantic conventions. I think the instrumentation SIG should be the deciding body on what's the right set of attributes for HTTP. I don't have a strong opinion about that.
A
I mean, obviously, what you're proposing is more desirable by default: have one way of expressing the same data instead of two different ways. I don't know if there is sufficient nuance there, and arguments in favor of saying: you know what, there is no way to have one set, and we may need to have two sets after all. There was a reason, I guess, why this proposal was originally done this way — you can have this attribute or you can have that attribute.
A
I don't know what exactly the thinking was behind that, but I suppose there was some sort of argument in favor of it. I'm not sufficiently knowledgeable about how you would want to record the HTTP response and everything here.
I
I don't know either, but what I found is that the instrumentations are producing very inconsistent data, and I've tried to pick the most consistent set of things they produce. My assumption is that some of these attributes were added for the sake of documenting what some big set of instrumentations does, but I obviously don't have enough context.
L
Yeah, in addition to this, I can say that, basically, we want to clarify — to make it super clear — what exactly should be instrumented and how exactly it should be instrumented. Because if you have more choices, it's really hard to understand what the preferred choice is and whether you should go this way or another way; that's why we have different implementations in different languages. So the clearer the statement, the better for everyone, especially for those who want to do this instrumentation themselves.
I
Yeah, I think I will try asking around. Is there a Java SIG member here? No, I don't think so. The Java crew revised those, and they chose specific sets for all their instrumentations, for clients and for servers; this was the result of a revision of what they have and of cleaning up the space. I can go and ask maybe other SIG members and try to dig up the history of the specification.
B
Perfect. For the sake of time, if that's okay, we can move to the next item; I think there's some follow-up to happen on this one. In that case, next is HTTP scope.
L
Yeah, so basically, the item that I submitted: this is issue 2499 in the OpenTelemetry specification repo. It's basically about the scope that we want to address when it comes to HTTP semantic conventions v1. There was a long conversation within the instrumentation SIG about what exactly should be done —
L
— what we should take into the first version and what we can postpone. It was reviewed several times by the group; at the time we agreed on it, and there was a tab there, which was also reviewed and approved, which captures this scope. But it feels like there is a slight disconnect between the instrumentation SIG and the TC.
L
So basically, the whole purpose of the issue I created yesterday is just to fill this gap, so we can be on the same page across all the groups. It just reiterates the overall scope that we want to address for the first version of the semantic conventions. I listed all the items there, and actually the only item left is the one we just discussed.
L
Our position, as the instrumentation SIG, is that once we address this item, we are ready to announce that the semantic convention specification is stable. So I just want us to agree on that. If we have questions, or want to add more items or remove something from the list, it's a great moment to do so.
A
Yeah, just on HTTP: I think you're right; the scope looks good to me myself. When you go ahead and declare the document stable — when you think it's ready to be declared stable — it would be great for maintainers or implementers of instrumentation to give a green flag to that as well, so that there is nobody, let's say, objecting to the particular approach. I don't think the TC needs to be the body that makes this decision.
A
It should actually be the people who are implementing these semantic conventions who tell you: you know what, I cannot implement this in my language, it's impossible for reasons A, B, and C. So if the maintainers give a thumbs-up to that, I think you're good; we should be able to declare it stable.
A
There are the Slack channels for the languages; and for important matters in the past, when I needed, let's say, consensus from all languages, I went and opened an issue in every single repository, for every language, asking them for feedback — to say whether they are on board or have objections to the particular change. You can do it now, or you can do it when you're done with all the PRs and you want to convert the document from the experimental to the stable state.
A
Maybe you do that review at the final stage, depending on how much feedback you're expecting. If you expect that people will mostly agree, you do not necessarily want the maintainers to come back to you with more proposals.
L
Awesome, yeah, definitely. That's something we can probably do in parallel, so we make sure that we understand the scope, and we can address it together and eventually get it done. So yeah, I will probably reach out to the maintainers and, as you proposed, create issues for all the languages and also attend the meetings. Great, thank you.
F
If you're someone who works at New Relic or Dynatrace, could you ping Uk or Atmar to ask for approval, please? I've done that through GitHub, but they're the ones who have contributed seriously to this proposal, and I think we need their approvals. — Yeah, I'll reach out.
I
There was one question you raised, Carlos — no, sorry, that Armin raised in the PR — about the performance considerations for required attributes.
C
I think something like not having to do a DNS lookup is something that we could probably even add specifically into the spec, because we most likely don't want instrumentations to have to go through that.