From YouTube: 2023-03-23 meeting
Description
cncf-opentelemetry meeting-2's Personal Meeting Room
D
Added one thing: okay, I've been thinking about what I wanted to talk about, and I have a couple of items I would throw in. But I think... great, I see you typing there.
D
Cool, cool. And then I'm gonna put in my little thing.
D
Actually, there's two. Did anyone mention anything about AddLink? So I'm actually on the same topic as Kayana there, and I have a new issue to share, which I just filed minutes ago.
D
And then, if there's time left, which I hope there will be, I wanted to maybe have a discussion, especially including Otmar, about this draft I posted last time. I think it's connected with Kent's topic. Kent, maybe you can kick it off and we'll talk about it.
D
I think I just saw the note. So let's go. Kent, sure.
C
Real quick, it came up. So you all know we have Refinery; Refinery does sampling by a variety of algorithms. But the key insight I just had the other day was, you know...
C
It came up in a discussion in our public Slack, Pollinators, from customers wanting to know how they could send... so, sorry, back up. Honeycomb corrects for sample rate in its back end, so our sampler, Refinery, attaches the sample rate field. The way it does that is that the samplers operate like this: you hand in the trace information or whatever you're doing, you ask for the sample rate, and it comes back with a number that says, okay, sample this at... we use an integer value, so the sample rate is a 10 or 50 or 75 or whatever. Which means that when the sample rate comes back, Honeycomb attaches that 75 to the trace and then runs just a random selection on that particular trace.
C
So it does a 1-in-75 probability draw as to whether it's going to keep that trace. That means that for the traces we keep, we can attach a 75, saying this trace is representative of 75 other traces like it, and that allows our back end, for many of our aggregations and things like that, to compensate for sampling on a per-trace basis. Now, people were asking: how do we do that in the Collector? And I started digging into the Collector and I realized...
C
We literally can't today do anything with the Collector that would allow Honeycomb to do that compensation that it does, which people find so valuable. I mean, we have usage modes where you can turn it off, but basically, being able to look at your data and say, well, I sent 100,000 traces like this and 10 like that, even if the hundred thousand is actually 100 samples multiplied by a thousand...
C
It gives people a pretty good estimate of what's going on in their systems. So anyway, this is the thing that occurred to me. It's a problem, and I don't have a solution for it today, but I wanted to get people's thoughts or ideas or whatever they want to do. I'm kind of just bringing it up, as this is something I'm going to be thinking about over the next little bit, trying to figure out how we can allow this to happen going forward at some point.
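The mechanism Kent describes can be sketched in a few lines (this is a rough illustration of the idea, not Refinery's actual implementation; the field names are hypothetical):

```python
import random

def should_keep(sample_rate: int, rng=random.random) -> bool:
    # The sampler hands back an integer rate N; the trace is then kept
    # with probability 1/N (e.g. a 1-in-75 draw).
    return rng() < 1.0 / sample_rate

def estimate_total(kept_sample_rates) -> int:
    # Each kept trace carrying SampleRate=N stands in for N traces like
    # it, so the back end recovers the pre-sampling count by summing N.
    return sum(kept_sample_rates)

# 100 kept traces, each representing 1000 like it: the back end
# estimates that 100,000 traces were originally sent.
assert estimate_total([1000] * 100) == 100_000
```

The key point is that the compensation happens entirely in the back end; all the sampler has to transmit is the per-trace integer rate.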
D
Well, I said it was connected with the other topic, so now I'll try to justify that statement. Okay, so I think I know how Refinery is doing this, and I can think of some old debates. They're not exactly relevant, but I think I'll fill them in. So, early days: someone wrote up a proposal, maybe an OTEP, I don't remember; it was not formalized at that point.
D
I think that's kind of your point: to have a span attribute, called, say, span sampling rate or sampling rate or sampling probability or whatever you want. It doesn't matter which one you use. And I think there was an objection to that.
D
The initial objection, or the one I remember, was something like: well, span attributes are meant to be about the span, and what you're telling me is about how you collected the span. It is not a property of the span; it is a property of the collection path. And if I'm asked to tell you, in my data model, what these attributes are?
D
They describe a property of the span, and if I'm now using attributes to report collection-path stuff, maybe there's some confusion that might result. I remember that particular point of confusion; that's just how I remember it. And then I think the takeaway, the development that happened after that kind of initial discussion...
D
...led us to this room where we have TraceState, and it was connected with the desire for consistent probability sampling. And I want to acknowledge that what you're talking about is popular, simple, easy to understand, and widely understood.
D
That's why everyone understands the counting: you sample at the root of the trace, you do some stuff, and you'll count that trace as representative of N or whatever. But this idea of trace-ID-ratio sampling, or some sort of consistent probability sampling that was not purely head-based: those of us who were engineering stuff like this a long time ago remember it. I'm just thinking of my days at Google: yeah, and then you have this tracing overload, where the servers that are downstream are asked to trace just because, and they are overloaded and they cannot.
D
They don't have budget for it. So the idea that a downstream node in a trace can lower the sampling probability and still achieve a meaningful output was also present at the beginning of OTel. And the two things I just said, combined, kind of led us to pull back from having an attribute and to try to figure out this situation where not every span in the trace has the same sampling probability.
D
You want to figure out how to do that, and so that led us to what I would call version zero of the TraceState spec, which has a deficiency.
D
I don't know if I want to gloss over it right now, but the point was: it was designed for head sampling. It can be done in a child node or a root node, but it has to be done, more or less, in the SDK. That technicality aside, I think both people on this call want to correct me already, but I'm trying to paint with a broad brush.
D
So my recent work, my recent draft, which I admit is kind of a mess, and which Otmar has fixed in concept: my recent draft was trying to fix exactly the problem that you have, while fitting it together with the consistent probability sampling work that went before, and trying to get exactly the outcome you want. My company wants it too: we have a sampling product, we have custom samplers, we have custom collection infrastructure today, and we want to move off of it.
D
What's new coming from this group is, first of all, we got some randomness added to the W3C spec. Two years ago we wouldn't make an assumption, but now we're going to go ahead and kind of assume that there's some randomness and know what it is. And then, hypothetically, instead of the hashing and seeds and all that stuff that you see in that old processor, you ought to be able to do it directly.
D
That exposed the shape of the problem, in the sense that not everyone wants to choose a power of two. We like 1 in 10, we like 1 in 20, we like 1 in 100, we like all those other numbers too: 1 in 75. And so that was where I started. My proposal: I wrote an OTEP. Actually, I wrote a hackathon project in December, and then a couple of months later wrote an OTEP, and then we looked at it, and then Otmar said: oh, I have a way better idea about how to do this.
D
That's basically how I felt, and I appreciated it. We also had a discussion about my proposal last time, and Peter gave all kinds of good reasons why it could be much, much better, and between the pieces of feedback from Otmar and Peter...
D
I feel like there's a much better proposal waiting to be written. But it still holds to the line, Kent, that I started with, which is to say that attributes are for describing spans, and we have TraceState and other mechanisms for describing the collection path. And while I think there's probably a large number of users that would just appreciate what you said and say, like, God damn it...
D
...why can't OTel just add me a stupid attribute that tells me the sampling rate? It would be so simple. And I feel you, I do, but I would prefer to see us stay away from the attribute and promote the TraceState proposal.
D
So I've referred to recent discussions, one from four weeks ago, one from two weeks ago. There's this OTEP that has a live discussion on it, but I have not followed up on it in the past two weeks because I'm prioritizing other stuff. So I wanted to talk about it here if everyone else does, but I would maybe leave that for the end of the agenda. I'm sorry, I've talked a lot. You want to respond? Yeah.
C
So I guess this is me starting to kick off the conversation, and we have more to go. I want to make sure, because I see extremely practical value in it. At the end of the day, what we want as engineers trying to maintain systems is to know that we can investigate interesting incidents or interesting circumstances, right?
C
So a big fraction of what Refinery does is say to people: okay, I'm going to sample the common stuff much more sparsely than the uncommon stuff. This week, literally, I just wrote a new sampler for us: you can give it a list of fields that correspond to a key, and it takes the product of that key space and samples logarithmically. You get a lot fewer...
C
I always get in trouble when I say "higher sample rate," because people read it differently depending on how they think of it. But anyway, sorry: you get fewer samples of the common stuff and more samples of the uncommon stuff, relative to their rates of occurrence, and because Honeycomb can compensate for that in the back end, it works out pretty well. So this dynamic sampler basically says, okay, you can say, all right...
C
I take my endpoint and my status code as the key fields, and then you get essentially all of your common endpoint 200s sampled at maybe one in a thousand or one in a hundred or whatever the number might be, and then the 500s are going to be sampled at one, because they're so rare. And so that data flows in. Combine that with... actually, this new sampler also has something else.
C
It has a throughput limit, where you say: I'm going to specify that I want X spans per second to reach Honeycomb. So this thing adjusts sample rates dynamically to achieve that, while making sure that in any given time period, all of the occurrences of your keys have at least one sample in that time period. So this is a really useful sampler. It does something that customers want to have done, and I want to make sure that as we're doing things through OTel, OTel can eventually express this.
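A toy version of the key-based dynamic sampler Kent describes might look like the sketch below. The logarithmic weighting and the clamping rules here are illustrative assumptions, not Refinery's actual algorithm; the point is only that rare keys keep every occurrence while common keys get sparse integer rates tuned toward a throughput target:

```python
import math

def compute_sample_rates(key_counts, target_per_period):
    # A key seen n times gets weight log(n) + 1, so uncommon keys keep
    # proportionally more of their occurrences (logarithmic sampling).
    weights = {k: math.log(n) + 1 for k, n in key_counts.items()}
    total = sum(weights.values())
    rates = {}
    for key, n in key_counts.items():
        # Share of the throughput budget this key should receive.
        desired_kept = target_per_period * weights[key] / total
        # Integer rate >= 1, and never higher than the count, so every
        # key that occurs at all keeps at least one sample per period.
        rates[key] = max(1, min(n, round(n / max(desired_kept, 1))))
    return rates

rates = compute_sample_rates(
    {"GET / 200": 10000, "GET / 500": 3}, target_per_period=100)
assert rates["GET / 500"] == 1   # rare: keep every occurrence
assert rates["GET / 200"] > 1    # common: sampled sparsely
```

Because each key's rate is an integer, the back-end compensation described earlier still applies unchanged.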
D
I think we share the goal, Kent. It's a very similar situation here at Lightstep: we're trying to get rid of our satellites. We want to make sure our customers can run the OTel Collector, but we absolutely have what I call the span-to-metrics pipeline, where we turn spans into metrics everywhere, all the time. So it's kind of useless to us to do sampling without getting that stuff. And what you just said, your pitch, is sort of appealing to the, like, gosh...
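A minimal sketch of that span-to-metrics idea, assuming each kept span carries a hypothetical sample_rate field as in the Honeycomb model discussed above (the field and record shapes here are invented for illustration):

```python
from collections import defaultdict

def spans_to_metrics(spans):
    # Aggregate kept spans into request counts, weighting each span by
    # its sample rate so the metrics reflect pre-sampling traffic.
    counts = defaultdict(int)
    for span in spans:
        # A span kept at 1-in-N represents N spans like it.
        counts[(span["service"], span["status"])] += span.get("sample_rate", 1)
    return dict(counts)

spans = [
    {"service": "checkout", "status": 200, "sample_rate": 100},
    {"service": "checkout", "status": 500, "sample_rate": 1},
]
# The 200 was sampled 1-in-100, so it counts for 100 requests.
assert spans_to_metrics(spans) == {("checkout", 200): 100,
                                   ("checkout", 500): 1}
```

This is why sampling without a transmitted rate is "useless" for such a pipeline: without the per-span weight, the derived metrics undercount common traffic.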
D
Basically, the space of samplers is so big and so sophisticated, with all the potential predicates and filters and configurations that you can imagine possibly going in there. And then there are some, in my opinion, interesting challenges in deciding how to put together a complicated sampling policy, where you described sort of a...
D
So that's kind of what I'm afraid of. But I will say now that if we just focus narrowly on span-to-metrics, the current OTel TraceState spec, which is still experimental, gives you that, except it only works for powers of two.
E
And maybe one thing: currently it's just required that the sampling process is consistent. So if you have some custom sampling process which is not consistent, then you would probably need some other fields. I think we discussed some kind of r-value or something else, which maybe expresses the inconsistent sampling rate that has to be multiplied on top at the end, when you're extrapolating the data. Yeah.
D
Yeah, okay, wait. I don't want to downplay that statement.
D
I was thinking of how, at least in my head, the current spec has an r-value and a p-value. The r-value is all about consistency and the p-value is all about how you count the span. And I think we wrote, and maybe this is incorrect or inaccurate, but I think we wrote that if you have a p-value and no r-value, that just tells you how to count it; it doesn't tell you whether it was consistent. Yeah.
E
Maybe some additional sampling, maybe acting on the whole trace; then I think you need some other multiplication factor. I mean, if it's applied to the whole trace, you could probably increase the p-value on all the spans of the trace.
D
In any case, this was just sort of an aside, because I think the point was: the p-value is not good enough, because powers of two, to me, are not good enough. And that was where my most recent OTEP started.
D
And I suppose, Kent, you'll probably say you want your non-probability, your non-consistent sampling. I don't think I understand why Otmar would like to know the difference, and I don't think, in the...
E
In the end, if it's a sampling decision on the whole trace, acting on all its spans, then it's automatically consistent, right? So if you first gather all the spans and then you...
E
...decide for the whole trace whether it should be kept or not, then I think it's a consistent decision, and then we can edit or multiply the p-value. But I'm not sure. It's also not clean, in my opinion, because it's not using the common randomness information, like the r-value, for the sampling decision. I would keep it separate.
D
So there's some confusion over whether, if we must have non-consistent sampling decisions, we must record the counts multiplied as a result of those decisions, or, as there's a proposal to do, keep them separate, for reasons that are maybe not quite clear at this moment. But one question we can also frame is: is it really worth it?
D
Is it worth the trouble, or not worth the trouble, to insist on going with consistent sampling everywhere if you can? I think that's why I was referring to this configuration problem: because I believe, at least with the draft, the experimental stuff, the power-of-two samplers that we have, that you can always use those samplers as the base in a composition of higher-level samplers that will be configurable. As long as you get those basic sampler cases right, you'll get consistent sampling.
D
I was intrigued by the idea, mainly from Peter, that my proposal basically had some defects. The way I thought about them in the two weeks after was that I was proposing to essentially encode a threshold that would correspond with a non-power-of-two value in the trace ID range, and the unfortunate problem with that is that you end up with non-integer counts, for one thing. And Peter's proposal was: we can do this, and we can do it with consistency.
B
Well, yeah. So if you have a large population of spans, you will not notice any differences between this approach and the approach where a non-power-of-two probability could be expressed explicitly, but it's simpler. Logically, you always have the same mechanism going on for sampling, and you always have integer counts.
E
Yeah, I mean, there are two things being discussed. One thing is whether we are fine with power-of-two sampling; in my opinion, this is enough. But what Kent wanted to have is, I think, some way to transport the information of some custom sampler, where we do not know how this custom sampler does the sampling, right? Is this correct? Right, right.
C
A sampler that is making decisions at... I mean, first of all, it's a tail sampler, so it has the whole trace to look at when it's making that decision, and it is making a decision based on information inside the trace. It could be anything:
C
...span count in the trace, particular fields expressed within the trace, particular services visited within the trace. But the key distinction is that what that calculation generates in Refinery is a sample rate. And then, after having been handed a sample rate, Refinery goes: okay, now I'm going to use the trace ID to identify a probability and then decide whether or not to keep this trace as the one that I keep, consistent with the sampling rate.
C
A
C
C
Sometimes
I
have
to
go
64
and
sometimes
I
have
to
go.
128
and
and
I
can
control
the
balance
of
those
so
that
on
net
it
works
out
to
75.
E
No, I mean, if you're switching between power-of-two sampling rates to achieve the sample rate you want, then I think everything is there. You also said that you would do the sampling decision consistently, based on the trace ID or the r-value, and then I think there's nothing we have to add to the spec.
C
That's not where I was when I came into this meeting, but I'm coming around to at least understanding this point of view.
D
I'm not sure I agree. Yeah, I think I got confused about what you just proposed, Otmar. What I believe to be true is that we can continue using r-values and p-values, and with the addition of trace randomness, we can specify how to do tail sampling the way we've been doing head sampling.
D
I don't like to do math on my feet, and I realize this has a lot to do with math, but the kind of insight I'm having right now is that the r-value, or sorry, the p-value, corresponds with a number of leading zeros in your trace ID, right?
D
And I'm imagining now a situation where the r-value can be derived from the trace ID. Basically, the p-value is the number of leading zeros in the trace ID, which equates...
E
...with two to the minus p, which is basically the chosen sampling rate, or the sampling probability. So the p-value means how it was sampled: okay, so p of zero is 100 percent, p of one is fifty percent, and so on.
D
Hold on, let me try to get out what I'm trying to say, because it's informal and vague, and that's why I need your help. What I'm trying to say is that the insight I'm having is that after you've identified how many leading zeros there are, that tells you the range of decisions you have available to you. So if it's 1 in 75, we know that it's between 64 and 128, so there are six leading zeros in that... in that something there, and we have 56 bits.
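The leading-zeros idea being debated here can be sketched as follows. This assumes 56 random bits, as mentioned, and the power-of-two rule that a trace survives sampling at probability 2^-p exactly when it has at least p leading zeros; it is a sketch of the concept, not the spec's exact field layout:

```python
def leading_zeros(random_bits: int, width: int = 56) -> int:
    # Count leading zeros among the trace ID's `width` random bits.
    for i in range(width):
        if random_bits & (1 << (width - 1 - i)):
            return i
    return width

def keep(random_bits: int, p: int) -> bool:
    # A trace survives sampling at probability 2**-p exactly when it
    # has at least p leading zeros. Because every participant applies
    # the same rule to the same bits, independent decisions agree.
    return leading_zeros(random_bits) >= p

bits = 1 << 49                    # exactly six leading zeros of 56
assert leading_zeros(bits) == 6
assert keep(bits, 6) and not keep(bits, 7)
```

This also shows why powers of two fall out naturally: each extra required leading zero halves the survival probability, so 1 in 75 has no direct encoding and lands between p=6 (1/64) and p=7 (1/128).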
B
The leading zeros have nothing to do with the probability that we choose to sample with. You want to map those leading zeros to the r-value; that's great. Well, not so great, perhaps, but I can live with that. Now, the probability is a completely independent thing. It doesn't depend on the r-value. You have to first decide.
E
You could just use some random value with the corresponding probabilities, which you have to calculate, such that on average you sample with one out of 75. So you have...
D
Anyway, the way I remember the problem with tail sampling was this: you're trying to hit 1 in 75, and you're going to be switching between 1 in 64 and 1 in 128. But if that switching is randomized using information that's not part of the r-value or part of the trace ID, then you will not get a consistent result. Yeah.
D
Wait, so the idea that we could get consistent tail sampling at non-powers of two still seems possible to me. I think you're just saying it doesn't sound useful.
D
I'm not sure I have more to say; I might need to think about this more. I personally am still interested in dropping the r-value, because it just seems like extra stuff that is unnecessary, although I do recognize that there's a use case.
D
You've mentioned it, Peter; I'm not taking that away from you. But I feel that most of the time, any kind of correlation across trace IDs is just an added benefit that we could get by putting the r-value back.
D
I don't want to take away that thing, and you've pointed out why it's useful: clustering is a good word, sessions is a good word. So I propose we end this discussion here and come back to it again. If we didn't come back to this, I think I would know what I want to propose myself; I would have to do some work now to figure it out. But I hope that's satisfying enough, and then we could move on.
F
Yeah, so thanks, Josh. For this one, it's just an update; I don't have anything tangible to discuss in this forum today. But there is one follow-up I need to do, because I had filed this issue and you had given a bunch of good information, so I need to close the loop there and follow up on your feedback. And I have an example that I created based on some of the comments you had, like...
F
How do you compose a probabilistic sampler and a non-probabilistic sampler? Because there are situations where people do want to achieve this kind of thing. So I just wanted to throw out an example there in the OTel .NET repo, but I still need to absorb all of your feedback. I think there were some discussions after I created that initial example; I think Yuri had given some points, and you had given some points, so I just need to do this.
D
Yeah, let me know how I can help; let me know if I can review. I will say that your item has blended into my last bullet on the agenda as well. I've been thinking a bunch about sampling and links, because of an ongoing debate in OTel about links, so I would be glad to help and review. It doesn't look like there's an action item right now. I'm going to put mine on the agenda now.
D
There's this debate over AddLink. It was removed because of fear and uncertainty over sampling, and now the argument to put it back is that we still need it, and no one really understands what events are, so we should just create a new thing. And I've been arguing for a while that OTel tracing doesn't have a data model. It's not like logs and metrics, where we came in and said what the data means and how it interacts in traces.
D
We just have this proto file and an API, and the proto file is so close to the API that we all kind of think we know what it means, but there's no data model written down. We don't say what an event is. We don't say what a link is, other than operationally: here's how you create one; it points to another span.
D
You can sample the new span because it refers to the old span, and then, if the old span was also sampled, you will get that link by virtue of the new span. In other words, we were forcing ourselves to record new spans when they contain links, because there was no other way to record a link after the span started.
D
So the argument that we need AddLink has been focused on how we need to add an API, and what I'm saying is we need to add a data model. The data model would say something like: a link is a reference to another span. If you create them at span start, well, then they're just links; but if it's implied that they weren't there at span start, then you know their start: you know the time that that link was created, and you have kind of a name associated with it, the span start or the span name.
D
My point is that when you create a span and a link, the other span should get that link too, but that span has already started, and we don't have a way to put a link into a span that's already started. So we need to fix our data model to say that span links point in two directions, and that when you create a link in one span, you implicitly create a link in the other. Then the sampler can do the things it wants to do: mainly, if I'm sampling the referred-to span, I...
D
...can just put that link right in there, because I have a way to do it. And right now we don't have a way to put a link into a span after it starts. So my observation, then, is that we already have the situation that you create a link, and it creates an event in the other span, after that span started: it now has a new link, which is an event, to the other span. So we should just use the add-event API to create links. And there's two cases, basically: links created at start and links created after start. Most links are created after start; it's a special case to create one at the start, because every time you create one at start, it's an after-start link for the other span. So there's always at least one after-start span link per span link anyway.
D
That's my high-level point: we've failed to get that understanding of what a span link is, and if we could fix that, I think we would start to recognize that there are basically two variations on the link, ones that were there at the start and ones that weren't, and then I think we should just use the event API to record links. That's my point.
D
That's sort of separate from the issue that you linked to in the notes, or the agenda here, which was my old statements, or that thread about how we could defer the sampling decision so that we could incorporate information about links. That's a little bit orthogonal, but it's connected with links, like I mentioned.
C
I want to still support... go ahead. Okay, sorry. I want to ask a historical question, because I wasn't here for it and I don't... wait, no: why don't we have the concept of a trace link rather than a span link, or, alternatively, why don't span links include the trace ID?
D
They do, because they have a whole context object, including TraceState. That's key. Oh, I just had to look this morning. Okay, the history...
D
Within the first six months of OTel: they started with OpenCensus as a draft, and AddLink was there, and within six months AddLink was gone because, allegedly, it didn't work with sampling. That's the history. And then, you know, years of debate, and everyone in the messaging group saying: we cannot create these links at start time; we just want to record them anyway.
D
If you're going to propose to use the add-event API to create a link, I think I need to say that to answer that question. Well, anyway, this is where that debate is right now.
F
So, Josh, one question on this event-based model you're proposing. I will read through this offline, but a quick question here: the linked span could be in a completely different process, right? Like, let's take a producer-consumer scenario. So I'm just trying to understand what you mean when you say that the other span should also receive the link. Yeah.
D
The idea is that the span you're linking to may be alive or may not be alive, and it may be remote, as you say, which is even further than being alive or not. And if that span is still alive, there's definitely an opportunity to put a link into that other span, and...
D
...if it isn't alive, there is not an opportunity to put it in that span. And this came up in Tuesday's spec SIG as well: just because the span has ended doesn't mean that it can't have a link recorded; it just means that we have to find a new way to record it. One way to record that link is to record the other span.
D
So I was trying to address the idea that just because we have an opportunity to make sampling work with links doesn't mean we will always be able to make it work, because we could have no information about that link and no span present either. And that points, I believe, to simply needing another signal.
D
So if we had a way to simply record span links... I'm not the first person to say this one, you realize; I'm not just creating a bunch of crazy ideas. You know the idea that OpenTelemetry is about to embrace, sort of embracing the logs and having a log data model: can we just make a log semantic convention to say, this is how you record a span link? Maybe some of the objections that I'm hearing are over wanting to have a very crisp data model.
D
It's silly to go putting your span links into your logs, some may say, because we have a data type meant to reflect a span link; so unless it's a span-link data type, it's not a span link. And in that case, how do I record a span link when both spans are finished, or one of them is not around?
D
If I want to record a span link to a span that was sampled when I am unsampled, that's another case that we might want to have an answer to. But we don't need to answer those questions; I simply want to point out that there's an opportunity.
D
If the span is live and the SDK has it, you could do that. And I'm still thinking about applications that I remember. The successful application here is that some spans never end: they're going to be alive every time you link to them, and because they never end, they will not be sent out. It may be a bug, and if you want to debug it: why is the span never ending?
D
It would gather every live span into a data structure, and then it would go fill in all the missing links, because you had one side of every pointer, and it would just go fill in all the others, and it would render you a nice, useful debugging page. It was horribly expensive, because it had to stop and look at every live trace, but it was also extremely useful; people used it all the time. So I'm trying to get that type of story to stick here.
D
We're almost out of time. I hope, Kent, that you feel like you've gotten some answers. I'm enthusiastic about fixing up the TraceState stuff to make everyone happy. It's not clear exactly how to do that yet; it does seem that it requires a conceptual leap to agree that 1 in 75 can be effectively and satisfactorily achieved by alternating between 1 in 64 and 1 in 128. Because I think, if you accept that, we can make technical solutions that will give you that in a tail sampler, is what I'm hearing, with the help of others.
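The arithmetic behind that alternation can be checked directly. This back-of-the-envelope sketch (not spec text) computes what fraction of decisions must use the larger of two adjacent power-of-two probabilities so that the average comes out to exactly the target, e.g. 1 in 75:

```python
import math

def power_of_two_mix(target_rate: float):
    # Bracket the target probability 1/target_rate between adjacent
    # power-of-two probabilities 2**-p and 2**-(p+1), and find the
    # fraction f of decisions that should use the larger one so the
    # average probability equals the target exactly.
    prob = 1.0 / target_rate
    p = math.floor(-math.log2(prob))      # 2**-p >= prob > 2**-(p+1)
    hi, lo = 2.0 ** -p, 2.0 ** -(p + 1)
    f = (prob - lo) / (hi - lo)
    return p, f

p, f = power_of_two_mix(75)
assert p == 6                             # mix 1/64 with 1/128
# f of the decisions at 1/64 and (1 - f) at 1/128 average to 1/75.
assert abs(f * 2**-6 + (1 - f) * 2**-7 - 1/75) < 1e-12
```

The remaining open question from the discussion is not this arithmetic but how to drive the alternation from the trace ID's randomness, rather than from an independent coin flip, so the result stays consistent.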
D
We can do that. All right. Well, thank you, thank you all, and I hope to have some progress in two weeks or so.