From YouTube: 2023-03-09 meeting
Description
cncf-opentelemetry meeting-2's Personal Meeting Room
F
To keep it as it was... I realized that this is a pretty regular change, kind of, not very nice. But if we decide that the problem I described is real (and I'm not sure if we really have to support that kind of sampling), then it looks like that's the only solution; at least, well, I don't see any other one.
D
Good morning, everyone. Hello, sorry to be late. I'm in a real office today, for the first time in a very long time (I mean, the first time this week in a very long time), and it's pretty early to get to an office for me anyway.
D
Looks like you were waiting. Thank you. I'll find our agenda. I know there are some new things that have arrived.
C
We were just starting to talk about Peter's issue.
F
There it is. I did it last night.
F
Right, so it came to my attention that we might have some customers who want to ignore certain transactions completely. The reasons are a little bit unknown, but we suspect that there could be some regulatory issues; they want to hide something from OpenTelemetry completely. So they employ a custom sampler at the root span in tier A, which is the top tier. It looks at the URLs, and if transactions belong to the forbidden category or the ignored category, it simply uses the AlwaysOff sampler.
F
They are happy with the results, but of course they may have this fan-out problem, which causes tiers B and C to be overloaded with tracing spans that were passed as sampled by tier A, which of course didn't know anything about what's going on downstream. So we thought that employing consistent probability samplers might fix the issue.
F
Well, it does, but it introduces another one, namely that consistent probability samplers do not look at the sampled flag, but rather look at the r-value, and they are supposed to generate an r-value when it's not present.
F
Well, they generate this r-value, and sometimes they sample spans which belong to the ignored transactions but were not supposed to be sampled according to the user's intentions. And those traces are not very useful, because they will always miss the root span and will always be incomplete.
D
It sounds convincing to me. I know there's an issue in OpenTelemetry's spec repository that one contributor has been asking about for quite a while. It is similar in that it asks for a mechanism to turn off tracing, basically, that would be independent of trace context, essentially, and I'm trying to recall, but I believe it ends up sounding a lot like that.
D
A fairly typical complaint, I'd say: not trying to redact a span for regulatory reasons or anything, but more often just that health checks or other frequent operations are undesired, and they'd like to have a way to force disabling those traces. I remember now, it's actually the case where you're using, say, a gRPC exporter, which is a gRPC connection that could be traced, and you are exporting traces: you want to turn off tracing to avoid a recursion loop. I think that's one of the examples we've been given, so I feel like there's been a request that sounds similar.
D
I feel like there might be a place to request this: is there a trace context flag that might support it? We have some flags available in trace context, in the traceparent header, and I don't know, I haven't thought this through very much, but it sounds like an application that would make sense. In other words, you put a bit in saying someone has requested not to count this.
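A minimal Go sketch of that flag idea: the traceparent header ends in a single trace-flags byte, where only the sampled bit (0x01) is defined today and a "random" bit (0x02) has been proposed; the suppression bit below is purely hypothetical and not part of any spec.

```go
package sampler

// W3C traceparent: version-traceid-parentid-traceflags, e.g.
//   00-4bf92f3577b54da6a3ce929d0e0e4736-00f067aa0ba902b7-01
// The final byte is the trace-flags field.
const (
	flagSampled  = 0x01 // defined today: the sampled flag
	flagRandom   = 0x02 // proposed: the "random trace ID" flag
	flagSuppress = 0x04 // hypothetical: "do not trace this transaction"
)

// isSuppressed shows how an SDK could honor the hypothetical bit.
func isSuppressed(traceFlags byte) bool {
	return traceFlags&flagSuppress != 0
}
```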
F
Yes, so my concern here was also that if we want to make the consistent probability sampler work differently depending on whether there is an r-value present or not, that goes against our recent attempt to replace the r-value with the randomness of the trace ID.
F
So, yeah, the solution in the issue description is pretty crude; I'm not happy with it, but it would work. The idea is just to make the consistent probability sampler do nothing if there is no r-value and the sampled flag is false.
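A rough Go sketch of that rule, under the convention that a consistent probability sampler keeps a span when p <= r; all the names here are illustrative, not an existing OpenTelemetry API.

```go
package sampler

import "math/rand"

// consistentDecision sketches the proposed rule: with no r-value and
// sampled=false, the sampler does nothing and honors the upstream
// AlwaysOff decision instead of inventing randomness for an ignored trace.
func consistentDecision(parentSampled, hasR bool, rValue, pValue int) bool {
	if !hasR {
		if !parentSampled {
			return false // respect the parent's negative decision
		}
		rValue = newRootRValue() // otherwise behave as at a root span
	}
	return pValue <= rValue // consistent-probability rule: keep when p <= r
}

// newRootRValue draws r geometrically (P(r=k) halves as k grows, capped
// at 62), mirroring what the probability sampling spec prescribes for roots.
func newRootRValue() int {
	r := 0
	for r < 62 && rand.Intn(2) == 0 {
		r++
	}
	return r
}
```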
D
I haven't thought through the implications of changing it. I know we also have this discussion going on about the potential for using, say, the trace ID, which is why you raised the desire to keep the r-value. So there are two sort of related topics that come around the r-value, and I've been wondering, in the background, whether the sampler API is just completely broken, or needs a completely new version, essentially.
D
The question is whether a new sampler API could potentially help as well. That's sort of an open-ended question, but I think there is possibly a way to change the tracer SDK so that you can create trace IDs with a known r-value, essentially, which might be a compromise. Although that's off topic from where you started.
D
How would you respond if someone proposed to you, briefly speaking, the use of, I guess I'd say, a third flag in the traceparent header of trace context? It wouldn't cost any new bytes of data for anybody, but it does require us to document and describe a new context propagation feature, which is going to be difficult; it's already difficult, so I might worry about that.
F
Your new flag would essentially tell the SDKs that a given transaction is kind of forbidden: don't touch it, don't look at it, something like that. Yeah.
D
I think I'll put that up just because it came to mind, but I think you've probably thought this through a bit more, and that the use of this suppression flag is perhaps not as good as simply what you described: fixing the consistent probability root sampler to do nothing and honor the sampled flag when the r-value is not set.
F
So if they use the old samplers, not at the root, not for the root span, and even if these samplers decide not to sample, they will still propagate the r-value.
F
Right, because they are supposed to pass the trace state as is, so even the downstream consistent probability samplers will see the r-value and will behave correctly.
D
And so, I guess, the variation you're proposing is that the new consistent probability sampler will honor the sampled flag and not attach a new r-value, and the assumption is then that you're going to have to use a new-style consistent probability sampler at the root if you want it to work. And that's not so much to ask, I think.
F
Nothing needs to be changed for the root span; they are still going to generate the r-value as normal. It's only in this particular case that I'm describing: there is a custom sampler that simply looks at the URL and delegates either to the AlwaysOff sampler, if this is a not-interesting transaction, or to the TraceIdRatioBased sampler, if this is an interesting transaction. So of course we can plug a consistent probability sampler in at this place, and we will get the right thing.
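A minimal sketch of such a delegating root sampler, using the OpenTelemetry Go SDK's Sampler interface; matching on the span name, and the URL prefixes shown, are stand-ins for a real URL-attribute check.

```go
package sampler

import (
	"strings"

	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// urlSampler mirrors the tier-A root sampler from the discussion: ignored
// or forbidden transactions delegate to AlwaysOff, everything else to a
// probabilistic sampler.
type urlSampler struct {
	interesting sdktrace.Sampler // e.g. sdktrace.TraceIDRatioBased(0.25)
	ignored     sdktrace.Sampler // sdktrace.NeverSample(), i.e. AlwaysOff
}

func (s urlSampler) ShouldSample(p sdktrace.SamplingParameters) sdktrace.SamplingResult {
	// Stand-in for the URL check; a real sampler would inspect the
	// request URL attribute rather than the span name.
	if strings.HasPrefix(p.Name, "/forbidden/") || strings.HasPrefix(p.Name, "/ignored/") {
		return s.ignored.ShouldSample(p)
	}
	return s.interesting.ShouldSample(p)
}

func (s urlSampler) Description() string {
	return "URLSampler{ignored->AlwaysOff, other->probabilistic}"
}
```

Plugging a consistent probability sampler in as the interesting delegate is exactly the substitution described above.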
B
So, in this case, Peter, you mentioned there is some transaction which is interesting, and there the r-value would be generated. But in the other case, where the transaction is not interesting and you use an AlwaysOff kind of mechanism, then the r-value would not be set, correct?
F
Correct.
D
I will definitely want to think about this a little bit more, but I think I support the idea. I would then like to talk about our other discussion, about the non-power-of-two sampling, or the t-value, or the combination of t-value and p-value, or even extending the tracer API to let you specify your seven bytes of randomness, which sounds non-random, but you've described a reason why it makes sense too.
B
Josh, sorry to interrupt, one last thought on this current issue: I wonder if we can generalize this a bit, because this is all about how the parent-based samplers and consistent probability samplers interoperate in a system that may have, say, dozens of participants, where some may be using consistent probability samplers...
B
...and some may be using regular parent-based samplers. So we should see if there are other use cases where there is a mix of these different types of head-based samplers. There could be variations of this scenario, where an upstream participant uses a consistent probability sampler and a downstream one is parent-based, and I don't know if there are others.
D
That might be what I meant when I said I had to think about it. I just want to vet this document; I remember writing it. It's been a year and a half, and it's, in my opinion, one of the better OTel specs, but it's a complicated matrix that you just described. So we should think about it.
D
Okay, I found it: "If r is not set on the input trace context and the span is not a root span, consistent probability samplers should set r as if it were a root span and warn the user that a potentially inconsistent trace is being produced." So we would change that particular line to say something more like: if r is unset, it leaves it unset, as long as there's no sampled flag, right?
D
We started using it a tiny little bit internally, and, as far as our customer base goes, sampling is still done server-side, and not a whole lot is being done smartly in the client.
D
I did. So the one that comes to mind... sorry, I didn't put an agenda together; I've been in all-day OTel meetings for the last two days.
D
The one is this draft that I wrote, which Ottmar has responded to, about the proposed t-value, or potentially just a reuse or extension of the p-value, to cover the case where we are going to use the trace ID for randomness. I posted that about a week ago; Ottmar came back with a reasonable revision to it, in my opinion a tightening-up improvement. But that's actually just a little detail; I think I had an off-by-one, and it's easier without that.
D
But I think the questions raised were whether we are being, maybe, arbitrary about the replacement of the p-value with the t-value, and whether we could actually have non-power-of-two sampling with an r-value/p-value type structure.
D
But the original proposal was not to allow that, for mathematical reasons, essentially. So settling on power-of-two sampling, with an r-value that has a limited number of values, was a decision we made to simplify any analytics that are going to happen, which is something Ottmar had a paper on, so we followed that. But...
D
I can also imagine, say, having an r-value with a different structure that allows you to make non-power-of-two sampling decisions, and I think it raises the fear, or the risk, of having exponentially many distinct sampling ratios in a data set, which could be a problem. And I guess, I'm trying to remember, it's not essential to use the trace ID randomness to get that non-power-of-two behavior; we could get non-power-of-two from the r-value structure. But I'm interested in it.
D
And I think one of you asked me to back up my claim. The claim was that users (I think, Peter) don't necessarily want to see a whole new header being used for an unsampled context, especially if that header would not otherwise have been used, which is essentially the case for at least our customer base.
D
We don't have much going on in trace state; the only thing going on in trace state for us (and this may be a defect) is the adjusted count. So if we're telling a customer to use this, and they're sampling one in a hundred, they're sending 99 trace contexts that they didn't want, or something like that; or sorry, trace states.
B
A comment... sorry, did somebody else want to go?
B
You go ahead... okay. So I think that makes sense to me, in terms of avoiding having to send the r-value and making use of the trace ID's randomness.
B
At the same time, I think Peter pointed out that in the past we have discussed consistent sampling for a group of traces within a session. I don't know how critical the use case is, but if cases like that exist, where for a group of traces we want to achieve the same sampling decision, then the question is: are r-values still needed, maybe as an optional additional thing that would override the trace-ID-based mechanism? It does add more complexity, but I think it's a question of whether those kinds of use cases exist, where you want to have the same decision across a group of traces.
C
One of the people internally wanted to make the same decision for a set of traces that shared an attribute other than the trace ID: they had different trace IDs, but they wanted to make sure that all of those traces were sampled together, and so we were discussing how to resolve that issue.
C
In the context of the tail sampling system that we have, it was actually fairly easy for them to place a marker in all of the traces that the tail sampler can look at and make a decision with. But generalizing the problem, I think it's a valid use case. It's definitely a valid use case.
C
I will say one of the things that surprised me: I came into this, I think, a little after the rest of you, in the sense that I wasn't here for all the decisions about how this stuff was designed, but I had always assumed that the power-of-two sampling was motivated by just wanting to be able to express it in a single byte, rather than, you know, trying to save seven bytes. But I think that places an unfortunate constraint on sampling; not being able to do arbitrary sample rates is...
E
That's not the only motivation, actually. It's also motivated by extrapolation: if you're extrapolating integer numbers, counts for example, then it's guaranteed that you also get an integer estimate. If you want to have that, then you are restricted to one-out-of-N sampling rates, and so you naturally have a gap between 100% and 50%, because 50% is the first one-out-of-something below 100%. So this already gives you a 50 percent decrease every time.
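A toy illustration of that point in Go, assuming 1-in-2^p sampling: each kept span carries an integer adjusted count of 2^p, so any extrapolated total is itself an integer.

```go
package sampler

// adjustedCount is the number of original spans each kept span represents
// under 1-in-2^p sampling: an exact integer, 2^p.
func adjustedCount(p uint) uint64 { return 1 << p }

// estimateTotal extrapolates a span count from the kept spans' p-values;
// being a sum of integers, the estimate is itself always an integer.
func estimateTotal(keptPValues []uint) uint64 {
	var total uint64
	for _, p := range keptPValues {
		total += adjustedCount(p)
	}
	return total
}
```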
C
Yeah, I just... you know, I mean, as humans: we allow specifying an integer in our tail sampler, so I see people all the time sampling one in ten, one in a hundred, one in ten thousand, whatever number they want; but it's often a power of 10 rather than a power of two. And then, of course, we have dynamic sampling, where we adjust sample rates based on the presence of different keys, and then those become floating-point values.
F
My take on this whole topic, and I believe I mentioned it a couple of months ago, is that most of the customers would like to specify the rate of spans that they want to get, rather than specifying the sampling probability explicitly; they are concerned about throughput, and what's behind the scenes is of much less interest to them. So powers of two are good enough, in my opinion, because we can balance the sampling between two adjacent powers of two and get effectively whatever probability you want.
F
I know that this is not exactly the same, because there is a little bit of probabilistic difference between these approaches. But looking at it from the practical perspective, sampling only comes into play when there is a huge amount of data coming through; otherwise, sampling is simply not necessary. So if we have a large number of spans, the population of spans is huge.
F
Then there is almost no difference between balancing between two probabilities which are powers of two and using an explicit probability that is somewhere in between. That's why I believe that powers of two are sufficient. And, of course, the fact that it can be encoded in one byte is a cherry on top of the cake, but that was not, as Ottmar said, the main reason.
E
Yeah, it also has some other benefits when you're extrapolating more complicated quantities, like the number of traces which touch service A and service B. If you have consistent sampling everywhere, this can get very complicated if you have many different sampling rates, but this is described in the paper, actually.
D
So we have two sides of the argument, and you've summarized all the arguments for and against at a high level. I've felt both sides of those: I've heard literally the same statements made by my backend engineers.
D
Something like "if a factor of two is not good enough, you're thinking about sampling wrong" is essentially what the backend team said to me, or to our support team. And then I've also heard (and this can be an artifact of our product, or, you know, of legacies that are everywhere) that somebody wants three-out-of-four sampling because they're trying to hit a bandwidth target, like you said; it's just that 100% is a little too much and 50% is not hitting their target.
F
The biggest problem... yeah, sorry. So, yes, I understand that people might want an effective sampling rate which is, let's assume, 75 percent, but this can be quite effectively achieved using sampling with 50% and 100%, appropriately balanced. And yes, we had an issue with that, in that this balancing cannot be applied for tail sampling; but that's no longer true if we have a random trace ID, a truly random trace ID.
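A sketch of that balancing with a hypothetical helper: to realize an effective rate f, choose per trace between the two adjacent power-of-two rates bracketing f, weighted so the average comes out to f; for f = 0.75 this is half the traces at 100% and half at 50%.

```go
package sampler

import "math"

// splitRate finds the exponent k and weight w such that sampling a
// fraction w of traces at rate 2^-k and the rest at 2^-(k+1) gives an
// effective rate f:
//   w*2^-k + (1-w)*2^-(k+1) = f, with 2^-(k+1) < f <= 2^-k.
// Example: f = 0.75 -> k = 0, w = 0.5 (half at 100%, half at 50%).
func splitRate(f float64) (k int, w float64) {
	k = int(math.Floor(-math.Log2(f)))
	hi := math.Pow(2, float64(-k)) // the coarser of the two bracketing rates
	lo := hi / 2
	w = (f - lo) / (hi - lo)
	return k, w
}
```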
D
I think I'm sympathetic.
D
I also have this feeling that the use of trace ID bits, and the use of a threshold comparison operator, is conceptually much closer to what the consistent TraceIdRatioBased sampler would have been doing, and so this may bring us closer to the model the user has in their head, in the sense that you're computing a fraction with approximately seven bytes of precision and you're comparing it to something that's intrinsic, and so on.
D
I feel like that's closer to a model a user can appreciate, and you're right, I was absolutely thinking of the tail sampling concern, so let's make sure that's clear. If I want to tail-sample at 75%, the problem is I can't alternate between 50% and 100%. Let's suppose the original data was actually 100% sampled, and I just want to filter out 25% of my spans consistently.
D
Wait a second, I want to make sure it's 75% sampling, though, because...
F
For 75%, okay: you need to decide whether you are sampling with 50 or 100 percent, and this decision is based on the trace ID, which is random. So we grab the random bits, and we know how to choose between the two, and if we decide to sample with 50%, we bump the p-value by one for every span. For 100%, of course, we leave that whole trace intact; we don't touch it.
D
How do we... I think that's the hard part, though: how do we decide, once there's some pool of satellites or collectors or something like that processing these?
F
So when we talk about tail sampling and we want to sample whole traces, we have to ensure that at least in one collector, maybe in a second tier of collectors, there is a complete trace.
F
Actually, we don't even have to do this, sorry: if the decisions made by the collectors are using the same code, we don't have to do this, because even if we see only a part of a trace, we make the decision based on the trace ID, and the other collector will make the same decision because it sees the same trace ID. So if one collector decides to bump p-values by one, all the other collectors will make the same decision.
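A sketch of why the collectors agree, assuming truly random trace IDs and identical code everywhere; the choice of bytes and the mapping to [0,1) are illustrative.

```go
package sampler

import "encoding/binary"

// keepWholeTrace makes the 100%-vs-50% choice for a trace from the low
// 8 bytes of the (assumed random) trace ID. Any collector seeing any
// part of the trace computes the same answer for the same trace ID, so
// p-values get bumped consistently across the pool.
func keepWholeTrace(traceID [16]byte, w float64) bool {
	bits := binary.BigEndian.Uint64(traceID[8:])
	u := float64(bits>>11) / float64(uint64(1)<<53) // uniform in [0,1)
	return u < w // true: keep at 100%; false: sample at 50%, bump p by one
}
```

Which bits drive this choice, versus the bits driving the consistent 50% decision itself, is a detail the discussion leaves open.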
D
Okay, that's viable to me as well. I don't want to say more right now, because I suspect we're going to run out of time if we did everything in this meeting. I know there are a couple of other topics; still making the agenda up as we go.
D
Can we agree that that's, like, a nicely wrapped-up topic for us to contemplate over the next couple of weeks? Because I think I agree that that is easier than inventing a t-value, but I haven't thought enough about it, and we can all think about it.
D
Kalyan, I know you've filed a couple of PRs in the last couple of weeks that you wanted to talk about, and then I know there's also an issue in the documentation that was posted in Slack that we might want to talk about. Would you like to speak? I also know you've found a new spec-level issue, which I've seen but just skimmed yesterday.
B
Right, right, yeah. I did, based on your feedback on my PR. Yes, so the PRs: actually, the last time we met I did share them with... oh, okay, when I was here with this group, yeah, and then, while you were waiting, I just shared those. Thanks for looking at those; I think one of them got merged, and the other one, I think, will likely be merged this week.
B
But those are, again, just examples, and more like starting points for somebody to...
D
Thank you, yeah. I definitely have this feeling, and it's been growing (yours was a really good example of it), that we need a v2 sampler API; there are a few different problems that have been circling, but they're all kind of coming to the same end point.
D
So that's how I took it. I don't think I'll be given any assignments of that nature in the next few months, but it's one that appeals to me as well. So, for the rest of you, if you didn't follow, there's a discussion about...
D
...why aren't we supporting Jaeger remote sampling? And I felt like you were really just probing the limits of the sampler API, and the fact that we can't really do anything speculative, and that there's no way to export something not sampled, all those things. So thank you. Well, that sounds like you've already discussed that. There is also this request for us to review something in Slack, where I thought I would ask for some eyeballs; I'm going to share it right now.
D
Oh, it's been merged already. Why do I need to look at it?
D
Yeah, yes, since it got sent... oh, Kent reviewed it. Thank you, yeah. Well, I'm not sure I have to do anything here; I'll read this later.
C
It just basically elaborates on what sampling is and what the difference is between head sampling and tail sampling. I think it's an incremental improvement, and I think it's better than it was, so let's keep turning the crank on this, because...
D
Well, I was going to say the same. I still don't feel all the way awake, and there's a whole set of events happening at my office, so I'm going to ask if I can drop off myself.
D
My action item is to follow up on the topic that Peter and I just went back and forth on: you know, keeping the p-value, and whether we can use the bits of the trace ID without getting rid of the r-value and p-value, and whether I can convince my product managers and my user base here that powers of two are good enough, because it always seems like they're not, but I think it's worth another shot.
D
The highlight is that if I can make the OTel Collector's probabilistic tail sampler do the right thing, they probably don't care how it's done, and you've basically shown how we can do it without dropping the p-value. And I take it the r-value can be optional as well, but I have to think about that one too.