From YouTube: 2022-09-22 meeting
Description
cncf-opentelemetry meeting-2's Personal Meeting Room
D
Hi guys, I haven't been here in a while. I guess I'll say it was sort of a summer thing, but I'm back. I saw my name listed as the sponsor of this meeting recently, and I realized I should probably just come all the time, even if I'm having trouble making it.
D
Otherwise,
I
feel
like
I'm
an
intruder
and
you
guys
are
the
ones
who
are
always
here.
So
if
there's
an
agenda,
let
me
look.
I
had
one
thing
I
wanted
to
talk
about,
but
I
don't
want
to
preempt
any
agendas.
C
Yeah, I don't write anything down. I guess I might be interested to discuss a little bit these notes that I shared in the channel a couple weeks ago, that I know Peter checked out.
D
I will say I haven't looked at it yet, and I'm sorry about that. Today's date is the 22nd. Well, Spencer, would you like to kick off any discussion here? ... Yeah, sure. I'm sorry, I'm basically just here to listen and figure out what I've missed.
C
Yeah, no worries. So, let's see, one thing I've been thinking about, as I've said in previous meetings.
C
The purpose of sharing this was to sort of establish what some of the feature set is, and the ways that the existing systems work that we frequently talk about: systems we're potentially aspiring to replace, or at least offer a compelling substitute for, in OTel. In particular, the note that's a survey of different systems was intended to give us an understanding of that. And maybe we're not all ready to discuss it, but I think at least Peter has read it.
C
I
saw
you
like
acknowledge
some
comments
from
peter,
but
like
the
pattern
that
is
emerging
or
that
emerged
to
me
after
I
compiled
these,
these
pieces
of
information
is
that
this
sort
of
funk
decision
flow
of
all
of
these,
like
all
of
these
different
sampler
tools
today,
is
like
seemingly
across
the
board
of
the
form,
like
you
know,
evaluate
this
predicate
and
like
maybe
go
down
this
branch
of
like.
C
If
I
choose
this
champion
probability
or
like
fall
through
to
the
next
thing
and
like
if
this
activates
or
is
true
or,
however
you
want
to
think
about,
it,
then
use
the
sampling
probability
and
so
forth,
and
so
it's
like
this
long
like
if
else,
if
else,
if
elsa
like
it,
seems
to
me
that
all
the
systems
that
at
least
I
surveyed
anyway
and
also
kent
mentioned
sentry,
is
working
on
some
kind
of
sampling
thing
too
and
then
their
blog
post.
C
He
links
this
in
the
sig
channel
in
their
blog
post.
It
appears
like
very
similar,
like
if
elsa
also
else
kind
of
set
up,
or
at
least
can
be
reduced
to
that,
and
so
that's
like
one
like
pretty
interesting
pattern
that
emerged
to
me.
That
is
like
leading
me
in
the
direction
of
like.
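(For readers of this transcript: the if / else-if / else decision flow being described could be sketched as below. This is a hypothetical illustration; the rule predicates and probabilities are invented, not taken from any of the systems discussed.)

```python
import random

# Hypothetical fallthrough sampler chain: each rule pairs a predicate with a
# sampling probability, and the first matching predicate decides the outcome.
RULES = [
    (lambda span: span.get("error", False), 1.0),          # always keep errors
    (lambda span: span.get("route") == "/health", 0.001),  # mostly drop health checks
    (lambda span: True, 0.25),                             # default fallthrough rule
]

def should_sample(span):
    """Walk the rules top to bottom; the first matching predicate's
    probability is used to make the sampling decision."""
    for predicate, probability in RULES:
        if predicate(span):
            return random.random() < probability
    return False
```

Every system surveyed in the notes reportedly reduces to some variation of this fallthrough shape.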
C
Like a survey exercise. I guess I would wonder if anyone that has reviewed the notes, or otherwise, had any observations or reactions to some of the descriptions of how these other things work. We talked about X-Ray, the existing OTel stuff, and Refinery, because I understand that is new information to many of us. A lot of us have some familiarity with some of those, but I know I didn't have familiarity with all of them.
C
Thanks, that's good to hear. Okay, well, yeah: if anyone, at any point, in the next meeting or before then on Slack, could continue to comment on it. I'm torn on how super polished these ought to become. You might remember.
C
But
I
don't
know
I
don't
read
many
oteps
truthfully,
so
I
don't
know
if,
if
there
is
like
a
convention
of
like
you
know
putting
things
in
an
appendix
on
otecs
or
like
linking
to
like
somewhere
else
when
there's
like
supplementary
information,
that
like
establishes
some
of
the
like
motivations
or
ideas,
but
it's
just
like
a
lot
of
information,
I
don't
know
joshua.
I
know
you
haven't
seen
what
what
we're
talking
about
concretely
yet,
but
obviously
that'd,
be
interesting.
D
Yeah, yeah. So I hear the question is sort of: what kind of guidance do we need to get this into a state where it's merged, accepted, and so on? I think... so, to me.
C
To be clear (and you may have read this in the PR description), I currently position this PR as just a vehicle for review. I don't actually intend, at least currently, for this to be merged. But if you see that there would be value in that, I'm all for that too.
D
A
lot
of
times
that,
when
we're
dealing
with
such
large
issues,
having
an
oh
chap,
just
to
like
put
some
text
in
that
you
can
point
to
the
next
one
is
pretty
helpful.
I
would
say
in
the
in
the
vein
of
kotep
170.
I
think
it
was
which
took
for
168
170.,
just
more
material
to
help
if
someone
comes
in
kind
of
cold
and
wants
to
get
a
lay
of
the
land
that
those
would
probably
help
them
in
this
too
yeah.
D
Yeah, I get it. How about if I take a review: I promise to review this in the next few days, a week or so, and figure out what I would recommend. Without looking closely, I don't see a problem with having this sort of giant appendix, you know. One thing is, you could just attach it to one of the existing OTEPs that says "and here's more information"; that's one way we could go. I'll take a look, sure. ... Thank you.
B
I have one comment, or one question maybe, because I just skimmed over the document; I didn't have the time to read it in detail. But regarding the balancers and limiters: I like that you try to somehow structure the kinds of samplers. I mean, regarding balancers, you're talking about this frequency score, but this is somehow given.
B
Is
this
correct,
or
I
mean
this
has
to
be,
because
you
know
it's,
it's
then
something
what
has
to
be
provided
by
the
server
or
I
don't
know,
yeah
and
calculates
that
frequency
score
yeah
and.
B
...behind the reality. So I'm wondering if there could also be balancers which try, by themselves, to figure out the frequency scores, so that less communication is needed. Or maybe that's what I'm missing.
C
No,
I
wouldn't
say
you're
missing
it.
I
I
think
somewhere
in
line
like
after
you
know,
a
definition
is
given
or
something
it
does
say
or
suggest.
You
know
a
way
to
obtain
that
score
is
to
what
and
by
the
way,
I'm
welcome
to
like
changing
the
names
of
any
of
these
concepts.
I
think
I
have.
C
I
think
I
might
be
interested
in
like
changing
this,
like
just
recasting
it
from
being
like
a
score
to
just
like
honestly
what
it
is
in
simpler
terms
and
more
familiar
terms
is
like
a
sort
of
continuous
estimation
of
the,
like.
You
know,
incoming
rate
of
spams
of
that
kind,
and
that
is
a
little
bit
ambiguous,
but
how,
where
that
sort
of
abstraction
was
derived
from
was
there
are
a
couple
systems
today
that
you
know
attempt
to
do
like
not
unlike
what
you
programmed
in
java
a
little
while
back.
C
I
think
that
attempt
to
sort
of
maintain
an
estimate
of
you
know
incoming
rate
of
span
of
a
certain
class
or
category
and
so
yeah.
That's
kind
of
yeah.
That's
like
this,
but
yeah
in
terms
of
that
estimate
being
sort
of
using
information
shared
across
nodes
or
instances
like
that
is
yeah
totally.
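(A minimal sketch of the kind of estimator being described: a smoothed estimate of the incoming span rate per category, from which a sampling probability targeting a fixed output rate could be derived. The class name, smoothing constant, and update scheme are all assumptions for illustration, not from the notes being discussed.)

```python
class RateEstimator:
    """Exponentially weighted moving average of spans/second for one
    span category, used to pick a sampling probability."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha   # smoothing factor for the moving average
        self.rate = 0.0      # estimated incoming spans per second

    def observe(self, count, interval_seconds):
        # Blend the latest interval's instantaneous rate into the estimate.
        instantaneous = count / interval_seconds
        self.rate = self.alpha * instantaneous + (1 - self.alpha) * self.rate

    def sampling_probability(self, target_rate):
        # Sample everything while under the target; scale down above it.
        if self.rate <= target_rate:
            return 1.0
        return target_rate / self.rate

est = RateEstimator()
for _ in range(50):
    est.observe(count=1000, interval_seconds=1)  # steady 1000 spans/s observed
p = est.sampling_probability(target_rate=100)    # aim to keep ~100 spans/s
```

The point of the recasting suggested above is that "frequency score" is really just this kind of running rate estimate.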
B
It depends, basically, on what you're interested in at the end. So this is often not known in advance.
C
But
in
terms
of
next
steps
yeah,
so
I
understand
joshua
you're
gonna,
give
some
suggestions.
C
I
think,
maybe
at
this
point
I've
like
shared
enough
of
like
the
sort
of
contextual
information
that
had
been
only
in
my
head,
such
that
I
think
I
can
probably
move
forward
now
on,
like
sharing
the
actual
like
early
starting
discussion
of
like
an
actual
concrete,
like
design
conversation
of
our
answer
to
the
like
sampler
configuration
problem.
Knowing
what
you
know
we
would
minimally
need
to
support
or
like
what
our
would-be
customers
are
like
accustomed
to
doing
for
these
other
systems.
C
So
so
I
will
follow
with
more
sections,
probably
of
the
like
maine
hotep,
that
that
I
have
volunteered
for
so
that'll.
Be
my
next
step.
C
I
want
to
make
sure
I
understand
your
feedback
because
I'd
like
to
integrate
it
if
possible.
So
let's
see
we
want
many
complete
traces.
We
must
not
require
all
traces
are
complete.
A
Right,
so
what
I
meant
here
is
quite
often
in
distributed,
applications
which
are
based
on
microservices.
The
frequency
with
which
those
services
are
called
are
vastly
different.
C
It's
actually
not
obvious
to
me
is
that
life
closely
related
to
the
problem
of
like,
like
you
said,
we
must
not
require
that
all
choices
are
complete
and
does
that
relate
to
like
when
I
read
that
sentence,
I
thought
of
you
know
imagining
if
I
had
some
kind
of
like
very
high
volume
system,
like
perhaps
there's
like
diminishing
and
like
no
business
value
after
a
certain
point
of
like
collecting
this
information.
Is
that
kind
of
what
you
had
in
mind
with
like
not
requiring
that
all
traces
are
complete.
C
Yeah, so I totally see how that is accurate, that, you know, from our position as people trying to design stuff... let's see, which files...
C
These are the sorts of things that I would have in my head as desirable. So, in this context, I'm not attempting to state capabilities of the underlying technology so much as, from a customer's or a user's perspective, features that are valuable.
C
I
think
what
I
think
what
I
have
been
thinking
is
that,
like
from
a
user's
perspective
like
all
else
being
like
equal
or
free
or
like
disregarding
these
first
two
points
like,
of
course,
I
would
prefer
everything
to
be
complete
all
the
time,
and
so
in
that
sense
like
what
I
am
trying
to
describe
with
these
items
is
things
that,
like
you
know,
you
have
to
like
any
any
person
who's
a
customer
like
needs
to
trade
off
some
of
these
in
favor
of
some
others.
C
For such an owner of an infrequently visited but very important service: would their concern be addressed by this? If all traces were complete, then every visit to their critical service would be recorded.
A
I think it's a pretty frequent pattern that you have some services that are completely hidden from the public; they are internal to the application.
A
But it's a conflict, right? So yeah, that's why I would soften number four. But of course, it's completely up to you.
C
Yeah, and I get why. In practice, I totally follow your reasoning that you really want partial traces, to ensure that you get these low-volume but important sub-pieces. But I could just as well read this one and say: "collect as little data as possible"? Are you kidding me?
C
I
want
to
collect
more
data
or
like
there
is
like
all
of
these
are
sort
of
stated
as
like
things
that
you
know,
no
one
would
argue
that
like
to
collect,
actually
don't
put
this
a
different
way,
but
all
of
these
independently
are
desirable
and
like
you
would
want
them
to
the
maximum
degree
that
you
could
have
them,
but
then,
once
you
sort
of
pursue
them
like
for
the
reasons
that
we
talked
about,
you
kind
of
have
to
trade
off
between
them,
and
so
I
was
I
was
thinking.
C
Okay, yeah. And let me know: I phrased them as sort of independent ideals, but if it would be clearer to tone down the absoluteness of each one, I could understand that too.
C
That was just retained from the Google Doc. Joshua just scurried off, but I would love his take on this section when he checks it out; it has to do with what we build first, or what we hope to solve first. This is a good...
C
Are you following that? I'm not saying we should forsake caring about people for whom there will be something they can't do: they can't run all the collectors necessary, or whatever; they don't have enough memory to buffer things for tail sampling.
C
There
will
be
such
people,
and
do
I
understand
correctly
that
your
comment
is
basically
along
the
lines
of
like
we
ought
not
like
rule
those
people
permanently
out
of
scope
like
maybe
they
are
not
our
priority
but
like
we
should.
You
know,
keep
them
in
mind.
A
Well,
it
ties
a
little
bit
to
my
other
comments
which,
which
we
we
haven't
discussed.
Yet
this
is
about
the
limits
of
of
systems.
Technological
limits
use
singled
out
storage
limits
as
valid
ones,
but
I
wanted
to
indicate
that
there
are
some
other
limits
well,
first
of
all,
in
the
tracer
right,
so
the
tracer,
quite
often
especially
with
high
volume,
cannot
really
afford
to
have
100
sampling
because
it
is
memory
it
consumes
memory,
cpu
and
network,
and
produces
not
only
overhead
but
also
can
cause
crashes
and
especially
in
spikes.
A
So
when
the
traffic
spikes
and
the
system
crashes
because
of
the
tracer
of
being
overloaded,
so
so
in
general,
I
believe
that
any
practical
application
of
open,
telemetry
with
respect
to
sampling
will
use
both
head-based
sampling
and
tail-based
sampling
because
they
serve
two
different
purposes
in
general.
Head-Based
sampling,
in
my
opinion,
is
necessary
to
protect
the
application
from
being
overloaded,
so
we
collect
as
much
data
as
we
can,
but
not
more
than
that
and
it
can
it.
It
should
be
less
sensitive
to
context
because
the
context
is
very
often
unknown.
A
Yeah
and
tail-based
sampling
is
the
more
intelligent
sampling
where
you
really
care
about
configuration
much
more
and
you
you
have
some
freedom
to
choose,
and
these
two
two
sampling
types
needs
to
be
balanced
it.
We
cannot
say
that
one
is
more
important
than
the
other,
because
they
both
are
very
important.
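(The head-based "protect the application" role described here is often implemented as a rate limiter. A sketch, assuming a simple token bucket; this is not how any particular SDK discussed in the meeting does it.)

```python
import time

class RateLimitedHeadSampler:
    """Hypothetical head sampler that caps kept spans per second.

    A token bucket: spans are kept while tokens remain and dropped
    otherwise, so a traffic spike cannot overload the tracer."""

    def __init__(self, max_per_second):
        self.capacity = max_per_second
        self.tokens = float(max_per_second)
        self.last = time.monotonic()

    def should_sample(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.capacity)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

sampler = RateLimitedHeadSampler(max_per_second=100)
decisions = [sampler.should_sample() for _ in range(1000)]
# In a fast burst of 1000 spans, roughly 100 are kept, regardless of spike size.
```

This is the "collect as much data as we can, but not more than that" behavior in its simplest form.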
C
Putting
on
your,
like,
you
know,
product
manager,
hats
for
a
moment
and
like
making
a
decision
of
like
which
id
do
you
think
it'll,
be,
I
guess
what
I
am
anticipating
is
that,
like
in
terms
of
like
implementation,
effort
or
like
before
that
design
effort,
any
necessary
design
effort,
like
the
the
sort
of
solutions
for
both
of
those
realms
and
tail
sampling,
will
not
be
worked
on
in
parallel.
C
That
is
my
presumption
and,
if
that
were
true,
then,
like
I
think
my
like
sentiment
behind
the
statement
here
was
like
assuming
they
won't
be
sort
of
worked
on
simultaneously
and
like
delivered
simultaneously
that
just
like,
as
a
matter
of
sequencing.
The
two
I
had
like
a
weekly
held
opinion
that
tail
sampling
would
be
more
valuable
to
deliver
to
people
earlier,
but
I'm
curious
if
how
how
you
two
feel
about
that
presumption?
If
maybe
it's
not
even
true,
maybe
yeah.
A
So, from that perspective, it's definitely okay. If we were to construct the overall sampling architecture, I think this would be a problem. And on a slightly different topic: in some other place, you rightly indicated that sampling management should be as automatic as possible, without manual interventions.
C
Yeah, I think, if I may: if I link this in at all, I would probably rewrite this and say, in terms of how soon we deliver our solutions for head and tail sampling, my intuition anyway (which could be not right) is: prioritize tail over head sampling, comma, but don't design anything that precludes, or worsens, the head-sampling experience. Keep that in mind; don't totally disregard it; don't take any action that worsens your ability to solve that later.
C
Do
you
think
peter?
Is
it
possible
that
you
you
mentioned?
You
know
resource
costs
of
like
within
my
own
infrastructure,
of
of
collection
as
like
a
sort
of
counterpart
to
like
downstream
systems
that
I
am
not
quite
the
owner
of
those
things
have
limits
too,
but
like
the
infrastructure
that
I
own
has
limits
too.
I
think
I
intended
these
concerns
to
be
sort
of
in
scope
of
this.
First.
Yes,.
C
Yeah, okay. I think I might move this nested point. With your suggestion, broaden this thing and then kind of move this point into that one, so that number two is broader, to include both systems that are not in my direct control, as it is currently, and, with your feedback, systems that are in my purview. Yeah, that's a better way to collect these ideas. Thank you, I'll do that.
C
And
I
will
fix
this
lifting
okay,
that's
all
I
had
so
like
I
said
my.
I
want
to
incorporate
that
feedback
and
then
I'll
stand
by
for
joshua
your
suggestions,
yeah
and
then
also,
I
think.
C
Sharing
some
material
that,
like
you,
know
more
concrete,
like
configuration
data
model
stuff
that
that
could
accommodate
some
of
this
existing.
You
know
sampler
designs,
that
people
have
today
in
these
other
systems.
So
that's
it
for
me
for
today,
thanks.
D
Thank you, Spencer. I brought one item that has come up in the specification SIG channel in the last week, and I thought I would just put it up in front of us. This was sort of a back-from-the-dead issue.
D
You
know
two
years
ago
almost
we
declared
hotels,
tracing
stable
without
solving
this
problem
and
put
it
to
do
with
the
spec
and
in
many
ways
that
to
do
led
to
the
work
I
was
interested
in
a
year
ago
on
the
propagating
probability,
sampling
and,
as
you
know,
we
started
in
a
very
sort
of
minimal
place
with
power
of
powers
of
two
which
have
this
nice
property
of
being.
D
However,
with
a
year
of
experience-
and
we
haven't
really
even
deployed
much
of
this
by
the
way,
but
with
a
year
of
experience,
I
can
tell
you
that
that
users
do
not
like
the
idea
of
being
limited
to
powers
of
two
sampling
rates,
even
though,
when
you're
talking
about
very
small
rates,
the
the
rates
get
very
close
to
each
other,
it's
it's
actually
the
the
ones
that
are
near
one
that
people
come
in
asking
for
like
I
need
75
sampling
and
for
a
long
time
I
didn't
think
that
was
a
realistic
request.
D
So this is background for this discussion. This was a very old issue, and the last time...
C
I
questioned:
what's
your
question
joshua,
what
do
you
mean
by
you
had
a
like
a
customer
quote
like
I
just
want
to
make
my
traffic
steady?
What
do
you
mean
by
steady
or
what
do
they
need.
D
What
they
meant
was
they
have
a
contract
for
so
many
spams
per
month.
I
guess
I
mean
I'm,
I
think
I'm
ex
I'm
some
other
placeholder,
but
and
and
and
what
we're
dealing
with
is
is
customer
has
too
many
spans.
They
would
like
to
bring
it
down
to
to
meet
our
quota,
and
but
it's
we're
talking
about
not
much
sampling
to
get
there.
We're
talking
about
one
two,
one
and
three
one
and
four
one
and
five
and
that's
about
it,
and
the
problem
is
that
I
can
only
do
one
and
two.
D
I
can
do
one
and
four,
but
I
can't
do
one
three
one
and
four
one
like
so
so
this
issue
had
just
sort
of
stuck
there
for
a
long
time
doing
nothing,
and
then
it
came
up
again,
so
I'm
kind
of
bringing
it
up
here.
For
the
same
reason,
the
people
who
are
bringing
this
up
come
from
the
go
sig.
They
were
pretty
unaware
of
the
work
on
probability,
sampling,
daniel
dalai
works
at
dynatrace
with
atmar
and
has
been
representing,
I
think,
kalyana.
You
know
him
from
the
w3c
group.
D
He
he
brought
the
context
back
from
the
w3c
group
to
us,
and
so
I
was
asking
questions
well,
I
I
believe
our
plan
of
action
here
was,
we
will
add
this
randomized
bit
to
the
future
trace
context
version.
If
you
have
random
bits
now,
all
of
those
caveats
about
the
powers
of
two
sampling
may
fall
away.
D
I'm
saying
may
because
I'm
asking
for
sort
of
a
design
here
and
what
I'm,
what
I'm
aware
of
is
that
we've
discussed
in
this
group
suppose
I
want
75
sampling
once
we
have
like
lots
of
random
bits
available
which
is
sort
of
a
an
assumption,
I'm
using
because
that's
the
way
the
w3c
group
went
so
suppose.
I
have
56
bits
of
random
randomness.
D
Now
I'm
able
to
create
a
56-bit
threshold
because
those
render
bits
were
there,
so
I'm
able
to
now
pick
sampling
thresholds
that
correspond
with
three
out
of
four
or
you
know,
75
percent
and
what's
being
asked
here,
is
that
when
we
find
it
there's
a
code
snippet,
oh
geez,
it
was
a
good
snippet.
D
If we set that threshold to three out of four, you will get three-out-of-four sampling here. And if we specify exactly how to turn 56 random bits into a number, then we can portably, consistently make that decision across all the clients. So what's being asked for is for us to finish specifying an algorithm that would calculate probability sampling from those bits. Now, I think this opens the door for us to kind of finish our work in the...
D
I
feel
that
I'm
I
haven't
been
to
this
meeting
for
a
few
months
now
and
I'm
feeling
a
little
rusty.
But
let
me
try
and
explain
what
I'm
saying
the
once
we
have
randomness
in
the
trace
id
now
we
don't
need
an
r
variable
anymore,
and
the
question
comes:
what's
your
p
variable
and
the
p
variable
with
spec
for
powers
of
two
we've
talked
about?
Having
other
variables
that
would
convey
other
probabilities
still
consistently
and
the
the
two
that
come
to
mind.
D
I
can
imagine
two
other
ways
to
express
sampling
and
I
think
there's
only
two
that
I
can
think
of
one
is
to
express
the
threshold
or
the
the
what's
being
called
upper
bound
here
as
a
as
56
bits.
In
other
words,
you've
got
56
bits
of
randomness.
I've
got
56
bits
of
probability
and
we
can
follow
the
rules.
In
other
words,
if
we
specify
how
to
compute
how
how
do
we
convey
a
sampling
threshold?
D
Now
we
have
56
bits
of
sampling
probability
available
and
it's
straightforward
to
do
it.
That
way,
however,
is
is
a
number
that
humans
cannot
read
and
use
very
easily,
because
75
turns
into
a
56-bit
fraction
representing
the
number
0.75,
and
I
can't
read
that,
and
I
don't
think
anyone
can
read
that.
However,
what
we
do
know
is
that
commonly
people
want
to
say
I'm
going
to
do
one
in
seven
sampling
and
the
number
seven
is
the
user
visible
number
that
actually
means
something,
not
one
over
seven.
D
If
we
were
to
convey
adjusted
count
with
a
consistent
probability,
sampling
scheme,
it's
either
going
to
be
a
threshold
or
an
adjusted
count.
I
think
those
are
the
two
options
and
I
prefer
the
adjusted
count,
which
is
the
inverse
of
the
sampling
threshold,
but
I
believe
this
issue
here
in
front
of
us
is
asking
us
to
specify
how
to
calculate.
D
...a threshold, consistently across OTel SDKs, so that we can both count adjusted counts and have non-powers-of-two sampling. That's what I'm asking for. I'm finished saying it; I don't think I said it very well. We've talked about a c variable, which is the one that's most natural to me. So if I say c equals 15, that means I'm probability sampling at one in 15, and I fell under the threshold according to the randomness.
D
That's
kind
of
what
I'm
after
I
think
c
is
the
one
that
will
be
used
or
understandable,
although
it
means
inverting
something
and
there's
loss
of
precision
like
you
know,
fractions
are
hard
anyway.
That's
all
I
had
to
say
I'm
hoping
that
this
group
has
some
context
on
that
and
has
some
ideas
about
what
to
do.
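(The precision issue with a c variable can be seen directly. Assuming the 56-bit threshold space from earlier in the discussion, only power-of-two adjusted counts round-trip exactly; one-in-seven does not. This is a numeric illustration, not a proposed encoding.)

```python
from fractions import Fraction

SCALE = 1 << 56  # assumed 56-bit threshold space

def threshold_from_adjusted_count(c):
    # One-in-c sampling: the ideal threshold is SCALE / c, which is exact
    # only when c divides SCALE, i.e. when c is a power of two.
    return SCALE // c

# One-in-seven: the stored threshold is necessarily rounded down...
t7 = threshold_from_adjusted_count(7)
# ...so its exact inverse is slightly more than 7:
assert Fraction(SCALE, t7) != 7

# A power of two, by contrast, round-trips exactly:
assert Fraction(SCALE, threshold_from_adjusted_count(8)) == 8
```

This is the "fractional inverses rather than exact fractions" problem raised in the next remark.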
D
It
would
be
4
over
3,
which
unfortunately
does
not
equal
an
integer
and
that's
kind
of
why
the
problem-
that's
kind
of
the
problem
with
with
a
c
variable,
is
that
it
calls
for
fractional
inverses
rather
than
exact.
Fractions
is
what
we
have.
C
So,
like
concretely,
would
you
imagine
that
the
like
actual
value
of
c
would
be
like
four
slash
three
or
would
it
be
like
0.75.
D
I
don't
know,
that's
actually
a
good
question,
I
kind
of
like
four
over
three,
but
I
don't
know
this
was
the
topic
I
didn't.
I
don't
come
with
the
real
answers,
I'm
just
kind
of
coming
with
a
question.
D
And
just
to
just
to
follow
this
to
the
end,
it's
basically
pretty
fresh.
I
I
don't.
I
think
that
this
is
calling
for
something
to
be
done
and
I
don't
think
anyone's
going
to
do
it
unless
we
do
it
here,
and
I
think
I'm
just.
I
think
I
think
it
might
happen
very
slowly,
but
it
will
happen
probably
and-
and
it's
probably
we're
the
ones
who
could
make
recommendations
about
c
versus
whether
thresholds
are
easier
but
unreadable.
D
Maybe this could just be a discussion topic that we're done with, and I don't have any action items at the moment, other than to present that this topic has come back.
C
Does anyone know what the circumstances are where a human would be laying eyes on this value and wishing that they could interpret it? Is it, like, a debugging scenario, kind of thing?
D
That's
what
I'm
thinking
I
I
mostly
know,
and
it
was
in
this
meeting
of
probably
three
three
quarters
ago,
nine
months
or
so,
like
someone
came
in
from
automatic
saying
we
we
just
want
to
turn
on
one
and
end
sampling,
and
why
can't
I
do
that
and-
and
we
were
like
well,
you
could,
you
know
flip
some
coins
and
use
powers
of
two
but
like
that's,
we
was
an
unsatisfactory
answer
and.
D
I
think,
through
the
thought,
experiment
at
the
time
was:
how
would
I
modify
the
collector's
current
probability
sampling
plug-in
to
do
this,
and
I
I
I
think,
you're
right.
We
don't
necessarily
need
for
or
it's
not
a
must,
and
it's
not
necessarily
required
that
the
humans
can
read
it.
It's
maybe
just
nice
to
have
or
it
it
makes
people
comfortable
to
like
have
a
thing
there
that
they
can
understand.
If
they
see
it.
B
Yeah, if we need that, I would prefer to store the threshold as it is, because with everything else you lose some precision, and yeah, that's not ideal.
D
Okay, I think that's the kind of feedback I was looking for: despite the fact that humans can read c values, that is messy and tricky because of precision issues. And maybe I haven't thought about it, but I think I could read a binary fraction at some level. I think I can read binary fractions pretty well, but only from working on exponential histograms. So, maybe.
D
Yeah, I feel like there can be a recommendation that is sort of like: if your vendor doesn't accept fractional counts, then you should not use anything other than an integer reciprocal, or something like that. But I know that's a debate, and I'm not the first person to enter this debate: Lightstep added support for fractional counts in its histograms recently, but I've seen a lot of argument over that feature.
D
I feel like that was something I wouldn't have anticipated either, especially coming from, like, heads of observability companies. I'm not going to name people, but I was surprised to get some of the feedback I got. Yeah.
C
You
said
something
which
was
like,
I
think,
referring
to
like
a
conversation
you
had
nine
months
ago.
You
said
you
know
you
can
like
do
this
probabilistic,
like
selection
of
a
power
of
two
that'll
like
over
many
trials
net
out
to
average,
and
they
were
whoever
you
were
speaking
with
they're
suggesting
to
is
unsatisfied
with
that,
and
I
wonder,
do
you
recall
was-
was
the
reason
for
their
for
their
unsatisfaction
that,
like
that,
was
more
code
for
them
to
write
or
like?
Was
it
something
deeper
than
that.
D
I
think
it
was
not
deeper
than
that.
I
think
it
was
more
of
a
like.
I
just
have
this
thing:
I've
always
done,
which
is
one
seven
sampling,
and
why
can't?
I
still
do
that
and
we
were
like
well,
you
could
flip
the
coins
and
like
one
one
and
seven
is
halfway
between
one
and
eight
and
one
and
four
like
you
know,
like
that's
the
kind
of
logic
which
is
just
like
nah,
I
don't
need
that.
I
just
want
one
seven
or
whatever.
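(The "flip the coins" workaround mentioned here can be made concrete: to average one-in-seven using only power-of-two rates, randomly choose between 1/4 and 1/8 with the right mixing weight. The weight is just algebra, not from any spec: w * 1/4 + (1 - w) * 1/8 = 1/7 gives w = 1/7.)

```python
import random

def one_in_seven_via_powers_of_two():
    # Choose the 1/4 rate with probability 1/7, otherwise the 1/8 rate.
    # Expected sampling rate: (1/7)(1/4) + (6/7)(1/8) = 1/7.
    p = 1 / 4 if random.random() < 1 / 7 else 1 / 8
    return random.random() < p

n = 200_000
kept = sum(one_in_seven_via_powers_of_two() for _ in range(n))
# kept / n converges to 1/7 (about 0.143) over many trials,
# even though each individual decision used a power-of-two rate.
```

As the exchange above shows, this nets out correctly in aggregate but was still unsatisfying to users who simply wanted an exact one-in-seven rate.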
C
So
what
do
you
think
then?
So
how
has
this
notion
of
this?
This
technique
been
floated
in
the
in
the
github
thread,
because
I
wonder
like,
and
the
reason
for
my
question
was
like
if
it
were
like
an
implementation
detail
that,
like
was
part
of
the
the
interface
to
trace
id
ratio.
Sampler
and
like
it
wasn't
extra
code
for
users
to
write
like.
Would
it
be
fine.
D
I think probably the best outcome would be: the TraceIdRatioBased sampler gets updated to be consistent with the probability-sampling propagation that we've done. Which means specifying how to make a big-endian integer, or whatever, and turn the trace ID into a number so that you can threshold it. And it'll come with caveats, saying something like: if you are configuring a sampling probability that is not an integer reciprocal, the recipient may have trouble counting it exactly. That's kind of where we would end up, I think.
D
Yeah, maybe. I'm sure someone, a perfectionist, will come in and say, "but I don't know how to express one in seven exactly, because there's no exact floating-point number that equals one in seven," or whatever. And I hope that that is just a rounding question, or something like that. But...
D
I've mentioned how my employer is more interested in compression than sampling right now, so I'm a little bit distracted by other things, but I'd like to help us solve this problem.
D
I think the answer is yes: I still like the powers-of-two spec, especially because it saves bits and it's compact, and, as a matter of getting the first thing done, it was a good place to go.
D
We
can
now
finish
specifying
trace
id
ratio
based
sampler
to
be
consistent
and
correct,
ish
or
cl
as
close
to
correct
as
like,
whatever
you
know
like
as
an
ieee
floating
point
will
give
you,
and
on
top
of
that
we
have
specified,
trace
state
machineries
that
can
propagate
adjust,
accounts
or
sampling
ratios,
and
I
mean
dylan
I'd,
say
daniel
dayla
also
kind
of
pointed
out
that
there's
discussion,
or
at
least
an
idea-
and
I
think
this
came
from
ottmar's
original
proposal
of
like
you-
could
even
begin
throwing
more
bites
into
the
trace
context
or
the
trace
had
trace
parent.
D
But
I'm
I'm
not
enthusiastic
about
that
really
anymore.
It
just
seems
like
picking
fights
where
they
don't
belong,
so
the
the
rollout,
then,
would
be
something
like
hotel
fixes
its
trace
id
ratio
based
sample
problem
problem
so
that
it's
consistent
without
probability,
counting
or
without
propagating
probability
and
independently.
D
We've
specced
out
how
to
do
the
same
sampling
with
probability,
propagation
using
p
variables,
or
maybe
a
new
variable
for
threshold.
I
think
a
new
variable
for
threshold
will
help
sell
it,
but
it
shouldn't
be
necessary
and
then
vendors
have
to
support
it
and
I'm
going
to
be
honest.
Flight
step
never
went
and
implemented
the
trace
date
thing
and
partly
because
I
think
they
were
expecting
me
to
do
it,
but
I
was
expecting
them
to
do
it
and
no
one
did
it.
So
that's
just
honesty.
D
We're kind of... I think part of the reason is that just having a consistent probability sampler is like one leaf in that if/else branch that Spencer showed us earlier, and people need the else branch too. Unless they have the else branch, they don't have useful sampling from what we've done so far, I think.
E
Okay, I got most of it. I think the last part I missed is: it's up to the different SDK owners and maintainers to go build support for that consistent probability sampler, that proposal that you already put together, right? I know, for example, there is a Java one that I think Otmar and others just did something with, and then I know...
D
Sharp
is,
and
I
wrote,
the
I
wrote
one
and
go
and-
and
I
I
mean
like
that
is
available
to
the
extent
that
someone
could
try
to
use
it.
It's
still
marked
experimental,
I
feel
like
we
need
to
see
users
carrying
before
we
pull
it
out
of
experimental
and
I've.
I
think
my
my
guess
is
that
the
perceived
cost
of
propagating
our
values
was
very
high
and
that
may
have
led
us
to
kind
of
like
pause.
D
While
we
waited
for
a
w3c
to
add
that
random
to
add
that
randomness
bit,
because
otherwise
you're
adding
cost
to
an
unsampled
trace
context,
which
is
a
little
bit
offensive
to
people
it's
only
when
we're
sampling.
Now,
if
you
have
randomness
bit,
you
only
need
to
add
cost
to
a
sample
trace,
which
is
much
better.
E
Was that an explicit concern that came from anybody, like any customers? No, right?
D
I don't think it was. I'll go back to my original hypothesis, which is that it's nice to be able to count sampled spans, but a single setting for sampling rate is not good enough for anybody, and so a more comprehensive solution is what users really need. And therefore, you know, speaking as a vendor, we've gone and done tail sampling: "send us all your data and we'll do some sampling, and if it's too much data, do one-in-two sampling, or do one-in-three sampling." It's a band-aid, and we'd like to do sampling throughout the whole stack.