From YouTube: 2021-10-28 meeting
Description
No description was provided for this meeting.
A: I was going to share my screen, but I can't share my screen and talk at the same time because of the Zoom problem, so I won't. I added one item to the agenda, but I didn't have much to speak about today.
A: I've been promising to have a draft of this probability sampling spec done for a couple of weeks, and it's really going to be done, but not today. I do have a few questions that developed as I was implementing it that I thought I could run through, especially since avar's here and it's a little early, but I'm just going to start here.
A: It is a draft that has a number of requirements written down in a style that has been prescribed recently for writing specifications in a very clear way, so I spent some time making sure it did that. The PR still doesn't have a test specification written, which Atmar and I were discussing last week; I'm going to work on that, and I have a draft of that as well. The questions that I came up with were written down here before I go there.
A: I also linked to my implementation. This is in the Go contrib repository. My goal here is that we don't mandate everyone implement this in every SDK, so I put this in the contrib repository, and hopefully that will slide. In developing this specification, or the prototype, I came up with at least three questions, which were corner cases that we hadn't necessarily discussed explicitly and that might be things we need to specify somehow. The most significant of these is a situation where you are a consistent probability sampler.
A: You are trying to make a decision and you're not the root of the trace, but there's no r value. If there's no r value, we're in a degraded state one way or another. We can either not sample; we can fall back to some other sort of decision making that's been specced already, like the legacy behavior of the TraceIdRatio sampler; or we could insert an r value. Inserting an r value has some risks to it, but it also has some potential benefits. If you are inserting an r value in more than one place that's not the root of the trace, you will end up with more than one r value, and that means that different parts of the trace would be sampled in different ways, at different consistency levels, and they will not in fact be consistent. But the alternatives seem worse to me. So my proposal is going to be that when a span is making a decision with a consistent probability sampler, doesn't see an r value, and is not a root, it will generate an r value just as a root would.
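The generate-an-r-value idea can be sketched in Go. This is an illustration only, not the spec's normative wording; the function name is mine, and I am assuming the draft's truncated geometric distribution, where P(r = i) = 2^-(i+1) for i below the cap of 62, which is exactly what counting leading zeros of a uniform 64-bit value produces:

```go
package main

import (
	"fmt"
	"math/bits"
)

// rValueFromRandom derives an r value from 64 uniformly random bits.
// A uniform uint64 has i leading zeros with probability 2^-(i+1),
// matching the truncated geometric distribution, capped at 62.
func rValueFromRandom(x uint64) int {
	r := bits.LeadingZeros64(x)
	if r > 62 {
		r = 62
	}
	return r
}

func main() {
	fmt.Println(rValueFromRandom(1 << 63)) // top bit set: r = 0
	fmt.Println(rValueFromRandom(1 << 61)) // two leading zeros: r = 2
	fmt.Println(rValueFromRandom(0))       // all zeros: capped at 62
}
```

A root sampler would feed fresh randomness in; the non-root fallback discussed here would do exactly the same thing when the incoming context carries no r.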
A: Right. Well, I think it's worth saying that that would also happen with my workaround. We can still set the sampled flag; we still get sampling at the appropriate probability, but we are acting like a root, and that means that if there are two nodes doing that, this could lead to inconsistent sampling. But if you fall back, you also get inconsistent sampling, true, so it's not clear that there's a benefit in falling back.
A: Okay, I see. I'm going to take your point and I'm going to run with it. So, let's see: if that's the case, then falling back means we can go to the TraceIdRatio sampler, which already exists, and it basically makes a probability decision without making an r value. And I want to stop now; this is actually a bigger question that did come up.
A: I have been writing the spec to say that you can have a p value without an r value, or an r value without a p value. So what is this case of a p value without an r value? That is a way that you could, say, use the legacy behavior: only allow sampling at a root, and by not propagating an r value, nobody else can make a new consistent decision, but you can propagate your own probability. So one option is to fall back to making the same decision.
A: And it's worth thinking through the implications for the next processor of a span. You may end up then seeing a p value and a sampled flag, and if you're in the same situation, you will end up in the same situation. In other words, if you're trying to make a new sampling decision, you're not going to have an r value, and you can do the same thing again. So we end up setting sampled and p, but not r.
A: If I write out that span, and I'm a vendor who decides to interpret the p value, a missing r value is something I actually don't need to know about. I only need to know about it when I'm trying to figure out whether this is a partial trace that has been sampled consistently in one part of the trace and differently in another part.
B: Sampling within the subtree, right.
A: The reason why I proposed originally to just insert the r value, as you have said, Atmar, is that there are going to be cases where you're a single node: you're not the root, but you're the ancestor of all the children in the trace, meaning that you could consistently sample that whole subtree. In that case it's going to end up being useful to you: in a case where you couldn't control the root client, but you could control a proxy very close to the entry of your system, you then would have consistent sampling.
A: I guess, going back to Atmar's question, I don't see a real downside in not generating r, except that it makes this consumer have to think about it, potentially. And if you don't insert the r, you're just leaving the same exact situation: you can have a p value but no r value as you descend down this trace.
A: Everyone who's trying to resample is going to end up in the same exact scenario. That's the difference of setting the p value with r: if you can set a p value without an r value, all the parent samplers are still going to end up doing the right thing.
A: So it is a degraded mode, for sure. I'd also imagine there are, even today, customers who are saying: I'd like to just send you sampled data. I'm not even using OTel; I just want to generate some data that has sampling applied to it. In that scenario, when you're not using the exact consistent sampling regime that we are using, you could still set your p value and still convey the sampling probability. It's just that you're not allowing a resampling to happen.
D: I feel, what's the harm in forwarding the r? Because, at least in Microsoft, it's very often the case that the trace will start with one team, and then it goes to a set of services owned by another team. They might have upgraded, because everybody upgrades at different rates. So, going forward, say the compute team will at least start doing the right thing, and within their own services they know that that part of the subtree is consistent. So what's the downside of sending r?
A: Again, I think that's my preference coming into this. Peter is the one who had mentioned an objection, based on... I mean, it creates confusion. I agree, well, there's a nuance here.
A: And in order for my stipulation to work, falling back to legacy behavior is slightly... that's not exactly what I mean. If you had configured a 33% sampler and you want to set p, now you have to do this probabilistic thing that we've been talking about: you only support powers of two, so you have to flip a coin between the lower and the higher power-of-two probability.
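The coin flip between adjacent power-of-two probabilities can be sketched. This is a non-normative illustration and the function name is mine: given a configured fraction f, pick the bracketing powers of two and the bias q such that the expected sampling probability equals f exactly.

```go
package main

import (
	"fmt"
	"math"
)

// splitProbability brackets a fraction f in (0, 1) between the adjacent
// power-of-two probabilities pLow and pHigh = 2*pLow, and returns the
// probability q of choosing pHigh, so that q*pHigh + (1-q)*pLow == f.
func splitProbability(f float64) (pLow, pHigh, q float64) {
	exp := math.Floor(math.Log2(f)) // e.g. f = 0.33 gives exp = -2
	pLow = math.Exp2(exp)           // 0.25
	pHigh = math.Exp2(exp + 1)      // 0.5
	q = (f - pLow) / (pHigh - pLow) // bias of the per-span coin flip
	return
}

func main() {
	pLow, pHigh, q := splitProbability(0.33)
	fmt.Println(pLow, pHigh, q)       // 0.25 0.5 and q near 0.32
	fmt.Println(q*pHigh + (1-q)*pLow) // expected probability, near 0.33
}
```

A sampler would flip this q-biased coin per span to choose which of the two power-of-two decisions applies, which is the extra random choice discussed below. Note that when f is already an exact power of two, q comes out zero and no coin flip is needed.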
A: Well, that is actually baked into the spec as it stands today; that was as I took feedback from the group here. The legacy OTel built-in sampler named TraceIdRatio lets you give it a fraction between zero and one, and for us to emulate that behavior with powers of two, we just have to make this probabilistic decision at the beginning. I didn't think we would, but now I've written that into the spec, because I think it's the most convenient, user-friendly way to introduce this behavior.
A: It just means you can document that if you have a power of two, you're going to have a higher-performance sampler, because you don't have to make a random choice on every span. Otherwise it's an additional random choice on every span, which I think is very minor, but it at least gives you compatibility on the interface, and it lets you not have to tell the user: oh, you can't have probabilities other than powers of two. So I kind of agree, then, with the sentiment that there's not much harm done.
A: I think those behaviors are justifiable. I don't want to write this into the spec; maybe I'll write up an issue and ask for more people to comment on GitHub. This meeting is a little bit too isolated from the people that I'll see in the wider group.
A: Okay, so this needs a new issue. The other ones I wrote down were much less significant. There are various error-handling scenarios; I'm just going to write a draft and we'll see what people think about it, because I don't think it matters semantically. And then the last one was, as I said, that based on the feedback I have modified the proposal to support all probabilities, and we have to do this coin flip for the non-power-of-two probabilities.
B: I mean, these probabilities are so unlikely that it doesn't play a role in reality, so rounding down to zero is fine with me.
A: It points out that you can't simply say "fall back" or "legacy," but you can try to do those things carefully, and I'm going to try to do those things carefully when we're not just five of us in a room. I'm pretty sure that by Tuesday, the next spec meeting, I will have a finished draft that people can look at, hopefully, and I'll call out open questions if there are any open questions left. I'm sure that naming discussions will then happen, and so on; there'll be a wider group to discuss these things.
A: I don't think we need to talk about testing; Atmar and I have been Slacking about it. I'm quite confident I can do this; I just haven't written it down as text yet. This involves a chi-squared test; it's fairly elementary once you understand the statistics.
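The chi-squared test mentioned here can be sketched. This is my illustration of the idea, not the test specification being drafted: compare observed sampled/unsampled counts against their expectations under the configured probability using the Pearson statistic.

```go
package main

import "fmt"

// chiSquared computes the Pearson chi-squared statistic
// sum over i of (observed[i] - expected[i])^2 / expected[i].
func chiSquared(observed, expected []float64) float64 {
	var stat float64
	for i := range observed {
		d := observed[i] - expected[i]
		stat += d * d / expected[i]
	}
	return stat
}

func main() {
	// 100 spans through a 50% sampler: 52 sampled, 48 dropped.
	// Expected counts are 50 and 50.
	stat := chiSquared([]float64{52, 48}, []float64{50, 50})
	// 0.16 is well under 3.84, the 5% critical value for one degree
	// of freedom, so the sampler passes this check.
	fmt.Println(stat)
}
```

A real test would repeat this over many trials and probabilities and fix a significance level up front; this only shows the statistic itself.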
A: Okay. If anyone else would like to propose an agenda item, it's your turn. I know that there's a roadmap that Carlos wants to work on, and Carlos is on the call. Carlos, would you like to speak about the roadmap, or the request from the TC?
E: So essentially, the thing is that we want to know what the expectations are. Roughly speaking, my initial impression is that the specification portion will be done and merged into the specification this quarter. I don't know if that's still the goal.
E: Yeah, that's correct, and actually that's a good reminder to make this specific clarification: it's not all of sampling, it's only probabilistic sampling.
E: The other question is, it seems we won't have to implement that many changes in the actual SDKs. But what's your impression of that? I mean, based on the fact that, first of all, the parent-based sampler will probably have to be modified to validate trace state, and second, that you will probably have to change something in the legacy TraceIdRatio sampler.
A: So, yeah, well, where this proposal stands: because it's trace state, we don't have any trouble modifying the context. If we ever get to a place where there's something going into traceparent, then the sampler API will be impacted, and fortunately that's not here yet. Now that I'm focused on just getting this thing done, I'm glad that we're not trying to fix traceparent at the same time. But you did refer to problems that are on the borderline of what I meant to cover.
A: I do mean to discuss these conditions: what happens when you mix a new-style consistent probability sampler (let's say you put that in your root, or in a proxy like we discussed earlier) with legacy parent-based samplers. They're going to do the right thing as long as the trace context is correct, but they will not validate or fix an incorrect trace context.
A: So I've been thinking through all the pitfalls of that. Let's suppose that you install one new sampler at your root, and all the other samplers in your system are legacy built-ins, the legacy built-in parent-based sampler. If nothing goes wrong, which it shouldn't, then nothing corrupts the trace state, you get the correct behavior, and you count those spans correctly. But there is a case that I'm concerned about.
A: People will need to understand it, and it is this. We agree that the legacy TraceIdRatio sampler has a to-do and a warning against using it at a non-root span, because of the problems we were aware of. But there are also an always-on sampler and an always-off sampler, which are effectively TraceIdRatio samplers at probabilities zero and one.
A: Those don't have that warning or that to-do, because they have always behaved sort of correctly. Now suppose you mix a new consistent probability sampler at your root, and then, say, a legacy parent-based sampler is in the way at the child. That's going to work fine, but then the grandchild uses always-on or always-off.
A: The grandchild using always-on turns the sampled bit on, and then its child is going to see an inconsistent trace state: sampled will be set, but the r's and the p's came from the root, which didn't set sampled. That legacy sampler knows nothing about trace state; it's going to pass through r's and p's that weren't meant for it.
A: Now, if you have another legacy parent-based sampler, you're going to record the incorrect information. This new sampler that we've specified, the consistent-probability parent-based one, is just a fall-through to the parent-based behavior after you fix the trace context. So all we're doing in that case is fixing the trace context and then doing the same exact thing the parent-based sampler would have done.
A: It does require you to understand the weird corner case that I just gave, which is that an always-on sampler or an always-off sampler, used in conjunction with a consistent probability sampler, will corrupt the trace context, and if you have a legacy parent-based sampler, you will let that pass through. I am concerned about this case, but I don't know what we can do. The proposal is...
A: ...this trace state thing, but it's starting to look less and less possible to just cleanly move past it, and so these cases are starting to worry me. Potentially, what we should do is specify a minimum amount of, I'm going to say, fixing or checking of state, so even the legacy samplers could basically be modified.
A: There's a sort of middle ground here, where the OTel built-in samplers don't change their functional behavior but do fix the broken trace state, so that the always-on sampler would remove the r value and p value. Actually, it just needs to remove the p value: if all the legacy built-in samplers remove the p value before doing their thing, we get the right legacy behavior anyway.
A: This deserves a little bit more attention, and I'm going to try to write up that risk. The outcome of that, potentially, is that we should modify the built-in samplers to do some consistency checking, and of course that's a slippery slope: it leads you towards thinking, gosh, I should just specify the built-in samplers to do this stuff. Now that I've said the APIs must be the same, it's actually a step in that direction; I'm trying to say the consistent probability...
A: So maybe I've slipped down my own slope, and the real thing is that we're just going to have to specify the trace state solution, specify the upgrade path, and build them into the built-in samplers. It's just more work. It was what I was trying to avoid, but I can see myself slipping into it because of this corner case about always-on and always-off mixing badly with the new stuff.
A: Yeah, a reasonable course. I mean, then we're saying these are optional samplers: if you use them in your whole system, you'll get the correct behavior; if you use them at your root and don't use always-on or always-off, you'll get correct behavior; otherwise, you'd better start upgrading the always-ons and always-offs before you upgrade this.
A: By next quarter, we can specify that the built-in things do the right thing, I guess. Makes sense. Perfect. I'm wary of getting ourselves into a place where we have to specify the upgrade from trace state to traceparent in the future; that may be extra engineering we're building in for ourselves, but maybe it's the way we have to go as a long-term plan. There's a Halloween costume going on over here; I'm a little distracted.
A: Thank you, yeah. I think I was probably stretching a little bit, but I do remember 135, and I remember Ludmila originally proposed that every span might want to have a uniform random number. A lot of sampling algorithms do use that approach, and it's a nice one. My understanding of the goal there was to do essentially consistent sampling, so when I said that 168 was effectively replacing it, that's definitely being generous.
A: If not, we should discuss it. It is definitely restricted to power-of-two sampling decisions, and so if you had a system that was trying to keep its legacy of allowing arbitrary probability values or thresholds, then we have not done our work. You could imagine, gosh, we could imagine making r a floating point... never mind. How's that? Maybe you could talk about what you would like to achieve, and we could see if there's a possible path.
D: So right now, what's happening is we have the AI SDK that we already have, which has a different algorithm for sampling, and as customers start upgrading to OpenTelemetry, we start giving this as an offering. So how would we get consistent sampling between them? Some of their services will still be using the AI SDK; they would slowly go and upgrade, right? So there'll be a mix of AI SDK and OpenTelemetry, also within Microsoft, and when the call comes in, we need to make sure we pick the right algorithm.
D: So the proposal earlier was that the root goes and makes the sampling decision and then passes that output to the subtree. That way, not everybody in the chain goes and recalculates what the value is. But you're right that that required a floating-point number to be propagated, which is not very ideal. And it's also possible that we can't... yeah, anyway. So that was the intention of 135: the root makes the decision, and then everybody else, if they are using the probability sampler, they'll see it.
D: So that's how the proposal was. But with the new proposal, I see that the p we pass is more for span-to-metrics, so that's good; that's definitely something needed. And then the r is the random value, which I think I need to understand better, because in the AI SDK, if we were to make a change, even if we were to accept the power-of-two compromise, will that still meet our needs in that scenario?
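The span-to-metrics use of p mentioned here can be sketched. This is my reading, stated as an assumption: if p encodes a sampling probability of 2^-p, each sampled span stands in for 2^p spans when building counts from sampled data.

```go
package main

import (
	"fmt"
	"math"
)

// adjustedCount returns the number of spans a sampled span represents,
// given its p value: sampling probability 2^-p implies a weight of 2^p.
func adjustedCount(p int) float64 {
	return math.Exp2(float64(p))
}

func main() {
	fmt.Println(adjustedCount(0)) // probability 1: counts as 1 span
	fmt.Println(adjustedCount(3)) // probability 1/8: counts as 8 spans
}
```

Summing these weights over the sampled spans gives an unbiased estimate of the original span count, which is what makes p enough for span-to-metrics pipelines without needing r.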
A: See, I never do math standing on my feet very well, but I'm going to try. We have a value: the r value is 0 to 62, and it's a logarithm. So if we exponentiate that, let's see, there's some function that will map it back into a uniform. That's what I was trying to do on my feet, and someone else would be better at saying it out loud on their feet than me. But that's what I mentioned briefly, offhand.
A: So, let's see: you take the negative and raise two to that power. Two to the negative r value gives you a probability number, correct? So, given an r value, you can turn that into a probability; it's just that it is fixed to powers of two. Now you can run the algorithm that you had, I think.
A
And
so
if
you,
so,
this
only
works
if
you're
willing
to
accept
powers
of
two
probability
thresholds,
but
then
all
of
your
consumers
will
you
can
just
take
the
r
value
negative
it
negate
it
and
then
raise
it
to
the
power
two
you'll
end
up
with
a
probability
value
which
you
can
then
use
for
your
algorithm.
I
think.
A: I'd be happy to Slack about this; I'm just terrible at talking about math on my feet. So yeah, if you want to Slack me, I can try to explain what I'm saying, but hopefully this makes some sense. I've gotten used to talking about probabilities and r values.
A: All right, let's do that in the OTel sampling channel. We can talk about this in longer form there, and hopefully that'll make progress; everyone's in there, and we can talk about it, unless we think we should keep talking here. Math on my feet is terrible. So: if you have an r value of zero, your probability is one; if you have an r value of one, your probability is point five; r value two... sorry, that is totally incorrect. That's a threshold! So, gosh. Oh god.
A
If
you
are
value
you
will
sample.
If
your
r
value
is
zero,
you
will
sample
all
spans
if
your
r
value
is
one
you
will
sample
anything
that
has
a
threshold
of
fifty
percent
or
greater.
If
you
have
an
r
value
of
two
you'll
sample
everything
that
has
an
argument:
a
threshold
of
0.25
or
greater
our
value,
3.125
or
greater.
That's
what
I'm
trying
to
say.
We
can
talk
about
this
in
the
hotel
sampling
slack
as
well.
I
think
text
will
help
I'm
in
a
zoom.
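The threshold reading above can be restated as a decision rule. This is my reading of the examples given for r equal to one, two, and three, stated as an assumption rather than a quote of the draft: a sampler whose probability is 2^-p samples a span carrying r value r exactly when 2^-p is at least the threshold 2^-r, which is the same as p <= r.

```go
package main

import "fmt"

// shouldSample applies the consistent decision rule as I read it from
// the discussion: a sampler with probability 2^-p samples a span whose
// r value is r iff 2^-p >= 2^-r, i.e. iff p <= r.
func shouldSample(p, r int) bool {
	return p <= r
}

func main() {
	fmt.Println(shouldSample(0, 2)) // probability-1 sampler always samples: true
	fmt.Println(shouldSample(1, 2)) // 50% sampler, threshold 0.25: true
	fmt.Println(shouldSample(3, 2)) // 12.5% sampler, threshold 0.25: false
}
```

Because every sampler applies the same rule to the same r, any two samplers that both sample a given span agree, which is the consistency property the whole discussion is after.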
A
I
can't
even
type
the
chat
when
it's
all
broken
right
now,
so
I'll
I'll
meet
you
in
slack,
and
we
can
talk
about
this
more
how's.
That.
A
Thank
you
all.
Hopefully
there
will
be
a
spec
out
and
we
can
talk
about
it
next
week.
See
you
next
time.