From YouTube: 2021-08-19 meeting
A: I worry that we have lost critical mass in this group and that there's not much use in me and you talking anymore.
A: How are you feeling about our current state? I know that in the comment from Monday — oh hey, we have another person, that's great — on Monday you were proposing we just keep tail sampling and head sampling separate. I'm trying to get the result that my employer wants, which is literally just being able to count spans. So if we dropped tail sampling entirely from the current stuff that we've been talking about, I wouldn't really care.
A: I mean, the employer wouldn't care, but I just feel that there's pent-up demand for some sort of tail sampling in a collector as soon as that's there. I don't know what I don't know: how we're going to encode that. Hey Josh, you're actually there — we could use a third party.
A: Okay, no, no, no problem! What I'm trying to say is: I feel like we've lost critical mass, and that we have a set of proposals, and variations on a proposal, that are all reasonable — and not enough decisive or opinionated people able to help us get them through. I was just recounting to Otmar how I could drop this concept of tail sampling entirely, and maybe that would get us to step one.
A: I could see taking a step forward where I just remove everything about tail sampling, and all we would put into the span would be a single integer. It would be the one that we talked about earlier this week: the logarithm of the adjusted count. So 0 means one, 1 means two, 2 means four, and so on up to 62, which means two to the sixty-second power; 63 means zero. And I just want to make sure that we have a way to distinguish unknown from zero from all the positive values.
A: So it could be the value 64 — at that point we're using seven bits instead of six — or it could be that we max out at two to the sixty-first, and then have 62 be zero and 63 be unknown. I don't know. We're also kind of implicitly dealing with the fact that protobuf zeros are hard to see: you can't tell the difference between a zero and a default. So that's one reason for having logarithms.
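A minimal sketch of the log-of-adjusted-count encoding described above. The function name and the `UNKNOWN` sentinel are hypothetical — how to represent "unknown" alongside protobuf's invisible zeros is exactly the open question here:

```python
UNKNOWN = -1  # hypothetical sentinel; the real encoding of "unknown" is undecided

def decode_adjusted_count(v):
    """Decode the single-integer head-sampling field sketched in the
    discussion: a value v in 0..62 means an adjusted count of 2**v
    (each recorded span stands for 2**v spans), 63 means an adjusted
    count of zero, and the sentinel means unknown."""
    if v == UNKNOWN:
        return None       # unknown: distinct from zero and from 1, 2, 4, ...
    if v == 63:
        return 0          # recorded, but counts for nothing
    if 0 <= v <= 62:
        return 2 ** v
    raise ValueError(f"out of range: {v}")
```

Decoding 0 gives an adjusted count of one, 62 gives two to the sixty-second, and 63 gives zero, matching the scheme above.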
C: The sampling rate of two to the power of 63... I mean, I think —
A: ...about a smaller limit — but I don't sense that that's where the objections are.
A: I did show up to the W3C meeting this week, and Daniel from your company was there and discussed it with me.
A: What we discussed in that group was that there's believed to be a lot of opposition to requiring any sort of randomness in the trace ID, and that comes — from memory — mainly from Amazon. I sort of feel like challenging it, because if there's some really great reason to be putting things into your ID, maybe we should all be doing it. Because we made the argument that if you have some random SDK that's the root, it's going to choose a trace ID.
A: Maybe we could ensure that six bits are the randomness that we need — with which that geometric distribution could be possible — and then you're only requiring one byte or so of that trace ID to be specked out. But it just felt like a bureaucratic nightmare to go to the W3C.
C: But it's easier to say that people should generate the trace ID uniformly than to say a portion of it, concatenated in, follows a geometric distribution. Yeah, I mean, of course you can.
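One way to picture the geometric distribution being discussed — a hedged sketch, not the actual spec — is that counting the leading zero bits of a uniformly random bit field yields exactly that distribution:

```python
def leading_zeros(bits, width=62):
    """Count leading zero bits in a `width`-bit uniformly random value.

    If the bits are uniform, P(result == k) = 2**-(k+1) for k < width,
    which is the geometric distribution wanted for consistent sampling:
    a span sampled at probability 2**-p is kept iff result >= p.
    """
    for k in range(width):
        if bits & (1 << (width - 1 - k)):
            return k
    return width
```

So a uniformly generated trace ID already contains the geometric value implicitly; no separate field has to be concatenated in.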
A: Yeah, I agree. So that left us with the proposal maybe most likely to succeed, which still uses tracestate. It still uses essentially four encoded bytes of real data plus syntax, and what we have in that proposal ends up around 30 bytes for what we're trying to propagate — and that includes the randomness and the probability — which is more or less exactly what is currently in OTEP 168.
A: I've got to go edit that document, but it would end up unchanged, roughly speaking. And I think, from what we saw in W3C, any other proposal is just likely to be drawn out by that group. That group seems to be very focused on minute changes of wording, in a way that led me to think it would take a long time to get a change of version through it.
A: So that leaves me thinking our best bet is still the tracestate solution, where we put both the geometric random value and the probability from the head into it, and that's pretty close to the current stuff I've written. So I guess I can go back to it, revise it one more time, and ask everyone to take another look. Again, I worry that we've lost critical mass in this group.
A: You know, no one new has come — it's just us who have been talking about it for five, six weeks now. So that was all on 168.
A: There's still this question about what to put in the span, and since there's three of you listening to me, I'd like to end this meeting soon. But I want to see if what I can do, as a viable proposal, is to just drop tail sampling from it and say that whoever wants that is going to have to work on specking it out — but we can...
C: ...kind of move, in my opinion, to span-based sampling. Wherever you have to make the sampling decision per span, it's completely independent of tail-based sampling, because there the sampling decisions are basically made on the whole trace, when it's already composed, right? So it's more similar to rate-based sampling: you have it already on the server, and then you have to decide whether you throw it away or not.
C: So this is more similar to that — it's more like rate-based sampling, and tail-based sampling is, in my opinion, quite similar.
A: Yeah, I agree, and that's why I was saying we could remove the tail sampling stuff. I guess I just want to make sure that, as soon as someone comes in and asks for it, we've sort of seen where it's going to go. So the most recent comment I put in 170 is: we carry a struct called SamplingDetail, or SampleDetail, and it has one field in it to start with, which is this integer — and the integer is the head sampling information.
A: It's a logarithm, so 64 different values, potentially — I'm including one for the unknown value. All the built-in samplers have a well-specified behavior that results in setting this value to a not-unknown value, and the unknown value is there to help us migrate from the current state of the world, where we don't have all that information available — so that at the end of this, when all the new SDKs are updated...
A: ...we would get a non-unknown adjusted count for every span from the head, and then tail sampling can be done in any way we like and can be modeled in a new way. And at least, I think, we've done some exploration of how you might encode that information. One way to do it is the same way we're doing it with head: you could encode the adjusted count from your tail sampler — and so we could just say, at least, that we've set some sort of precedent.
A: I guess I wish we had someone who really wanted to represent tail sampling in the room right now. Lightstep does a bunch of that, but it's inside of our system. So, like you were saying, we don't need to change OpenTelemetry to do tail sampling, and even if we wanted to, we might just say: here are some attributes, and this is what they mean.
A: I don't think anyone in this room is going to object to what I've been saying, so maybe there's no reason to be talking about it, but I just feel like we're in a reactionary place. If I go forward with a proposal to not do any tail sampling in the protocol for spans, the next objection will be: how do I do tail sampling? And so...
A: There are differences here. The choice of a head sampling ratio somehow crosses the system in a way that tail sampling decisions don't, and so we were working very hard just to spec out how to propagate that sampling stuff. Having tail sampling tossed in there potentially just damages the information that Otmar wants to get, I think, and maybe doesn't offer much value, right?
C: I mean, what if you want to do both? So if you're doing both — let's assume you're doing head-based sampling combined with tail-based sampling — then I wouldn't encode the adjusted count in the same field; I would keep the information separate. I want to know what the adjusted count from head-based sampling is, and what is due to tail-based sampling, because maybe I have to incorporate that in the statistical estimations somehow. You know, you've shown us how your — yeah — partial trace sampling...
C: This adjusted count for tail-based sampling should be completely independent and should also be written into a completely separate field, I think. So if people want to have tail-based sampling, then they should also, yeah, come up with a proposal for how to encode the adjusted count or probability or whatever, as they need it.
B: Yeah, there's also, in my mind, an interesting interaction between logs and metrics here — or metric exemplars — with tail-based sampling. From a metric exemplar standpoint, tail-based sampling terrifies me, because, basically, how do we know we're picking the right samples for exemplars? We have a highly limited bandwidth there — or I should say we should stay highly limited in what we select — and so what are the odds that we picked the same thing that tail-based sampling would choose for logs?
B: We already have this fun issue where, when you embed trace IDs in log entries, you lose so much context that following the logs without an actual trace is rather fun and exciting.
B: Yeah, but with tail-based sampling, right, there are known ways to kind of defer writing the span or the trace until you get that confirmation from the tail-based sampler.
A: I want to know how to encode it so that I can count it, but maybe that's just not happening. Maybe the tail sampling that's in the collector — or that's envisioned in the collector — is so primitive, or not functional enough, that it wouldn't affect us anyway, and I'm comfortable leaving that question open for later. Because when I think about tail sampling for spans, I really am thinking about just selecting spans.
A: But I'm assuming that I've collected all the traces that I might want to have, and then I'm just using those tail-selected spans as data points to show a user, or something like that.
B: The thing that we run into — so we have people who ask for tail-based sampling, but we have this issue where, effectively, a single source of truth for a trace is not going to be the case when you're a cloud provider. So, for example, our load balancer synthesizes traces for everything that flows through the load balancer, but there's this trust issue on the trace ID where we actually don't want to use the randomness that someone gives us, because there are attack vectors there, right?
B: So we actually use links of spans at that level, and we decide in our load balancer what our sampling frequency is going to be for its own traces that we store and export, right? And then there's, additionally, the user's source of truth in their system. So if you ever want to attach, say, the cloud-provided load balancer to your infrastructure, you actually have a completely different system doing sampling from the other system.
B: How do they interact? That's my concern here, but I really don't want to throw that at this group at all yet. I just want to call out that — to the extent we can understand that you might have two systems providing spans in a useful way, right, and that you can get access to these infrastructure-level spans in your products, and to the extent that the user's own spans are useful, and we join these two things in some way...
B: ...that'd be awesome. But tail-based sampling probably won't be practical for those infrastructure spans — things coming out of, like, our load balancer or our, you know, cloud SQL database-y things — because the channel for that telemetry to reach you is going to be way delayed compared to stuff that you're running in a collector in your own infrastructure, a little bit closer to where you are. Anyway, that's a random rant, but just something I was trying to understand in the context of this, right.
B: If we're thinking of a trace ID as a single trace for everything that happens that a user wants to see, I'm starting to see that not be the case, at least for Google-provided traces, just because these infrastructure traces are actually linked, not attached, right.
B: There's a spec bug from Anurag about this as well: right now the parent-based sampler just looks at the parent span and decides to sample based on what the parent did, but the open bug is — should we also sample based on links?
B: So if there's a linked span that sampled, should you also sample? And so, you know, there's a question here with head sampling, then, too: if I get a linked span that is sampled, should I sample as well? And my answer here is — I think the answer to that question for the parent-based sampler is fine, whatever, but for the ratio-based samplers and that sort of thing we're doing from the head? No, I think.
A: There's a — yeah, go ahead. One thing I would say, just to try and convey the idea behind this adjusted count that I've been trying to put forward: suppose you have a trace that you declare, yes, you're going to sample, because of the trace ID ratio, and it has a link to another trace — but that other trace you decided not to sample, because of the trace ID ratio sampler. I think that's just what you said. So here's an opportunity to record the other trace with an adjusted count of zero.
A: Somehow that's tail-sampled, to me: I wrote something that's not countable, because the logic said you shouldn't collect this for your sampling, but you want to record it as part of another thing.
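The counting rule behind that remark, sketched with a hypothetical field name: the backend estimates span totals by summing adjusted counts, so a zero-count span stays visible in the trace but adds nothing to any count.

```python
def estimated_total(spans):
    """Unbiased span-count estimate: sum the adjusted counts.

    A span recorded with adjusted count 0 (e.g. pulled in through a
    link after head sampling rejected its trace) is kept for context
    but contributes nothing to the estimate.
    """
    return sum(span["adjusted_count"] for span in spans)

spans = [
    {"name": "root",   "adjusted_count": 4},  # head-sampled at p = 1/4
    {"name": "linked", "adjusted_count": 0},  # recorded, not countable
]
# estimated_total(spans) == 4: the linked span is present but uncounted
```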
A: Giving it a count of zero is how I would address that, in my thinking. So that's why I've held out for making it possible to convey an adjusted count of zero, and it has to do mostly with tail sampling. But I think I can also just envision, like, a completely new system for writing out information about tail sampling.
A: Like, you know, maybe if we had a new data record which was a trace — sort of like, when you write out a trace, it has a sampling score as well as a bunch of spans associated with it. Like, now you're gathering spans together and giving them a group weight — I don't know, a group count. I'm a little bit confused. I don't know what people really want, and as much as I know I can spec it out, it's not clear that anyone's going to build it.
A: So I'm starting to think — just to stop talking in this meeting — of coming around with a proposal that says nothing about tail sampling, that says tail sampling is TBD and can be improved, but we need to see a lot more use cases and prototypes. The prototype I've given us is just head sampling — just propagating the randomness and probability — and then the proposal...
B: I am all on board for saying: cut scope, get the thing that people agree on done and out, yeah, and then come back to the sampling. I do think tail sampling's a big can of worms — it just needs a lot of attention, and no one's actually doing that right now, so you should definitely not get caught up in that question.
A: So if you are going to install that sampler, you've got to mark that the head probability is unknown, even though it's done by tail sampling — because if it gets to our backend and we start counting like it was sampled probabilistically, we're going to end up with bad counts. And what I want is to not get bad counts; I'd rather get unknown counts...
A: ...that I know are bad — or, you know, not applicable. I'm starting to see the next step, but I feel like I'm in a very friendly group who all agree with me, so it could be hard to predict.
A: I am willing to do the text, just to keep it moving here.
C: You mentioned leaky bucket sampling — I mean, it's the same as reservoir sampling, right? So, I mean, if you want to do that on a span level — yeah, let's assume you have one service, and basically you want to do trace ID ratio sampling, but you want to have the rate limit there on that service.
C: I mean, this does not match the current interface, because the sampler requires you to give an immediate sampling decision. So you cannot buffer the spans for a minute and then decide on them, right? So...
C: But if you want, for example, for every minute, to have 1000 samples maximum, then you need to buffer, and only after the minute has ended do you know which spans have, right, survived. Yeah.
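The per-minute cap being described can be sketched as classic reservoir sampling (Algorithm R) over a one-minute buffer. This is a hypothetical sketch, not the proposed sampler interface — which, as noted, demands an immediate decision and so cannot buffer:

```python
import random

def reservoir(spans, k, rng=None):
    """Keep at most k spans out of a stream of unknown length.

    Every span ends the period with the same inclusion probability
    k/n, so each survivor can carry an adjusted count of n/k.
    """
    rng = rng or random.Random(0)
    kept, n = [], 0
    for span in spans:
        n += 1
        if len(kept) < k:
            kept.append(span)
        else:
            j = rng.randrange(n)  # uniform in [0, n)
            if j < k:
                kept[j] = span
    adjusted_count = n / k if n > k else 1.0
    return kept, adjusted_count

kept, adj = reservoir(range(5000), 1000)
# len(kept) == 1000; each survivor represents adj == 5.0 spans
```

The decision is only known once the minute closes, which is exactly why this sits between head sampling and whole-trace tail sampling.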
C: And is it tail — I mean, tail-based sampling is basically when you already have the whole trace, right? So when it's composed. But...
A: We're kind of — I don't know — I guess I'm saying reservoir sampling, then. I mean, it's like reservoir sampling, but without probabilities. I feel like we could make a proof — I haven't, and I'm not the right person, but we could make a proof — that you can't have head sampling with variable rates and unbiasedness at the same time, or something like that. And when we talk about reservoir sampling...
A: ...what we're doing is saying we don't know how many events are going to happen in the following period, so we have to have a reservoir, and then we begin to call it tail sampling, since it's just like a recording after the fact where the probabilities were variable. Sorry, that's becoming not useful.
A: I pulled up — just on the same topic, hopefully something useful can come out of it — I pulled up this prototype PR, because people want to see how we're going to propose to do rate-limited tracing.
A: This was built off of a remark of yours, Otmar, and I understand the concepts. Like, yesterday I was on Twitch with Liz and Ted talking about sampling, and she asked the same question: what about the very simple approach of just setting your rate to get an approximate outcome? That's what this PR is, and the trouble — the only trouble I run into — is that we can't do...
A: ...I don't actually have the inclination or the math skills to do that type of reasoning, and I don't think it's important, but people will ask — that's why I mentioned it. I don't think we can say much more than "this is expected to work." This has always been one of the weaknesses of sampling, to me. Is this worth discussing? Do we need to say any more about how this rate limiting only gives us approximate rate limiting — and that if you need a hard rate limit, we're back to talking about tail sampling?
A: I mean, I sort of have low expectations, so I'm sure you'll surprise me. I'm saying low expectations because I've read books and looked for this type of thing, and I just find that you end up with no easy solutions. You end up with: you need to do the math, and maybe you need to have an input equation before you can do the math — and I'm not sure input equations are real. Like, we don't have a math formula saying how many requests we're going to have in the next second.
A: I will just remove that, and we're going to put in a message named SampleDetail with a log head adjusted count — or it's just a bare field, with no enclosing structure. I know that — I mean, it's one extra object if you're not careful; in the collector it's been compiled in, so it's a flat object anyway.
A: And I guess my only request, maybe, is: you can just say, no, this is too hard, or you can tell us in math terms why the question is ill-formed. But if you could help us in any way to understand why this seems like a good idea and why it's really hard to give guarantees at the same time, that would help. And then I still think that, at the end of the day, Anurag's question is: we have this leaky bucket in Jaeger, and you're saying we can't have it.
A: How can I have it? And then, if you have it, we're just going to propose an unknown head sampling rate. And if you want to replace this leaky bucket with a tail sampler that is probabilistic, I've shown VarOpt as one way to do that; I think you maybe have another, better way to do it. Well — now it has to be done — sorry, it has to be known as a tail sampler, because of that thing I said: there's probably a proof that says you can't do unbiased head sampling with a fixed rate.
A: By that proof, we need a tail sampler that buffers some spans and then limits them. I didn't actually make that proof, but I think that proof might be impossible.
A: Yeah, I think that's right. Okay, this is actually a good resolution. I appreciate your help, everyone.
A: If anyone wants to talk about proving that you can't have unbiased head sampling with a variable probability, let's stay on the call — actually, let's not. So I'll keep working on this, everybody. If there's any reason to stay on the call, let's talk about it.
C: ...unbiased sampling with a varying sampling rate.
A: So I'm trying to — it seems to me this is now theoretical, okay. It seems to me that the reason why we keep talking about reservoir sampling is that, if you just talk about a head sampler — where in the moment you have to make a decision about probability at the head — then over time you have no way to apply a rate limit: there's no way to fix the output of your sampler and be unbiased.
C: I mean, you have the freedom to choose the sampling rate for every span, and as long as this is independent of the trace ID, then it will be unbiased, in my opinion. So...
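That claim — per-span rates chosen independently of the sampling randomness give an unbiased count — can be checked with a small Horvitz-Thompson-style simulation. This is illustrative only; the rate function is made up:

```python
import random

def sample_and_estimate(n_spans, rate_for, rng):
    """Sample each span independently with its own probability and
    weight survivors by 1/p; the expected estimate equals n_spans as
    long as the rate does not depend on the sampling randomness."""
    estimate = 0.0
    for span in range(n_spans):
        p = rate_for(span)
        if rng.random() < p:      # the sampling decision
            estimate += 1.0 / p   # adjusted count for this span
    return estimate

rng = random.Random(42)
est = sample_and_estimate(10_000, lambda s: 0.25 if s % 2 else 0.5, rng)
# est lands close to the true count of 10_000
```

The catch, as the discussion goes on to say, is that nothing here bounds how many spans survive in a given minute — hence the tension with hard rate limits.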
C: Reservoir sampling — but this does not match at all with the sampler interface, yeah, because then, right...
C: But on a span level, actually — yeah, so on your spans. Not like tail-based sampling, where you compose the trace and then you make the sampling decision on the whole trace; whereas reservoir sampling on a span level means you just collect the spans of the same type, and then you choose, after a minute, which ones survive, basically, and which ones are sampled. So this is something in between. Yeah.
A: I realize that the term tail-based sampling has been used to describe many things, and I think we should stop using it as much. So what I just saw is that you're talking about tail sampling as a sort of: you gather a whole trace, and then you decide whether to write it out or not —
A: — essentially, all its spans. And that's maybe one type of sampling. And then there's this other type of sampling, which is a lot more like metrics exemplar selection, where you've got a bunch of spans, and they all represent a request, and over a minute you want to choose 100 of them and write them out — and now all you've done is choose spans, and there's...
C: I mean, if you're reservoir sampling spans, then it's possible to do it consistently. That's also — and for that we have a prototype, but I had no...
A: I don't think we need it — I'm looking forward to it, that's really exciting; that sounds great. Okay, I think we've discussed this to death. We're going to talk about head sampling, and we're going to drop discussion about tail sampling, because it probably means more than one thing, and we'll let that discussion happen elsewhere. Okay, I feel encouraged. I'm going to revise 170, take tail sampling out entirely, and ping you all.