From YouTube: 2022-02-10 meeting
B
Yep, I actually have some things to bring from that meeting into the discussion from the last two weeks.
A
Yeah, great. Well, I think we could probably start this meeting. I want to disclaim that I've been so focused on the metrics SDK spec work that's being finished right now that I've done basically nothing in this area for the last two weeks, which is less than I wanted. So, other than putting in my summary of what I know is happening in the Go repository, which does interest me, I just don't have much to say on anything. I'd like to let you start with your...
A
...your discussion points, and we'll go from there. I'm actually very interested in this third topic on the agenda; that's what I hope we talk about today.
B
Sure. I guess, to catch up with the last meeting, I just wanted to bring up something that came up in the collector meeting. There are general needs there, and, for my own use case...
B
It's routing spans, routing spans to different exporters. But there's a more general problem in the collector, where we need to select spans for doing some operation on them, and there's a need for a query language that is expressive and flexible. That's just the way you select, and not just spans but any telemetry: you select on telemetry, and then you do something with it in the collector.
B
So there's a proposal for a query language that's being created. The current implementation is very simple, but that's kind of where the collector wants to go, because we've found that there are a few different ways to select things, and these different ways fit different use cases, but they want to generalize and just have one way. So my thought was: okay, do we take that need to generalize and apply it outside of the collector, to the SDK?
A
Yeah, this is on point, because, as you know, there are two kinds of existing sampling configuration protocols that we know of, which are listed here: the Jaeger one and the Amazon X-Ray one.
A
Those both give you predicate functionality of some sort, and you're right that immediately someone's going to say... well, I mean, the known limitation of Jaeger's sampler is that you can only use two properties, and there's only an equality match for the service and operation, basically span name and service name. And of course, parsing something complicated, I mean, it's a SQL dialect basically, and parsing that in every language is going to be a nightmare scenario, probably. And I've seen people have...
A
...you know, a parsing service, so that you're only working with an abstract syntax tree, and even then it's still a daunting amount of work.
A
But I see your point, yeah, and I don't want to detract from that idea. My own thinking for what might be a good sort of middle ground for SDK support is to look at expanding on what we've seen already in Jaeger and X-Ray. So that's...
A
There still are questions, and this third bullet on the agenda is kind of one of them. But this does give you, like, a rule engine that's potentially simpler than having a SQL parser involved, with arbitrary predicates that are just harder to implement once you parse the SQL statement. Easier to use, of course, but... And that might mean that, if you look at X-Ray, you've got these predicate fields, and they're either empty, meaning unset, or they're set.
B
The X-Ray implementation of predicates, where they have that rules engine: how many languages does that exist in? Because I'm thinking I wouldn't want to have to re-implement this whole query language in every SDK. But what can we leverage out there? I know there's a Java client; I'm just not aware of, like, is the X-Ray client implemented in every language we care about?
A
Right. I'm guessing Amazon has done four or five of them, and they have enough resources to kind of push on all of those, and at some point they're just going to say, we don't support that language, maybe.
A
One of the topics in this setting that concerns me is the use of multiple policies, and I think this connects back with what you were saying. Having a rate limit on two combined sampling policies that are otherwise independent is somehow different from having a pure... like when there's a single predicate and I set a rate limit. That's when we get into this question below, and there's a question about expected value versus, like, worst-case value.
A
If there's a way to say there's a pool of spans, it's rate limited to 100 per second, and it matches this or that or the other thing, then I can implement it: effectively combining those three predicates into one, and then I can implement a rate limit for those three predicates that sums up to 100 per second.
A
But if there's another policy with a different rate limit, and a span matches that policy as well, now I've got a span that's sort of matching two different pools. It's going to match them both, and it's going to count in both of those, and it seems to me you end up in a position where you've got probabilities...
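A tiny sketch of the coupling problem described here, a span matching two rate-limited policies drawing budget from both pools. All names here are illustrative, not from any SDK:

```python
class RateLimitedPool:
    """Per-second sampling budget for one policy (hypothetical sketch)."""
    def __init__(self, limit_per_second):
        self.limit = limit_per_second
        self.used = 0  # spans counted against this pool in the current second

    def try_consume(self):
        if self.used < self.limit:
            self.used += 1
            return True
        return False

pool_a = RateLimitedPool(100)  # policy A: 100 spans/second
pool_b = RateLimitedPool(50)   # policy B: 50 spans/second

def sample(span, matching_pools):
    # The span draws budget from EVERY pool whose predicate it matches,
    # so it is counted twice here; the two limits are no longer independent.
    decisions = [pool.try_consume() for pool in matching_pools]
    return any(decisions)

span = {"name": "GET /checkout"}
sampled = sample(span, [pool_a, pool_b])  # assume it matches both predicates
```

After one call, both pools have spent a token for the same span, which is exactly the double-counting being discussed.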
A
To do this adaptive feedback loop that people are imagining, you need to sort of balance multiple probabilities, and it becomes a hard problem. I think I'm sketching and hand-waving right now, but yeah. I wish someone from AWS were...
B
...here to sort of weigh in on that. I'm just trying to remember those previous conversations, when they presented what they were doing in X-Ray, and I think they kind of had a handle on that, or at least a way to deal with that problem.
A
And I'm not saying we don't have a handle on that. It's a math problem, and someone can probably write down some equations.
A
And maybe we're overthinking this? Actually, I think we might be overthinking this. I would like to potentially talk about this third bullet, because I think it's an interesting one, and it might help us understand the kind of complexity we're looking at. So, I'm not sure who wrote this question, but it's been on my mind.
A
I think it sort of was discussed last time as well, which is to say: we know there are hard rate limiting algorithms, like token bucket, and those are not probabilistic. And we know hard rate limiting with probabilities can be accomplished in the tail sense, after the head decision has been made, and that leads to incompleteness most of the time.
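For reference, the token bucket mentioned here, as a minimal non-probabilistic sketch; the class and parameter names are my own, not from any spec:

```python
class TokenBucketSampler:
    """Hard (non-probabilistic) rate limit: at most `rate` spans per second
    on average, with bursts up to `burst`. A sketch, not a spec'd sampler."""
    def __init__(self, rate, burst, now=0.0):
        self.rate = rate          # tokens refilled per second
        self.capacity = burst     # maximum stored tokens
        self.tokens = burst       # start with a full bucket
        self.last = now

    def should_sample(self, now):
        # Refill proportionally to elapsed time, capped at the capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

s = TokenBucketSampler(rate=1.0, burst=2)
# Three spans arrive every second for five seconds (15 spans total):
# the initial burst admits two, then refill admits one per second.
decisions = [s.should_sample(float(sec)) for sec in range(5) for _ in range(3)]
```

Note the hard guarantee: per-second admissions never exceed the refill rate plus the stored burst, regardless of how traffic spikes.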
A
So then there's this last probability scenario that I think is very interesting, where you're going to adaptively set your probability and aim for a target, and you're going to keep updating and keep aiming for that target. And then the question is: how do I set the target, and what's my aim function? I think that's what this question is getting at here, and I think, Peter, last time you mentioned a kind of, let's say, extension of my reasoning into further and further complexity.
A
So I feel like if we can answer this question once, you can easily imagine applying it in a more refined way, by subdividing and adjusting expectations kind of on the fly. So now, let's imagine: how do you answer the basic question? I think the basic question is: we've seen the past, and we're about to set up a sampling function for the future.
A
The first bullet here says expected number of samples. Now, I've never been super strong at probability, I mean statistical learning, let's say, but I've tried to understand it, and I think my understanding of this question is that it's not very well defined yet. Like, we could model this using Bayesian reasoning.
A
You could model this using frequentist statistics if you wanted; I'm not very qualified to explain the nuances of the differences there. But when I come away from this problem and go back to a textbook, I find myself reading through the Bayes chapter saying: okay, I have a prior probability, an expectation about the... but there's a model implied. What is my model here?
A
Is it a Poisson distribution, because I've got a system that has some average arrival rate and that's all I know? Because that's sort of the standard model for a system with an arrival rate and no memory, and that's commonly what we can kind of assume here when we're doing this adaptive sampling. And then, okay, so I have some prior average, and the rate might be changing, so the model might not actually hold; the model might be bad, in other words, and the change might not be because of an extreme event.
C
It boils down to predicting the number of spans in the future, right? Because if you know how many spans arrive in the future, you can set the sampling probability for the next step, right? That's...
A
I guess my point is that there's definitely no perfect prediction; it's always going to be just as good as the model that you give it. So I'm imagining: my simplest model is a steady state. It's Poisson traffic, and the most variation I get is just, like, a very chance collision of everybody arriving at the same time. Under that model, now, I have two time periods.
A
I have the beginning of my period, when I used everything I knew about the past to make a prediction for the coming period, and then, at the end of that period, I can actually look at what happened, which is: I received some number of sampled events. Therefore, I have a distribution of what actually happened, probably.
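One textbook way to formalize this predict-then-observe loop is the standard Gamma-Poisson conjugate update; this is a generic illustration, not something decided in the meeting, and the numbers are made up:

```python
def update_rate_posterior(alpha, beta, observed_count, interval_seconds):
    """Gamma(alpha, beta) prior on the arrival rate (spans/second).
    Observing `observed_count` arrivals over `interval_seconds` of Poisson
    traffic yields a Gamma(alpha + count, beta + interval) posterior."""
    return alpha + observed_count, beta + interval_seconds

# Prior belief: roughly 100 spans/second (mean alpha/beta = 100/1).
alpha, beta = 100.0, 1.0
# One 10-second window in which 1500 spans actually arrived.
alpha, beta = update_rate_posterior(alpha, beta, 1500, 10.0)
posterior_mean_rate = alpha / beta  # (100 + 1500) / (1 + 10) spans/second
```

The posterior mean blends the prior with the observed window, which is the "use the past, then look at what happened" structure just described.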
D
I agree that we should assume we are dealing with a Poisson distribution of the number of spans. Well, at least in the systems that I'm interested in, the actors are just some activities from the internet, which, of course, we should treat as independent, and therefore it fairly well matches the assumptions for a Poisson distribution. And as long as the expected rate is fairly constant, I think the math is pretty simple.
D
I tried to do it for both of these cases, for the expected rate and for the maximum rate. Well, for the maximum rate, what we really need is to stretch the time period, the one which was set in the specification, in such a way that we are dealing with a reasonably large number of spans.
D
That gives us some numbers we can play with, and the reason for that is that, with very small numbers, the Poisson distribution looks very different from the normal distribution, but for larger numbers it is similar to the normal distribution, and it has this characteristic bell curve, and we can therefore build our confidence interval there.
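A sketch of that normal-approximation idea: for large expected counts, Poisson(λ) is close to Normal(λ, λ), so an upper confidence bound on the next window's span count gives a conservative sampling probability. The function name, the z value, and the numbers are my choices for illustration:

```python
import math

def sampling_probability(expected_spans, target_samples, z=1.645):
    """Pick p so that even an upper-tail span count (roughly the 95th
    percentile under the normal approximation to Poisson) still lands
    near the target sample count."""
    upper = expected_spans + z * math.sqrt(expected_spans)  # ~95% upper bound
    return min(1.0, target_samples / upper)

# Expect ~10000 spans next window; want at most ~100 samples from it.
p = sampling_probability(10000.0, 100.0)
```

With a low expected count the bound exceeds the target and the probability simply caps at 1.0, i.e., sample everything.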
A
Right. I would only add to that: when I go through my textbook on Bayesian reasoning, there's this maybe additional, and maybe unnecessary, step. The idea being that, before the interval, you use the past to predict your sampling rate; after that interval, there were...
A
So it tells you what the range of expected distributions is, given the uncertainty you had before and the uncertainty you still have. And then, as far as I understand, Peter, you've probably worked through this already: if you're going after an analytical solution, you can approximate the Poisson distribution as normal, or something like that, and you can go find its confidence, or its, you know, five percent region, or something like that.
A
I think another way of phrasing that, perhaps slightly different but close, is to say, and again, I'm just going through my textbook to remind myself what I know and what I don't know: when you look at that posterior predictive distribution, you may not actually have a closed-form equation for it, according to your model, because there was uncertainty on both sides. But there are algorithms, and I don't fully understand them, but expectation maximization is one that can help you find: what is the probability that I want to use to make sure that 95% of the time I don't exceed my rate, given the assumptions before and after? That's what I'm trying to say. I think I wrapped my head around this, although doing the actual equations is a little too hard for me at this point, and I haven't studied it enough.
C
The bigger problem is if you have abrupt changes in the rates, and I do not see the solution here. So the thing is that you have to adapt the sampling probability as soon as possible, but, you know, it's always a trade-off between averaging and how fast you react, so it's like exponential smoothing. So I actually have implemented a sampler which uses exponential smoothing to estimate the current rate, in order to adjust the sampling probability. It's in a pull request towards the OpenTelemetry Java contrib.
C
So if you want to, have a look. It's actually simple, because you can update the sampling rate after every span, actually, because it's not very expensive. But it also has no strict guarantees, yeah. So yeah.
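A minimal sketch of the idea just described: exponentially smoothed inter-arrival times drive the sampling probability. This is my own simplification, not the Java contrib PR, and all names are illustrative:

```python
class SmoothingRateLimitingSampler:
    """Soft rate limit via exponential smoothing of inter-arrival times.
    `target_rate` is sampled spans/second; `alpha` controls adaption speed.
    A hypothetical sketch, not the contrib implementation."""
    def __init__(self, target_rate, alpha=0.1):
        self.target_rate = target_rate
        self.alpha = alpha
        self.avg_interval = None  # smoothed seconds between spans
        self.last = None

    def should_sample(self, now, rand):
        if self.last is not None:
            dt = now - self.last
            if self.avg_interval is None:
                self.avg_interval = dt
            else:
                # Exponential smoothing of the inter-arrival time.
                self.avg_interval += self.alpha * (dt - self.avg_interval)
        self.last = now
        if not self.avg_interval:
            return True  # no rate estimate yet: sample the first span
        estimated_rate = 1.0 / self.avg_interval
        p = min(1.0, self.target_rate / estimated_rate)
        return rand < p

s = SmoothingRateLimitingSampler(target_rate=0.5)
# 1000 spans at a steady 1/second; target 0.5/second, so sample about half.
# The second argument stands in for a uniform random draw (deterministic here).
decisions = [s.should_sample(float(i), ((i * 761) % 1000) / 1000.0)
             for i in range(1000)]
```

Because the estimate updates on every span, a sudden rate change starts pulling the probability down immediately, the adaption-speed point made above.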
A
And I think that's, yeah, you know, another totally different approach, because exponential smoothing is somehow, I don't know, time averaging, right? I don't...
A
I
don't
have
a
theoretical
framework
to
explain
how
that
is
yet
another
solution
to
this
problem
and
of
course
we
think
none
of
these
are
actually
going
to
give
you
the
heart
rate
limit.
I
think
if
we
step
back
here,
you
know
the
question
that
was
being
asked
and
I
still
don't
know
who
wrote
it,
but
but
what
is
the
meaning
of
rate
limiting
sampling
because
there's
a
demand
for
it.
A
It's
even
been
written
now
into
the
specification
we
we
used
lowercase
our
lowercase
l
rate,
limiting
sampler
because
there's
no
spec
for
it
and
if
there
was
going
to
be
a
spec
for
a
rate
limited
sampler
example.
A
If there were a standard rate limiting sampler that we wanted to be a soft rate limit, one that does adaptive probability, then what would its parameters be? Is it a rate, or is it a number per time unit? Which, I mean, those are the same, except one is a single parameter and one is two parameters.
A
That seems like the simplest first step: to add something in the spec that says there's a rate limiting sampler, a soft rate limiting sampler that does adaptive probabilities, it's configured by one parameter, and there's no spec on how it's written, because you could do adaptive, you could do exponential, you could do something simple. I don't know if that's what people want.
C
The
prototype
I've
implemented
has
actually
two
parameters,
the
target
sample
spin
rate
and
the
second
one
is
actually
some
kind
of
period
in
time,
so
it's
corresponds
to
some
adaption
periods.
So
you
know.
A
Well, I think it's certainly defensible. I could imagine we write a spec entry for a new sampler that is... see, I personally don't want us to do this leaky bucket thing for a hard rate limit, and if we start proposing the probability solution, chances are no one else is going to do that other work. So I would propose that we ignore the leaky bucket, ignore the hard, non-probabilistic rate limiting approach, and...
C
Think
I
mean
the
hard
limit
is
not
possible
with
the
current
sample
interface,
which
requires
to
make
a
sampling
decision
immediately.
At
least
it's
not
possible
in
a
consistent
way.
So
right
right.
A
Yeah, I have no interest in doing that, so unless someone pushes for it, I think we're not going to see that, hopefully. All right. Well, I think, obviously, the two choices would be a single parameter that's just your rate, with the implementation doing what it wants, or two parameters, which I think, Otmar, you're proposing; that two-parameter solution can be adapted to other implementations.
A
I
think
right,
peter's
approach
or
this
sort
of
naive,
expected
sample
approach
or
something
fancy
they're
all
going
to
have
these
two
parameters,
which
are
the
window
and
the
rate
and
the
average
rate
target.
C
And
the
sampler
I
have
implemented
actually
adapts
or
may
adapt
the
sampling
rate
after
every
processed
span.
So
that
means,
I
think,
the
worst
case
scenario.
When
you
know
the
the
number
of
spends
the
rate
of
spends
incoming,
spends
or
increases
a
lot
in
a
moment
it
will
adapt.
C
Probably
quite
quickly,
I
don't
know
it
needs
still
some
mathematical
analysis
or
what
is
the
worst
case
scenario,
but
at
least
it
has
the
chance
to
adapt
quite
quickly
in
comparison
to
approaches.
You
know
you
keep
the
sampling
probability
constant
for
some
period,
then
you
recalculate
it
and
again
so,
but
it
still
needs
some
analysis.
As
I
said.
D
Yes, with your...
C
Yeah, this is exactly that parameter, but you can also interpret it... I mean, it's actually exponential smoothing, but weighted according to the time, actually. So it's...
C
Okay, okay, so it actually estimates the current rate based on the time differences between subsequent spans.
C
It weights the average, yeah, the waiting time. It calculates the average waiting time between spans using exponential smoothing, and this allows you to estimate the rate, and if...
D
Right. So, by the way, I wrote this third bullet for today's agenda, and I identified these two modes of operation: one is expected rate and the other is maximum rate.
C
I mean, modeling the input stream is quite hard, actually. I am not sure if it's so easy to find a model which works in most cases.
A
Yeah. So, Peter, your point was that the probability sampler stuff is sort of done, I think, and then all you need is, like, a periodic adjuster. And, Otmar, you say you can adjust on every span, and there are also simple approaches that adjust every period, and they have different ways of being analyzed, depending on how you're modeling them.
A
In metrics APIs, I know that's the kind of concern... I feel like, in spans, the rate of spans is not typically high enough to worry about the cost of calculating a timestamp, and some of the OTel SDKs are probably calling time twice per span anyway. But that's just a kind of feeling, because I haven't measured lately.
C
Plus, you also need the time for other approaches. I mean, at least you have to change the sampling probability from time to time, so do you have a periodic worker doing that? Or, I don't know.
A
So that question sounds like a spec-level issue. It doesn't sound like it's 100% critical or blocking, you know. You could write a sampler that calculates some timestamp, and I don't know what it really costs: 50 nanoseconds, 100 nanoseconds, maybe more.
D
No,
it's
it's
actually,
okay,
so
smart,
it's
more
because
because
we
are
dealing
with
virtual
systems.
The
individuals
in
a
real
hardware
taking
the
timestamp
is
easy.
You
just
read
the
hardware
register
and
multiply
with
by
something
and
add
some
offset
and
you're
done
in
virtual
systems,
and
we
are
practically
dealing
all
the
time
in
the
cloud
with
virtual
systems.
It
is
much
more
complex.
C
In my prototype, I'm using the JVM nanoseconds method. I don't know if this relates to an absolute time; I think it's relative to the start time of the JVM. And this is sufficient, because you only need to calculate differences, so maybe this is faster, but actually I do not know, yeah.
A
Time is tricky, I know that, so I think it's a point well taken. I would definitely support considering a proposal to add a timestamp to make this easier. I feel like we really made progress here, just in talking through this. I think the goal is that we would begin adding a probability sampler that is adaptive, based on time and based on the two parameters we talked about, whether we go with Otmar's as a standard or whether we don't specify it.
A
I,
like,
I
kind
of
like
what
omar's
saying
it's
it's
and
if
peter
you
were
on
board
with
that,
I
would
definitely
be
willing
to
to
propose
that
as
a
standard.
At
least
I
haven't
seen
or
thought
of
any
objections
to
that
here.
D
Yeah, if you could post a link to your prototype.
A
And then, group, I actually have to leave early today. So unless there's more to talk about, I feel like I'm going to drop off the call, and I'd be glad to follow up on this PR, the Java PR that Otmar linked. And if someone here wants to carry this forward faster, please do. I would welcome a spec change that adds a timestamp to the sampling parameters struct, or that adds a rate limiting sampler that has two parameters and is somehow specified. That would be helpful.
A
I'm
because
I
can't
see
myself
getting
to
it
in
the
next
two
weeks,
because
there's
tremendous
pressure
on
sdk
spec
for
metrics
and
that's
what
I've
been
working
on
so,
but
this
is
great
and
I
feel
maybe
I'll
try
and
wrangle
some
people
from
the
x-ray
group,
especially
to
talk
next
time
in
this
setting,
because
I
think
they
would
they're
the
ones
who
might
either
consume
this
or
decide
not
to
consume
this.
So
we
should
get
their
opinions
and
so
on.
B
Thank you. See ya, Josh. I do want to talk to Otmar and Peter and everyone else more. Do we have a fitness test for these proposed samplers? Like, what do we...
B
Does such a test already exist? So this is something sort of similar to how, in the trace state specification, Josh proposed a statistical test for non-bias.
C
I mean, it's possible to analyze certain scenarios. One scenario is just a constant rate of spans; another scenario is that the rate suddenly increases, so it would be described by a step function, actually. And then you can see how much the rate limit is exceeded, or maybe you are also sampling too few spans, because you're not using the whole limit, because it needs some adaption time.
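The step-function scenario described here could be scripted as a small simulation harness that measures worst-case overage after a sudden rate jump. The sampler interface, names, and the baseline sampler below are hypothetical, purely for illustration:

```python
def run_step_scenario(sampler, spans_per_second, limit):
    """`spans_per_second` is a list: entry i is the span count in second i.
    Returns the worst per-second overage of sampled spans above `limit`
    (a value <= 0 means the limit held in every second)."""
    worst = 0
    for second, n in enumerate(spans_per_second):
        sampled = sum(1 for k in range(n) if sampler(second + k / n))
        worst = max(worst, sampled)
    return worst - limit

def make_fixed_probability_sampler(period=100):
    # Deterministic stand-in for a fixed p = 1/period, with NO adaption:
    # exactly the kind of baseline an adaptive sampler should beat.
    state = {"count": 0}
    def sampler(_timestamp):
        state["count"] += 1
        return state["count"] % period == 0
    return sampler

# 1 span/second for 5 seconds, then a sudden jump to 3000 spans/second:
# a fixed 1% probability blows straight through a limit of 10 samples/second.
traffic = [1] * 5 + [3000] * 5
overage = run_step_scenario(make_fixed_probability_sampler(), traffic, limit=10)
```

Swapping in the candidate adaptive samplers and comparing their overage on the same step would give the kind of fitness test being asked about.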
A
Is there a test suite you can imagine, where, like, the maximum ramp is: I'm going to go from one query a second to 3000 queries a second, but my server just can't do more than three thousand. So what's the worst case? I go from one to three thousand in a second: how does your thing respond, what's the worst-case overage? I think that's the type of thing we'll...
A
Guess
I
don't
mean
to
calculate
it
like
on
paper.
I
mean
to
just
like
have
a
test
suite
that
like
runs
it
because
in
the
end,
I
know
that
I
can't
do
more
than
3000
per
second,
and
if
you
told
me
that
this
setting
will
practically
always
work
for
a
server
that
can
never
go
above
3000
per
second,
then
I
would
probably
take
it.
B
Yeah, because, I mean, as we talk about different algorithms, and we all have different ideas on what algorithm could work, it would be nice to just specify: well, these are the things that we actually care about in an algorithm, and we can evaluate our choices against that.
C
Yeah, actually, we have to evaluate the algorithms under the assumption that the rate of spans changes suddenly, because the steady state is not very interesting; for that, you do not need adaptive sampling, right?
B
Yeah, and I think, I mean, for my own needs: rate limiting, like, why do we care about rate limiting so much in the community?
B
And it's about cost. Like, you want to bound cost: the maximum cost on your binaries, and there might be a maximum cost on the collector, and you're just trying to predictably bound all of that. So when I'm looking at what's the best algorithm to use, it's the one that gives me the best guarantee while still being efficient, and not costing, like, the rest of the CPU, to do the actual work that you're supposed to be doing.
C
I think there's an agreement that such a sampler is needed. But, I mean, if you want to have a hard limit, then there's the other option: you do reservoir sampling. For that there's also a prototype in the PR, but this sampling actually happens in the exporter, not in the span processor, right? And this means you can only use it when... so the sampling decisions would be completely independent, and you would have to deal with partially sampled traces, actually. But, yeah, in return, you have hard sampling guarantees, actually, because...
B
Okay, so, yeah, I'll take a look at your link there, Otmar, and, yeah, maybe a test suite is something that we whip up. Now, do we have, like... Java, for example: is that just a language that we evaluate these choices on? Because it's a little bit hard if we're all implementing in different languages and we want to do benchmarking and that sort of thing.
B
If
we
I'll
pick
a
different
language
or
just
do
that
comparison
so
like
what
would
what
would
be
a
good
language
to
sort
of
do
benchmarking
on
what
was
java,
because
some
of
you
have
a
person
and
then
I'm
sure.
B
All right, all right. I'll see everyone in two weeks, then.