From YouTube: 2022-01-14 meeting
A
If you are following OpenTelemetry closely, you may know that the metrics project is rushing, or working as best it can, to finish the metrics work, and I've been focused a lot on that. So I don't have anything new on sampling to tell you all about, other than to share these links and to promote my current open PR, which many of you have reviewed and approved, but which doesn't have quite enough reviewers from the spec approvers list, in my opinion. So we're waiting a little longer, although I think we can probably merge it without those approvals by next week.
A
So I came to the meeting hoping to talk to you all about anything you want to talk about, more so than about anything I want to talk about. I'll run down these issues, since I have read through them all, and the PRs.
A
If you weren't following over the break, we have merged a number of preliminary PRs related to Jaeger's remote sampling specification. The bulk of this is in this one here, and this is a first step towards describing how to do (let's click this link) a remote sampling protocol.
A
If you've been following our work on probability sampling: my goal is to make sure that all this stuff, when it's done, gets us probabilities and adjusted counts, so that these are, fairly speaking, orthogonal efforts. This is definitely a starting point for compatibility with Jaeger, and we will definitely see this coming forth. This is basically a configurable way to express which root sampler you want to use, and hopefully, with probability samplers plugged in, you'll get countable spans out of these configurable samplers.
A
This is definitely not OpenTelemetry-ified; it doesn't have a kind of OpenTelemetry feel to it. So I can see us putting something like that together, but this would work without any modifications, at some level. And if you're following the depth of the details here, you may know there's a little bit of a debate over what to do with rate-limited sampling; that has merged anyway.
A
So I just wanted to share that one. Uh-oh, I lost my list here... here we are. The others of these are two minor clarifications. As far as what may be coming ahead for this group, there are a few issues that have been filed over the last month; some of these might have been visible a week or a month ago. Let's see: the one that was most recently discussed, and might be relevant in this group, is here. If you've read and followed the probability sampling PR closely...
A
This doesn't look like news to you. This is really a question from someone who hasn't quite followed probability sampling; it's asking, how do we avoid broken traces? I mostly said we can read the document and find out what we've said about probability sampling and breaking traces. I would recommend this for those of us implementing. If you want to understand this issue, you're eventually going to see that I've made a million references to this other issue, 2179.
A
This was during the review of the big PR on probability sampling. And this is Otmar's point: we discovered that, really, this parent-based sampler is not doing great delegation for us, in the sense that, if you can choose a sampler based on whether you're sampled already, you end up having a way to introduce non-probability behavior. This is a lengthy explanation of what we might be able to do here.
A
Actually, it's not as lengthy as I remembered, but the point is that a delegating sampler ought to be its own thing, not a parent-based one. I don't think this is interesting to anyone here either; this is really a problem with the SDK specification for the sampling API, which doesn't quite have to do with probabilities.
A
This is the question, going back to the one that was referred to: we are going to want to be able to just put samplers together, where you say, like in this example, if my attributes say health check, then drop. This is hard to do with the current set of core samplers (you have to invent your own), and we would like to have a delegating sampler, I think, to make this sort of thing a little bit easier.
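A rule-based delegating sampler of the kind just described might look roughly like this. This is a hypothetical sketch, not the OpenTelemetry SDK API: the `Sampler` callable type, the `should_sample` method, and the health-check predicate are all illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

# Hypothetical sampler interface: returns True to sample, False to drop.
Sampler = Callable[[Dict[str, str]], bool]

def always_on(attributes: Dict[str, str]) -> bool:
    return True

def always_off(attributes: Dict[str, str]) -> bool:
    return False

@dataclass
class DelegatingSampler:
    """Routes each span to the first delegate whose predicate matches.

    The decision based on attributes just *chooses* a delegate sampler;
    the delegate (probabilistic or not) makes the actual decision.
    """
    rules: List[Tuple[Callable[[Dict[str, str]], bool], Sampler]]
    default: Sampler = always_on

    def should_sample(self, attributes: Dict[str, str]) -> bool:
        for predicate, sampler in self.rules:
            if predicate(attributes):
                return sampler(attributes)
        return self.default(attributes)

# "If my attributes say health check, then drop"; everything else passes.
sampler = DelegatingSampler(
    rules=[(lambda attrs: attrs.get("http.target") == "/healthz", always_off)]
)
```

A delegate could equally be a consistent probability sampler, so the routing rule and the probabilistic decision stay separate concerns.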
A
I was giving you all a summary, but I don't really think this is worth discussing, given that you all came to talk about something. So I'm going to stop talking now; I've run through all of the updates I could possibly give you, and I see that we have a new agenda item from Kalyana.
C
Yep. So the scenario I'm trying to look at is: imagine a service which wants to do sampling, but the requirement I've gotten from the customer I'm working with is, hey, for some queries, like the user-based queries, we basically want a really high sampling rate; pretty much we always want to sample those. But for the programmatic queries, which are going to come at a much higher ratio than the user-generated queries...
C
We do want to have a lower sampling rate. So there was a question about how the model is going to work, particularly with things like adjusted counts and estimations and all of that. The approach I was exploring, which I just wanted to get feedback on from the experts here, is whether we can compose multiple samplers for this scenario. I know, Josh, your PR talks about composite samplers, so I wanted to understand it.
C
This is a good use case for that, where we have, say, two samplers. Sampler number one would be a consistent probability sampler: it pretty much samples based on some probability, there is no custom logic, and let's say it decides to sample in x percent of all queries, right? That would include both the user-generated queries and the programmatic queries in this use case. And then there is a sampler two.
C
Whenever sampler one decides to sample in, then the p-value would take effect, and we can estimate metrics and all of that. But then it will be across both types of queries, both user-generated and tool-generated, because it's a consistent probability sampler and would not have any kind of custom logic.
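The estimation being referred to works because, under the consistent probability sampling proposal, each sampled span carries a p-value and represents 2**p spans of the original population (its adjusted count). A rough numeric sketch; the helper names and the simulation are illustrative assumptions, not the proposal's wire format:

```python
import random

def adjusted_count(p_value: int) -> int:
    # A span sampled with probability 2**-p carries p-value p and
    # represents 2**p spans of the original population.
    return 2 ** p_value

def estimate_total(sampled_p_values) -> int:
    # Sum the adjusted counts of the spans that survived sampling to
    # estimate how many spans existed before sampling.
    return sum(adjusted_count(p) for p in sampled_p_values)

# Simulate sampling 10,000 spans at probability 1/8 (p-value 3).
random.seed(1)
P = 3
survivors = [P for _ in range(10_000) if random.random() < 2 ** -P]
estimate = estimate_total(survivors)  # close to 10,000 in expectation
```

The estimate is unbiased across both query types here precisely because, as C says, sampler one applies the same probability to every span with no custom logic.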
A
Yes, I agree with your sampler one; that's hopefully the basic goal of probability sampling. When it comes to your sampler two, I would prefer that it be modeled as an always-on sampler, so it's a probability of one, rather than a non-probability sampler. And, as you're saying, I realize I don't have a great, quick way of explaining myself, so that's a good question. I think the non-probability sampling that we've been imagining was when you... well, there's two things happening here.
A
There's a... when you combine, for a single decision... I'm not sure what I mean by that; let's keep going. For a single decision, you combine a probability decision and a non-probability decision. Then what you've described in this text is going to happen: the probability sampler made a decision, the non-probability sampler made a decision, they disagreed, and you might end up using a zero adjusted count, or a 63 for the p-value, right?
A
There's two ways you could compose, is what I'm trying to say. There's one where you compose them both, where both samplers will always run on every span, and one where you choose sampler one if it's a tool-generated span and sampler two if it's a user-generated span. Those are different scenarios, right?
A
So I think, if you're going to have them both apply to every span, then you can call sampler one a probability sampler at twelve percent, and you can call sampler two a probability sampler at one hundred percent. And when you compose those, the rules say to take the minimum p-value, which will give you an adjusted count of one for this span, because it's always sampled.
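The minimum-p-value rule can be sketched numerically. I'm assuming the consistent probability encoding in which a sampler at probability 2**-p emits p-value p and adjusted count 2**p, so "twelve percent" is really 12.5% (p = 3) and always-on is p = 0; those specifics are my reading of the proposal, not a quote from it.

```python
def compose_p_value(p_values):
    # Composing consistent probability samplers that all ran on the
    # same span: take the minimum p-value, i.e. the most permissive
    # (highest-probability) decision wins.
    return min(p_values)

def adjusted_count(p_value: int) -> int:
    # p-value p means sampling probability 2**-p, adjusted count 2**p.
    return 2 ** p_value

# Sampler one: probability 1/8 ("twelve percent", p = 3), adjusted count 8.
# Sampler two: always-on (probability 1, p = 0), adjusted count 1.
composed = compose_p_value([3, 0])
# The composed span is always sampled and counts for exactly one span:
# adjusted_count(composed) == 1.
```

With the always-on sampler in the mix, every span survives and the minimum rule keeps the count honest: a span that could never be dropped must represent exactly one span.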
A
So your spans would either have an adjusted count of eight, or of one, in this case. And then, I think, there's a completely different arrangement which will also give you similar results (and I think we can debate or discuss why there are so many options here next), but the other arrangement is: sampler one applies to all the tool-generated queries, and gives you your adjusted counts, but it's exclusively that way; and then sampler two exclusively applies to the user-generated queries, and gives you always-on.
A
I think you get the same behavior here, and you don't actually intend for a true non-probability sampler. The reason is, the true non-probability sampler (and what does that actually mean?) was written envisioning things like "every second, I will take one span", which is not a probability decision. So I want to reserve "non-probability" for more non-probabilistic things than this, I think, is what I'm trying to say.
C
Let's see. So, just to make sure I got it right: in the first approach you described, you said sampler one would be at a lower rate, but it treats everything consistently, right? Irrespective of whether it is a user-generated query or a tool-generated query, it samples at some particular rate; and then sampler two is an always-on sampler, and in that case, that thing probably always...
D
The question from the answer... a third option: you could also have just one sampler, if you implement it yourself, and consistent sampling basically allows you to choose the sampling rate for every individual span. So you could use a different sampling rate for those...
D
For those tool-generated queries, and a different sampling rate for the others. And you could also use a custom...
A
An example for that: this comes down to the fact that the spans are immutable, and these are properties of the span that you're using to make the decision, not properties of... so they're not improbable; they are definite, or something. I don't know how... I don't feel like I have the words for this. But a decision based on one of those attributes is really just choosing your sampler at that point, so you can choose a sampler that is probabilistic or not probabilistic, but it's based on the data. So I don't have good words.
B
So it looks like the confusion comes from the fact that, in this particular setup, there is a suspicion that the whole thing is not unbiased, that it handles some spans with a different probability than the others; but that's okay. I think that when we say a probability sampler needs to be unbiased, we are limited to a certain class of spans.
C
It still falls under the bucket of consistent probability sampling. So as long as you're unbiased in that category of spans, and as long as you're unbiased in how you handle the programmatic queries, right? As long as that happens, I think it can still be considered unbiased and consistent probability sampling.
B
At the Jaeger... Jaeger's specification for sampling, it...
A
Thank you. I was trying to take a little bit of a note about what we just discussed. Yeah, I think that the word "population" was pretty important in that explanation: as long as you are consistently applying the same sampler to the population, something like that. I feel like it's too early in the morning for me to make a coherent thought right now.
A
I think that that's right. And my perspective now, as a kind of long-standing member of OTel here, is that, gosh, I really want to see us get to the point where we have these configurable samplers, like Jaeger's or something we build for OTel specifically, because that's really what users care about. And right now I'm feeling this great divide between users, who are like, where's...
A
The remote sampling configuration that I've been waiting for? And my vendor, my employer, saying, where's my count for those spans, Josh? And I'm just trying to give us the way to count spans, while everyone else is waiting for a configurable sampler. So there's this mismatch. Once we have the probability stuff in, my company, and others around us, are going to start working on more of this for OTel, but we're kind of waiting for the baseline stuff to get merged.
A
Which is another way of saying I'm kind of waiting for more reviews, and I appreciate getting feedback on what is unclear; so this is a good example, potentially this is something that's unclear. I will say there's this issue (I don't think I need to tell any of you more): we're waiting for more reviewers, and if anyone would like to help with finding reviewers, there are issues that explain the roadmap.
A
As for how I think we go next: once this merges, one of the first things we're going to do is start talking with the W3C about a randomness bit, because a lot of my own reservations about this proposal we have are that it costs a lot when you're unsampled, because the r-value has to be encoded. But I feel like we're all waiting for something and nobody's doing anything, so I just want to leave us with this.
A
If you know somebody who can approve a PR, any approval helps. And with that, I don't think we have more agenda for today. In two weeks, perhaps, we can have more items on the agenda to discuss. Maybe.
A
Going, going, gone. All right, everybody, have a great weekend. I'll see you in two weeks; hopefully we'll have more to talk about.