From YouTube: 2021-07-29 meeting
A: Hi everyone. Not a hugely popular meeting today.
B: Yeah, I mean, we had the time booked, so I'm open either way, if you want to talk through it.
A: Yeah, if the opportunity is given, I'll take it: I will discuss, in exchange for approvals on my PRs, or at least input that can be used.
So if no one speaks up for a Jaeger-related sampling topic today, I will hesitantly lead us forward and ask if I may quickly present the two OTEPs that I'm trying to get approved. I'll just talk; Michael, stop me if you have any questions. I'm going to present, on the record, two OTEPs in which I am trying to draft language that we can put into our specification to deal with sampling, and I've worked on this a bunch.

This first OTEP here, which I assume you can see, has been alive for a while, and I, for better or worse, wrote in a lot of depth on past information I had gathered about sampling, going all the way back to Dapper, which I was a part of, and tried to present a sort of academic foundation: first of all, what are we doing with sampling, what it means mathematically, and some examples of how we can apply that in the telemetry space.
But ultimately it comes down to a proposed specification text, and I'm on the fence as to whether all this text actually matters when it comes down to the very bottom. I'm asking you literally just to review the text at the bottom; I've reformatted it a bit, but this is it. The idea is to imagine that everything on the screen goes into our specification, and so what I've done above is try to lay out the conceptual framework: what are we doing when we sample?
We are trying to compute an adjusted count. When we have an adjusted count, that is the effective count of that item in the population represented by the sample, and if you believe all the things that have been written, said, and studied in statistics, then conveying an adjusted count is all you need to estimate true counts.
But it's a separate question as to how we convey that, so for this document I'm only trying to specify two attributes: one named sampler adjusted count and one named sampler name. It would be a required field, if you're doing probability sampling, to set the adjusted count, so that consumers of that information can count the spans that they receive. There's only one case we know of where you won't have that, which is when you're using the parent sampler.
It gives a short pseudocode algorithm for how you count spans on receipt: when you get one, you check whether it has an adjusted count; if so, you count that many. You check whether it has a sampler name; if it has a sampler name and not a count, you're out of luck, and you've got to go find some other information.
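The receipt-side counting rule described here can be sketched as follows. This is a minimal illustration, not the OTEP's normative text; the attribute keys are placeholders rather than the exact names proposed.

```python
# Minimal sketch of the receipt-side counting rule. The attribute keys
# ("sampler.adjusted_count", "sampler.name") are placeholders, not the
# exact names proposed in the OTEP.
def effective_count(attrs):
    """Return how many population spans one received span represents."""
    adjusted = attrs.get("sampler.adjusted_count")
    if adjusted is not None:
        return adjusted  # probability sampler: count that many
    if "sampler.name" in attrs:
        # Sampler name but no adjusted count: the probability is unknown,
        # so we must find some other information before counting.
        return None
    return 1  # no sampling recorded: the span counts as itself

received = [{"sampler.adjusted_count": 4}, {}, {"sampler.name": "parent"}]
countable = [effective_count(a) for a in received]
total = sum(c for c in countable if c is not None)
```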
So that's all this PR is trying to do, and I can understand how it might feel a little bit uneasy to say that all we need to know is the adjusted count, which is equivalent to knowing the probability, without knowing how it was computed.
And the point of the specification is to say that what matters is the thing you derive from it. There's the concept of expected value, which is tightly coupled with the concept of probability: if you know what probability means, you can understand what expected value is. The expected value of the adjusted count equals the true count; that's the definition. So that's what this OTEP is trying to say.
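That definition can be checked with a small simulation. This is a minimal sketch, assuming simple fixed-probability sampling where each kept span reports an adjusted count of 1/p:

```python
import random

# Minimal sketch of the definition above: keep each span with probability p,
# report adjusted count 1/p, and the estimated total is unbiased, i.e. its
# expected value equals the true count.
random.seed(7)  # fixed seed so the sketch is reproducible

true_count = 100_000
p = 0.25                 # sampling probability
adjusted_count = 1 / p   # each kept span stands for 4 spans

kept = sum(1 for _ in range(true_count) if random.random() < p)
estimate = kept * adjusted_count
# estimate lands close to true_count; E[estimate] is exactly true_count
```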
It summarizes when you do or do not need to set sampler name and sampler adjusted count, and the idea is that in almost all scenarios, when you're sampling, you shouldn't need the name, except for this legacy we're stuck with right now, where you don't know the probability. I'm trying to give us a quick overview, because I think that's the best we can do. So that was this OTEP here. The second OTEP is newer, and I'm going to confess that I've largely been influenced by Otmar Ertl.
This is the trace ID ratio based sampler. It has the word TODO; it's quite broken. It doesn't actually work for what's intended; it can only be used today to make a root sampling decision, the way this TODO is written. You see it's followed by a warning, and you see that these TODOs are not actually finished. So that is the issue here, and that is what we are trying to address with this OTEP, number 168.
It is not the easiest to follow; it's an academic work and he's a researcher, so it takes a little bit of applied thinking to figure out exactly what's going on. But if you do read through it, you will get to a place where he explains a sampling algorithm that's roughly the same as the one that I am explaining. You can see this: i greater than or equal to j implies the span is sampled, and that will show up in this OTEP.
So there are two tasks being solved here in one OTEP, and it's important to keep them separate. The first is that we have to fix the parent sampler to know the probability when you're recording a span, and the second is that we have to fix the trace ID ratio sampler to know how to make a consistent sampling decision when you're not the root.
A
These
are
connected
as
opmar
shows,
and
it's
not
obvious
at
first,
but
what
we're
proposing
to
do
here
is
first
of
all,
try
to
limit
cost
the
cost
of
propagating
this
information
is
not
zero
and
if
we
are
going
to
propagate
a
sampling
decision
that
uses
probability
the
question
of
how
much
precision
we
need
and
how
many
different
values
that
might
take-
and
this
has
a
lot
of
implications
if
you
allow
arbitrary
precision
in
your
probability
and
you
invert,
that
you
have
arbitrary
precision
in
your
adjusted
count.
First of all, what wasn't immediately obvious to me is that when you do that, you also limit the amount of randomness needed in order to make a sampling decision consistently. You may all be familiar with the rough idea of using a binary representation to compute fractions, or something like that: the number of leading zeros tells you something about the binary fraction you're looking at, or the order of magnitude of the binary fraction you're looking at.
The point of this whole exercise is that when we restrict the probabilities to powers of two, we restrict the number of bits of randomness needed significantly. I don't really want to go too heavily into this math, but I do need to go further, because it's not quite clear yet in this document. If you imagine flipping a fair coin, the number of trials before you get a success is the number we are looking for: how many fair coin flips came up tails before you got heads? That is a geometrically distributed variable; its shape parameter is one half, because each flip has a probability of one half.
We're talking about the probability distribution of the sampling decision, and when you fix your probabilities, you also limit the amount of randomness needed.
A
So
it
turns
out
that
you
can
derive
this.
If
you
have
fixed
your
probabilities
to
powers
of
two,
you
can
do
this
question.
All
you
need
to
know
is
how
many
leading
zeros
were
there,
and
you
can
compute
that
with
expected
two
bits
of
information
you're
going
to
flip
your
coin,
once
half
the
time
you're
you're
done,
you're,
gonna
flip
your
coin
again,
half
the
time
you're
done,
you're,
gonna
flip
the
coin,
again,
half
the
time
you're
done!
This
will
terminate
the
expected
number
of
two.
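The coin-flip argument can be simulated directly. A minimal sketch, assuming a fair coin, showing that the number of flips until the first success averages two:

```python
import random

# Minimal sketch of the coin-flip argument: the number of fair-coin flips
# until the first heads is geometrically distributed with p = 1/2, so its
# expected value is 1/p = 2. Counting tails before heads is the same as
# counting leading zero bits in a uniform random bitstring.
random.seed(7)

def flips_until_heads():
    flips = 0
    while True:
        flips += 1
        if random.random() < 0.5:  # heads: the process terminates
            return flips

mean_flips = sum(flips_until_heads() for _ in range(100_000)) / 100_000
# mean_flips comes out close to the expected value of 2
```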
So I tried to wrap it up by summarizing it. There are two values being encoded. We're using base 16, and we're using the trace state, which is a way for vendors (we're calling OTel a vendor here) to convey their own information about a trace. There's a lot of overhead in this approach, but ultimately we are conveying two base-16 bytes. All we need are two bytes.
The first byte says what the current probability is at the head, meaning the last participant who turned on sampling in this trace; that's their probability. You can use that for your own adjusted count if you're a parent sampler. And then, if you are a trace ID ratio sampler, the other byte is your random, consistent input that you can use to make a decision, and all you do is compare your own probability with that random number.
So, once again, you end up with this: set the sampled bit if s is less than or equal to r, which, roughly speaking, is what you saw in the paper I was just showing you. So I've, roughly speaking, just reproduced Otmar's proposal, because I think it's brilliant: it saves a lot of space, and it makes analysis tractable.
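The decision rule just described can be sketched as follows, assuming probabilities restricted to powers of two (p = 2^-s) and a per-trace random value r distributed like a count of leading zero bits. The names s and r follow the discussion; the 62-bit width and helper names are illustrative assumptions, not the proposed encoding.

```python
import random

# Sketch of the consistent power-of-two decision rule: a sampler configured
# with probability 2**-s keeps a span when s <= r, where r is a per-trace
# random value distributed like the number of leading zero bits of a
# uniform bitstring. The 62-bit width is an illustrative assumption.
random.seed(7)

MAX_R = 62

def random_r():
    """Number of leading zeros in a uniform MAX_R-bit random value."""
    bits = random.getrandbits(MAX_R)
    return MAX_R - bits.bit_length()

def sampled(s, r):
    """Keep iff s <= r; the kept span's adjusted count would be 2**s."""
    return s <= r

# Consistency: for one trace's r, every participant sampling at a larger
# probability (smaller s) keeps the trace whenever a smaller one does.
r = random_r()
decisions = [sampled(s, r) for s in range(MAX_R + 1)]
```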
So I've presented, roughly speaking, this idea, and as far as getting these in, I think it's going to take a bunch more work. There are certain alternatives.
B: Sorry, Josh, on that last topic: that would only be true if it was the root span, in terms of randomness, right? Because if you're selecting a set of traces using whatever algorithm, once it gets down to the next dependency, you may be sampling in before you sample again on that dependent service. So wouldn't the randomness only be applicable for the root span?
A: That's what the trace ID ratio is for, but whatever input is needed for that decision will be made at the beginning, so that everyone can consistently make a decision. In this meeting we've discussed three ways of doing that, and I've chosen one of them for this proposal, but the other ways are still there, and I did link to those here. Let me share my screen again. There's an issue.
A
Issue
number
12,
I
say
1826
came
out
of
this
meeting
two
weeks
ago.
I
think,
is
that
we
we
want
to
fix
the
trace
id
ratio
sampler.
That
was
two
dudes
in
the
spec
somehow,
and
these
are
several
ways
that
we
could
do
it.
I
have
you
know
my
proposal
falls.
This
latest
proposal
falls
into
the
second
category.
A
Here
is
probably
a
new
uniform
information,
but
it's
better
than
that,
as
shown
in
the
paper
and
then
if
the
trace
ids
are
actually
random,
we
can
just
count
leading
bets
or
you
know,
use
the
trace
id
as
a
number,
that's
truly
random,
and
then
the
other
idea
that's
been
proposed.
Many
times
is
to
just
assume
a
hashing
function
will
solve
this
for
us,
assuming
that
the
choice
id
is
not
truly
very
random.
So we are looking for a solution to this, and I'm not going to fight anybody if they try to convince us that trace IDs are truly random, but then we should be able to test that and put something in the spec that says every bit should be truly random, not just approximately random, whatever that means. Random enough to not collide with other trace IDs is different from random enough to do good sampling. What I'm trying to say, I think, is that I'm not sure we have the right requirements for trace ID sampling directly using those randomized bits.
B: That's what I had for today. Great. I had one more thing, Josh, on the first OTEP that you were talking about, and this is probably me being new to the community: I do see the motivation, in terms of explaining what probability theory is and providing some examples, but I'm wondering how much of that is needed. At the end of the day, there are so many different models in probability theory that one may think of, right? So I'm just wondering how much of that is important for the OTEP itself, versus just talking about the actual algorithms that we want to apply for sampling.
A
Let
me
see
if
I
can
answer
you.
I
did
talk
a
little
bit
about
variants
in
this
document,
because
I
meant
this
originally
as
an
overview
to
try
and
bring
everyone
to
agree
that
there
are
some
things
that
we
can't
agree
on.
So
my
proposal
in
this
document
is
really
that
we
is
that
we
all
agree
on
how
to
convey
the
average
or
the
expected
value
of
some
information,
and
that
does
not
say
anything
about
variance,
and
that's
pretty
intentional.
A
So
you
know
I've
said
that
that
and
there's
some
language
in
here
to
help
to
help
convince
you
scientifically
speaking,
that,
when
we're
doing
sampling
with
without
replacement
that
there's
a
general
approach
here,
that
always
works
and
is
correct.
But
it's
just
it
just
tells
us
that
we
know
our
expected
mean
or
expected
totals
are,
are
expected
to
equal,
true,
true
values.
It
doesn't
tell
us
anything
about
variance.
A
You
need
to
know
something
more
about
what
was
done
in
order
to
compute
variants,
analytically,
or
if
you
just
have
enough
data,
you
can
use,
you
know,
bootstrapping
techniques
or
something
else
to
figure
out
variants,
so
in,
for
example,
of
to
example
exemplify
your
question.
Atmar,
who
was
here
last
week,
has
given
an
algorithm
in
that
paper
for
estimating
from
partial
traces,
and
it
gives
you
some
analytical
equations
for
variance
in
order
to
use
those
equations.
A
You
need
to
know
that
trace
id
ratio,
sampling
was
used
everywhere
and
I'm
on
the
fence
about
whether
that's
actually
super
important
so
important
that
we
should
record
it
because
well
for
me,
my
vendor
actually
isn't
doing
very
much
with
variance
at
all
right
now,
and
so
that
doesn't
that
doesn't
interest
me
that
much
and
I
am
so
much
more
interested
in
getting
the
the
bare
minimum
which
is
like
we
can
convey,
averages
and
counts,
and
they
will
have
variance
depending
on
how
much
how
much
sampling
you
do.
And I feel sympathy for the question, because there's so much more in this space that can be done and said about sampling; it's just a huge topic, and we are trying to frame the question in a more tractable way here for OpenTelemetry. I think one of the things that confused me the most, and I don't know that reading this document will help you with it, is the distinction between head sampling and tail sampling.
What's cool, though, is that if you're being unbiased, then you can output adjusted counts on all the things that you collect, and all those averages come through. So I don't care what kind of sampling you do: if you give me correct, unbiased adjusted counts, I will be able to use them. That means I can give you spans, and you can resample them with any algorithm you want, as long as you put that attribute back together correctly.
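The resampling idea can be sketched as follows: a second sampling stage that keeps spans with probability q must scale surviving adjusted counts by 1/q so that downstream totals stay unbiased. The attribute key here is a placeholder, not the proposed spec name.

```python
import random

# Sketch of the resampling idea: spans arrive already carrying adjusted
# counts; a second stage that keeps each span with probability q multiplies
# surviving adjusted counts by 1/q, keeping downstream totals unbiased.
# The attribute key is a placeholder, not the proposed spec name.
random.seed(7)

def resample(spans, q):
    kept = []
    for span in spans:
        if random.random() < q:
            span = dict(span)  # copy so the input list is untouched
            span["sampler.adjusted_count"] *= 1 / q
            kept.append(span)
    return kept

# 10,000 spans, each head-sampled at 1/4 and so representing 4 spans
spans = [{"sampler.adjusted_count": 4.0} for _ in range(10_000)]
resampled = resample(spans, q=0.5)
estimated_total = sum(s["sampler.adjusted_count"] for s in resampled)
# estimated_total stays close to the true total of 40,000
```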
I will use them, make metrics from them, and it will work correctly. Contrast that with head sampling, where you're bound to make a decision before you know very much; then you have to propagate that decision, and somebody is going to receive the propagated decision. At that moment, if you want to do tail sampling, you need to know your weight, which is an adjusted count.
So the second OTEP is telling us how to propagate that weight, or adjusted count, so that you could do any kind of fancy tail sampling if you wanted. But this topic cracks open so many different problems that it's easy to get confused. If I start doing tail sampling of spans, I'm no longer doing consistent sampling, and I might not be able to use those spans for trace analysis; but just having a set of spans alone can be very powerful for analysis.
So you can imagine using sampling in many different ways: you can sample for traces, you can sample for spans, you can sample for anything you want. But what we've identified, or at least I think so, is that there's a broad need to do system-wide metrics about spans, and if every span has one adjusted count, we can do that.
The other OTEP talks about the things we might need, which are at least to propagate the probability, as well as an agreed way to make that random decision consistently. And, you know, I think that if you combine head sampling this way, you can still do any kind of tail sampling that you want, and you'll at least have the inputs you need to do that on your collection path.
I know Yuri is here listening, and he's been conversing with me on some of these OTEPs. I appreciate your feedback, Yuri, and I will keep working to address your questions. Hopefully we can reach a decision soon about how to fix the trace ID ratio sampler, and hopefully we can agree to propagate probability soon.
All right, I think we're done. I'm going to keep promoting these in the Slacks and try to get more people to read, understand, or approve them, and I will keep working on them to help everyone understand. All right: thanks, Josh; thanks, Michael; bye, everyone, see you next time.