From YouTube: 2021-11-18 meeting
A
I will say good morning, and say that I don't have any agenda other than what I see in these notes. As you know, I have an outstanding PR which many of you have reviewed. That would be my agenda, but I thank you all for joining. And, let's see, there's a reminder to review my PR here. I will say that, oh, we're looking at last week's notes; of course we are, Carlos is editing the notes.
A
Let me then ask if anyone on this call has an agenda item other than probability sampling to talk about, because I don't have anything other than probability sampling that I'm working on right now, and with the upcoming holidays I have no desire to start anything new. But I'm here to talk about other sampling topics, especially if we have them. I think for the big push we could talk about Carlos's roadmap, what we expect to come next quarter, but let me open this up and see if anyone has anything to discuss.
A
So, on the call here I see Peter and Otmar and Kalyana. All three of you have reviewed the PR about consistent probability sampling; the three of you are considered my experts, and I appreciate all your reviews at this point. I need people who have OpenTelemetry approver status to help me finish this work. So basically I have no more questions for anyone here, and unless there's something new to talk about on probability sampling, I think we could have this time back.
A
My hope is that in the near future this SIG could pause for a bit going into December. I don't know that anyone's going to bring any new topics, but I think the biggest topic outstanding was the one about configurable sampling: how do we get the clients to be able to change on the fly, or to load these sampling rules from a remote server, and that sort of thing, both of which the Jaeger project and the Amazon X-Ray project do support?
B
Josh, I had one thing I wanted to understand from this meeting. So, as you roll out the new... I mean, the updates to the specification to support your new sampler...
A
I appreciate your being here. I know you're involved in the W3C Trace Context group, so it's important to have you here. I think the answer is a little bit detailed. We did specify how to handle various inconsistencies. So, one type of inconsistency: let's say you were using a plain old TraceIdRatioBased sampler at the root, and then some other participants in your trace were using this new consistent probability sampler.
A
The best answer we could come up with is to insert an r-value, and then there's a caveat section at the end of the document that says consumers of traces should check for these inconsistencies. But I don't exactly know how that's going to change the logic that they apply, and the reason I say that is that my vendor doesn't actually plan on using partial trace estimation the way it has been covered in Otmar's paper. Okay, so I think the answer is probably...
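For reference, a minimal sketch of the consistency rule under discussion, assuming the encoding in the draft PR: r carries the trace's randomness, sampling probability is 2^-p, a sampled span is consistent iff p <= r, and p = 63 denotes a sampled span with zero adjusted count. The function names are illustrative, not from any SDK.

```go
package main

import "fmt"

// consistent reports whether a sampled span's p-value agrees with the
// trace's r-value under the draft rule: sampled implies p <= r, with
// p == 63 reserved for "sampled but carrying zero adjusted count".
func consistent(sampled bool, p, r uint8) bool {
	if !sampled {
		return true // unsampled spans carry no p-value to conflict with
	}
	return p == 63 || p <= r
}

// adjustedCount is how many spans one sampled span represents: 2^p,
// except p == 63, which means the span is not counted at all.
func adjustedCount(p uint8) uint64 {
	if p == 63 {
		return 0
	}
	return 1 << p
}

func main() {
	fmt.Println(consistent(true, 3, 10)) // true: p=3 <= r=10
	fmt.Println(adjustedCount(3))        // 8: this span stands for 8 spans
}
```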
A
So I guess there's some resolution you would want to apply, but at least the data is clear on what happened, and I suppose it depends on how you're using the data how you would remedy that situation. The reason we specced it that way is that we expect a common case where you don't control the root, but you do control some intermediate node that might be a grandparent to all of your traces; it's just not the root.
A
So in that situation you would end up with a consistent single subtree: if you're coming through a proxy, the proxy could insert an r-value. ("Got it.") And there are a few other cases of inconsistencies that we called out where we just declared that you should abandon all the data and start over, but those are just cases where somebody's acting completely incorrectly.
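A hedged sketch of the proxy behavior just described, assuming the draft's geometric r-value distribution (P(r >= k) = 2^-k, capped at 62) and its "ot" tracestate key with an "r:" sub-field; the helper names are mine:

```go
package main

import (
	"fmt"
	"math/rand"
)

// newRValue draws an r-value with the geometric distribution in the
// draft, P(r >= k) = 2^-k, so a sampler with probability 2^-p accepts
// exactly when p <= r.
func newRValue(rng *rand.Rand) int {
	r := 0
	for r < 62 && rng.Intn(2) == 0 {
		r++
	}
	return r
}

// insertR prepends an "ot" tracestate entry carrying r, giving the whole
// subtree below the proxy one shared source of randomness. A real proxy
// would merge with an existing "ot" entry rather than blindly prepend.
func insertR(tracestate string, r int) string {
	entry := fmt.Sprintf("ot=r:%d", r)
	if tracestate == "" {
		return entry
	}
	return entry + "," + tracestate
}

func main() {
	rng := rand.New(rand.NewSource(42))
	fmt.Println(insertR("vendor=abc", newRValue(rng)))
}
```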
B
Got it, thanks for the clarification. I have one follow-up question; it might be a naive question. Today, when we want to recommend a sampling mechanism to our customers, the TraceIdRatioBased sampler is what exists. So if I recommend that to my customers, I understand that you called out there is a challenge if different participants configure different sampling rates, with respect to what guarantees you get around the consistency of that.
B
That
is
one
one
warning
that
you
called
out
in
addition
is
performance
of
that
hashing
algorithm
might
be
one
of
the
other
drawbacks
which
the
new
sampling
mechanism
would
address
right.
Is
that
the
correct
understanding.
A
What was I gonna say...
A
The recommendations are to help so that you can configure that, so the customers can configure these... sorry, I'm not quite sure what you're asking.
B
No, so let me rephrase. I understand the benefits of the new consistent probability based sampling mechanisms, because they help you get the estimations and all of that, like the spans-to-metrics, which is not possible today.
B
But meanwhile, I know it will take some time for all the SDKs to... once your PR gets approved and it's part of the specification, I'm assuming there will be a few months at least before all the SDKs add support for it. So during this time, if somebody wants to start using, or if they're already using, OpenTelemetry tracing SDKs, they will probably use either the parent-based sampler or the existing TraceIdRatioBased sampler in the current SDKs.
B
As opposed to the new consistent probability based sampling mechanisms. So I was trying to understand, for that interim period, the trade-offs for people who use the existing TraceIdRatioBased samplers. Of course they wouldn't get the adjusted counts and all of that, but in addition, there is the performance of that hashing algorithm.
A
So, the way we specified this, there's something really nice, and I'll get to it, but we definitely specified compatibility with the trace sampled flag. The sampled flag governs the behavior, so that if we see an inconsistency according to the rules of the r-value and p-value, we will honor the sampled flag. That way, at least if the customer is already using a different sampler to set the sampled flag (meaning the TraceIdRatioBased sampler), they will still get complete traces.
A
As you know, tracestate is a part of the W3C trace context, and the OpenTelemetry span will just literally copy the tracestate into its span record and save it. That's already specified behavior, and it has nothing to do with sampling. So what we're able to do here is change only the one sampler that actually needs to change the behavior, at the root, for example.
A
So
if
we
swap
in
your
swap
out
your
trace
id
ratio
sampler
and
you
put
in
a
consistent
probability,
sampler,
you
don't
have
to
modify
any
of
the
parent-based
samplers
in
your
trace.
They
will
record
the
trace
date
correctly
and
then
we
can
infer
their
counts
correctly,
so
that
all
you
have
to
do
is
update
roots
and
and
nodes
that
would
have
been
doing
consistent
probability,
sampling
decisions
in
order
to
get
this
behavior.
So
I
think
that's
one
answer
and
I
think
that's
the
answer
to
your
question.
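A sketch of that migration using today's OpenTelemetry Go SDK samplers (go.opentelemetry.io/otel/sdk/trace); consistentProbabilityBased is a placeholder for whatever constructor the SDKs eventually ship once the PR merges, so this sketch falls back to an existing sampler to stay compilable:

```go
package main

import (
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// consistentProbabilityBased stands in for the consistent probability
// sampler defined by the spec PR; no SDK ships one yet.
func consistentProbabilityBased(fraction float64) sdktrace.Sampler {
	return sdktrace.TraceIDRatioBased(fraction) // stand-in only
}

func main() {
	// Before: the root decides with TraceIDRatioBased; children follow
	// their parent via ParentBased.
	_ = sdktrace.ParentBased(sdktrace.TraceIDRatioBased(0.25))

	// After: only the root sampler changes. Every ParentBased sampler
	// elsewhere in the trace is untouched; it already copies tracestate
	// into the span record, which is all the new scheme needs from it.
	_ = sdktrace.ParentBased(consistentProbabilityBased(0.25))
}
```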
B
I see. I was thinking of a scenario where each of the non-root spans is also using the TraceIdRatioBased sampler today, because if they were using only the parent-based sampler, then they wouldn't be able to control their own rates, correct?
A
Yeah, and the problem being that OpenTelemetry actually didn't recommend anyone do that, because there was no actual consistent hashing specified. So I'm partly taking advantage of the fact that there's this kind of ominous warning and a TODO in the specification saying we don't recommend doing this TraceIdRatioBased sampling at non-roots, because it hasn't been specified.
A
So,
let's
see
there's
a
few
answers.
I
could
come
up
with.
A
One reason that we specify that if you see inconsistent values you should drop your trace state: if you see an inconsistency between sampled and r and p, you should drop p. That is an example of a case where we're accommodating somebody who changed the sampled flag but doesn't know about our tracestate rules. We see an inconsistency; therefore we are going to drop the adjusted-count information.
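A small addition to the earlier sketch, showing that repair rule: on an inconsistency the sampled flag wins, the span is kept, and only the p-value (the adjusted-count information) is discarded.

```go
// repairP drops the p-value when it disagrees with the sampled flag and
// the r-value; hasP reports whether a p-value is present at all.
func repairP(sampled bool, p, r uint8, hasP bool) (uint8, bool) {
	if sampled && hasP && p != 63 && p > r {
		return 0, false // inconsistent: drop p, keep the sampled span
	}
	if !sampled && hasP {
		return 0, false // a p-value on an unsampled span is meaningless
	}
	return p, hasP
}
```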
B
If your new sampler is becoming part of the SDK, then I think that would be the obvious guidance that we can give to our customers: use this new consistent probability based sampler. I was trying to see, for this interim period until that becomes part of the SDKs, what should the recommendation be at the root and at the non-root participants? From what you said, it looks like I should not be recommending the TraceIdRatioBased sampler at non-roots.
A
Okay, I'm gonna make that note, and I might consult Otmar or someone else for formal language; as you may have already noticed, I'm not the best at writing formal language anyway. So that was that topic. I've done a lot of the talking, so let me leave room for everyone else to speak.
E
Hi
josh,
you
brought
up
you
brought
up
configurability
remote,
remote
control,
sampling
and
that
sort
of
thing,
which
is,
is
something
that
I'm
interested
in,
but
I
understand
that
yeah
holiday
seasons
are
approaching
and-
and
I
I
don't
want
to
get
in
the
way
of
of
you-
pushing
through
this,
this
spec
change.
So
I
mean,
if
you're
saying
that
it's
probably
best
that
we
start
addressing
that
in
the
new
year.
Then
then
I'm
okay
with
that
too.
E
I
am
interested
in
like
as
I'm
kind
of
a
newish
member
to
to
the
sig,
I'm
interested
to
know
how
we
can
at
least
get
the
ball
rolling
and
and
what's
the
best
way
to
start
conversations
around
well.
What
would
that?
What
would
remote
control
sampling
look
like?
Do
we
just
rip
off
jagers
and
keep
going
with
it?
Yeah.
A
This
is
everyone
wants
this
and
I
think
so
we're
all
in
agreement
on
on
the
desires.
So
there's
two
groups
that
have
put
any
ideas
out
there
and
it's
jager
sampling
as
well
as
amazon
x-ray,
has
a
protocol
that
they
use.
There's
the
other
kind
of
current
happening
here
is:
there's
a
movement
to
get
configurable
collector
management
so
that
you
could
push
out
a
new
collector
configuration
somewhere
between
being
able
to
push
out
a
new
collector
configuration
or
possibly
even
being
able
to
push
out
a
new
sdk
configuration.
A
We
at
least
can
see
the
path
towards
reconfiguring.
So
then
the
question
is:
what
is
the
protocol
that
we
come
up
with
to
configure
a
fancy?
Configurable
sampler,
and
my
I
mean
my
my
belief-
is
that
it
will
come
to
look
approximately
like
a
mixture
of
what
amazon
has
and
what
jaeger
has
with
open,
telemetry
semantic
conventions
slapped
on
top.
A
So,
for
example,
jager
doesn't
really
talk
about
what
attributes
you
can
specify
attributes,
but
the
like
the
the
operation
name
is
something
they
hard
code
and
like
in
that
in
open
telemetry.
That
would
be
called
span
name,
for
example,
so
there's
various
sort
of
slight
differences
there,
but
then
the
the
sort
of
more
technical
stuff
comes
out
as
well,
so
we
know
how
to
do
probability
sampling,
at
least
with
this
proposal.
A
We
do
and
we
know
how
to
do
and-
and
we
know
that
for
the
most
part
you
can
dynamically
change.
If
you
can
dynamically
change
probabilities,
you
can
you
can
really
imp,
you
can
really
control
what
you
are
getting
out
of
your
sampling,
but
there's
still
corner
cases.
A
People
refer
to
so
one
is
rate,
limited
sampling
and
one
is
adaptive
sampling
where
I
want
to
somehow
set
a
maximum
limit
on
on
volume
of
data,
which
is
not
quite
what
you
get
from
a
probability
sampler
and
then
we've
we've
talked
through
and
looked
at
prototypes
a
little
bit.
So
there's
questions
about
how
to
do
rate
limited
sampling,
both
opmah
and
I
have
at
least
specified
or
sort
of
prototype
stuff
in
this
space.
A
So
you
can
either
implement
a
rate
limit
by
doing
adaptivity
or
you
can
do
tail
sampling.
There's,
there's
stuff.
We
can
do
there
that
fits
well
with
probability
sampling,
but
it's
it's
a
topic
and
then
I
think
that
that's
not
even
really
what
users
want
so
much.
I
mean
there's
a
fine
technical
point
here
with
probability
sampling
can't
give
you
a
heart
rate
limit,
so
tail
sampling
can
do
you
really
need
a
heart
rate
limit
or
is
approximately
a
rate
limit?
A
Okay,
if
approximately
a
rate
limit
is
okay,
then
hopefully
we
can
move
towards
something
adaptive,
but
I
don't
know:
there's
there's
like
technical
questions
here.
I
think
most
for
the
most
part
users,
don't
even
care
about
all
that
stuff.
I
just
said
they
care
about
being
able
to
configure
regular
expressions
or
key
values
like
that's
the
syntax
and
the
structure
of
a
configuration
that
we
need
is
the
the
predicates
really
not
the
actions.
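To make the "predicates, not the actions" point concrete, here is a purely illustrative sketch of what one entry in such a configuration might carry, with Jaeger's hard-coded operation name renamed to OpenTelemetry's span name; no such schema has been agreed on:

```go
package sampling

// SamplingRule is a hypothetical configuration entry. The predicate part
// (which spans the rule matches) is the bulk of it; the action is just a
// probability, which the draft would restrict to a power of two.
type SamplingRule struct {
	SpanNamePattern string            // regexp on span name (Jaeger's "operation")
	Attributes      map[string]string // required attribute key/values
	Probability     float64           // the action
}
```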
A
So
I
know
of
several
participants,
pavel
from
jaeger
and
will
amaros
from
amazon
who
have
expressed
an
interest
in
this
and
and
for
the
most
part,
I'm
sort
of
waiting
for
someone
to
put
something
forward,
since
it
hasn't
been
something
that
that
I've
particularly
had
cycles
for,
although
it's
something
my
employer
really
really
wants
and
eventually
we'll
get
there.
If
no
one
else
does.
A
Well,
I
think
that's
about
right
if
there's
anything
less
formal
that
you'd
like
to
just
sort
of
like
banter
about
in
this
group.
We
could-
I,
I
think,
probably
an
otep
is
the
place
to
start,
and
I
I'd
love
it.
If
we
could
just
ignore
this
topic
of
of
rate
limited
sampling,
it's
sort
of
esoteric
and-
and
I
don't
think
it
really
is
what
users
are
after,
but
as
far
as
being
able
to
configure
probabilities
and
attributes
and
other
predicates
that
choose
which
sampling
policies
apply.
E
Interesting that you'd like to ignore rate-limited sampling. I mean, that's something that we're using; we're just doing rate-limited sampling decisions at the head, no secondary sampling. So that's something that I found useful, and I want to better understand how putting rate-limited sampling aside simplifies things.
A
So, I mean, this is getting into the depths of some technical debate, but I guess... we know the probability sampling rules now; we've looked at them a lot, and one of the important properties of consistent probability sampling is that it's unbiased, so you expect your estimates to be accurate. They will be imprecise, but they will be accurate.
A
So
there's
there's
no
positive
or
negative
bias
when
you
have
a
hard
rate
limit,
it's
really
difficult
to
avoid
bias,
and
that's
why
this
this
algorithm
is
hard.
A
Now
I
haven't
given
you
a
proof,
but
we've
we've
gone
over
this
a
few
times,
and
I
think
you
I'm
just
I'm
not
the
person
to
write
proofs,
but
it's
it's
basically
impossible
to
be
unbiased
if
without
having
a
second
chance
to
revisit
your
decision,
but
so
what
we,
the
simpler
approach
that
I
prefer
is
to
be
adaptive
and
just
like,
let's
say
every
second:
you
update
your
sampling
rates
to
make
sure
that
your
average
rate
stays
below
a
threshold
and
once
in
a
while,
you
will
exceed
your
rate.
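A minimal sketch of that adaptive scheme (my own illustration, not the prototypes mentioned): once per interval, re-aim the probability at the target rate; consistent sampling would additionally round the probability down to a power of two so the decision remains expressible as a p-value.

```go
package sampling

import (
	"math"
	"math/rand"
)

// adaptiveSampler keeps the expected output rate near targetPerSec by
// recomputing its probability once per interval. It can overshoot within
// an interval; that is the accepted risk of head-based adaptation.
type adaptiveSampler struct {
	targetPerSec float64
	prob         float64 // current sampling probability
	arrivals     int     // spans seen in the current interval
}

// ShouldSample makes an ordinary probabilistic decision.
func (a *adaptiveSampler) ShouldSample() bool {
	a.arrivals++
	return rand.Float64() < a.prob
}

// Tick runs once per second: aim the probability so that, at the arrival
// rate just observed, the expected output rate equals the target.
func (a *adaptiveSampler) Tick() {
	if a.arrivals > 0 {
		a.prob = math.Min(1, a.targetPerSec/float64(a.arrivals))
	}
	a.arrivals = 0
}
```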
A
That
is
a
risk
that
will
happen,
but
it's
much
more
straightforward
to
implement
those
algorithms
and
and
both
atmar,
and
I
did
prototype
something
like
that.
Then
otmer
also
could
talk
about
tail
sampling
with
consistent
probabilities,
although
that
is
an
area
where
I
I
would
just
hand
it
off
to
him.
It's
another
option.
C
I
I
I
wouldn't
call
it
tail
sampling,
it's
it's
more
or
less
a
span
sampling,
but
with
a
reservoir,
and
so
but
this
means
that
you
have
to
cache
this
bands
for
a
period
and
but
with
fixed
amount
of
memory.
Yes,
you
have
a
reservoir
of
fixed
size
and
at
the
end
of
the
period
you
report
just
those
spans
and
those
which
are
sampled
for
those
you
will
have
to
adapt
the
adjusted
count
and
before
reporting
them
further
on.
So
actually
I
was
thinking
of
maybe
implementing
a
spam
process
which
is
doing
that.
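A sketch of the reservoir span processor being described (my illustration): classic reservoir sampling over one period in a fixed-size buffer, after which each surviving span's adjusted count is scaled by seen/size before it is reported onward.

```go
package sampling

import "math/rand"

// span is a stand-in for a finished span carrying an adjusted count.
type span struct{ adjustedCount float64 }

// reservoir buffers spans for one period in fixed memory; every offered
// span ends the period in the buffer with equal probability.
type reservoir struct {
	buf  []span
	size int
	seen int
	rng  *rand.Rand
}

func (r *reservoir) offer(s span) {
	r.seen++
	if len(r.buf) < r.size {
		r.buf = append(r.buf, s)
		return
	}
	if i := r.rng.Intn(r.seen); i < r.size {
		r.buf[i] = s // replace a random survivor (Algorithm R)
	}
}

// flush ends the period: each survivor now represents seen/len(buf)
// original spans, so its adjusted count is scaled accordingly.
func (r *reservoir) flush() []span {
	if len(r.buf) == 0 {
		return nil
	}
	factor := float64(r.seen) / float64(len(r.buf))
	for i := range r.buf {
		r.buf[i].adjustedCount *= factor
	}
	out := r.buf
	r.buf, r.seen = nil, 0
	return out
}
```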
A
That sounds great. By the way, next week I will not be here; it's the Thanksgiving holiday here. The idea is that you can couple a sampler, which runs at the beginning of a span, with a processor, which runs at the end of a span, and you buffer some spans and then you don't write them all. That gives you your hard rate limit, but you had to break some traces to do it, so we're back to talking about incomplete traces, which I don't think everyone wants.
A
My
employer
is
not
very
interested
in
that,
so
for
us
we
would
rather
adapt
the
rate
at
the
head
because
it
gives
us
complete
traces.
It
just
gives
us
this
slight
risk
that
we
will
exceed
that
rate
and
then,
of
course,
you
can
apply
lots
of
improvements
to
make
sure
that
you
the
risk
to
lower
the
risk,
but
there's
still
that
risk.
A
If
it
weren't
for
the
complete
trace
issue,
then
we
could
do
reservoir
sampling.
We
wouldn't
need
this
consistency
either,
but-
and
I
and
I
that's
that's
an
area
that
I
like
to
talk
about-
although
it
doesn't
give
us
complete
traces,
so
so
other
reservoir
sampling
algorithms
can
be
applied
here,
but
they
they
also
break
completeness
and
they
and
they
and
they're
only
really
useful
for
a
single
stream
of
spans.
So
so
that's
why
it's
complicated.
A
Will
the
the
traditional
leaky
bucket
or
token
bucket
algorithm
absolutely
creates
bias,
and-
and
it
is
just
not
able
to
be
sort
of
rationalized
with
a
probability
sampling
scheme,
and
that
was
why
we
we
built
into
the
spec
that
you
see
here
a
lot
of
room
for
us
to
have
non-probability
samplers.
So
you
can
have
something
that's
doing
arbitrary
decision
making,
but
we
can't
count
those
spans.
So
that's
why
there's
this
whole
section
about
composing
samplers
so
that
you
could
have
some?
A
You
could
compose
a
sampler,
that's
non-probability
sampler,
that
might
say
one
per
minute
and
there's
bias
in
that
one
per
minute:
there's!
No!
Unless
you're
doing
probabilities
like
you
can't
count
that
one
per
minute,
but
you
can
see
it
and
you
can
still
collect
it,
but
we'll
give
it
a
zero
adjusted
count.
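A sketch of that composition rule, reusing adjustedCount from the earlier sketch; the names are illustrative:

```go
// composeDecision combines a probability sampler's verdict with a
// non-probability policy (say, a once-per-minute token bucket). A span
// kept only by the rule gets adjusted count zero: it is collected and
// visible, but never included in count estimates, since its selection
// was biased.
func composeDecision(probSampled bool, p uint8, ruleSampled bool) (keep bool, adjusted uint64) {
	switch {
	case probSampled:
		return true, adjustedCount(p) // countable: represents 2^p spans
	case ruleSampled:
		return true, 0 // zero adjusted count, as described above
	default:
		return false, 0
	}
}
```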
A
So
so
part
of
this
composite
sampler
or
configurable
sampler
that
we're
looking
at
will
have
the
option
to
combine
a
probability
sample
so
that
we
can
count
spans
with
some
other
logic
like
a
like
a
token
bucket
or
leaky
bucket
or
some
sort
of
minimum
threshold
logic.
That
says,
I
need
to
record
at
least
one
span
per
period
of
time
so
that
I
know
what
spans
are
out
there
so
that
I
can
tune
my
adaptivity
and
so
on.
A
So
that
is
another
area
where
these
configurable
samplers
can
combine
multiple
policies
and
islam
and
then
there's
a
spec
in
the
in
the
pr
about
how
to
combine
the
p
values
and
the
r
values
and
sample
decisions
so
that
we
can
do
some
of
the
things
that
are
traditionally
done,
which
I
would
call
non-probability
sampling
at
the
same
time
as
we
can
do
probability,
sampling.
E
Okay,
yeah,
I
I
see
your
concerns.
I
guess
I'd
like
to
I
mean
so
that
I
know
that
our
our
internal
solution
and
I
can
better
understand
the
soundness
of
it.
It's
something
that
I'd
like
to
share.
I
mean
not
today,
but
yeah.
I
mean
I'd
like
to
document.
Well,
here's
what
we're
doing
at
autonomic
and
and
see
what
see
what
you
folks
think
what
holes
you
can
poke
through
it.
E
It
does
certainly
rely
on
very
limited
sampling
at
the
head
and
and
maintaining
counters
when
sampling
decisions
are
made
yes
or
no.
So
it's
a
scheme.
I
I
talked
about
it
briefly
last
week
and
and
trask
he
kind
of
understood,
oh
yeah.
I
guess
I
guess
that
could
work
if
you're
just
doing
this
at
the
head,
but
I
kind
of
like
to
share
it
with
with
more
more
folks
and
and
see
what
you
think.
A
If any of the stuff I said about mixing and composing probability samplers with non-probability samplers looked curious to you, there is a section in the current spec PR about it, but there's a lot more detail in one of the other OTEPs: OTEP 170 gave a reason why you might want to combine non-probability samplers with probability samplers, so that might be familiar or ring a bell for you. That's in OTEP 170, with some more information.
A
Cool
all
right.
Well,
I
think
we've
covered
all
that
now.
Unless
someone
else
has
an
agenda
item,
I
propose
that
we
end
the
meeting.
I
will
continue
pressing
hotel,
spec
approvers
to
get
this
pr
merged
because
I
don't
think,
there's
any
more
objections
and
I
will
see
you
all
in
a
couple
weeks
and
we
can
start
talking
about
configurable
sampling.