From YouTube: 2022-12-01 meeting
Description
cncf-opentelemetry meeting-2's Personal Meeting Room
A: I'm going to be honest and say that I'm a solo parent today. My kid is sleeping in, and when they wake up I'm probably going to have to go be a parent. Okay. I don't have much new to say, but I thought I would raise some attention by putting things in the notes right now.
A: Thank you, yeah. As I was mentioning, for now I am probably going to be called to be a parent soon, but I have been involved in this discussion, which spans two open specification issues right now. I put them in the doc. Oh, thanks for writing me. Thank you.
A: How to achieve consistent sampling across linked traces is one we were discussing for the last month, and I continue this discussion.
A: This was the summary that I wrote after our meeting last time, and that drove some discussion here. I would like to land briefly on a discussion about this comment that I made here. It's tied to this other thread that I put in the notes, which is new, and there's a response.
A: I think this is somebody who is asking for something legitimate, but maybe not in the same terms that we understand. It looks like a request for tail sampling — better tail sampling — and we might actually connect it with this discussion about non-power-of-two sampling, which I think is one of the gating blockers for getting good tail sampling to work inside of the Collector. But I'll bring it back now to that topic; sorry, I just changed the screen.
A: That topic is looking for a way, basically, to record spans when they're not sampled, so that you can make a decision later, which is not the same as what we've described as a zero adjusted count. It's basically saying: I want to record myself and let somebody else do it, which is, I think, the same as not sampling at all — but he's trying to avoid setting the sampled flag. Essentially, synthesizing something out of these two issues, I find myself wanting essentially a new sampler API.
A: The sampler API today has three status codes, and they force you to make the decision about sampling up front, when the span starts, but they give you two other choices. One is to not sample the span and not have anything recorded, even in memory — a no-op span. And then there's an intermediate choice today, called record-only, which is not sampled and not a no-op. And the point of a non-sampled, non-no-op span —
A: The point of these record-only spans is that you can build, say, a servlet in memory — zPages is the typical example — to access those live recorded, but unsampled, spans. And from the discussion about sampled span links that started a month ago, and this discussion together, we find ourselves now, with this comment that I've written, imagining, I think, three new states, or at least two. I've described two of these states, and Jaeger — via Yuri's comment here — described a third, and those three new states are interesting.
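For reference, the three decisions being described map onto the sampler decision enum in today's SDKs; a minimal Python sketch (the enum names follow the OpenTelemetry SDK's sampling decision, and the helper is illustrative):

```python
from enum import Enum

class Decision(Enum):
    """Today's sampler API: the decision is forced when the span starts."""
    DROP = 0               # no-op span: nothing is recorded, even in memory
    RECORD_ONLY = 1        # recorded in memory (e.g. for zPages) but not exported
    RECORD_AND_SAMPLE = 2  # recorded, sampled flag set, exported

def is_exported(decision: Decision) -> bool:
    # In today's API, exporting implies sampling: there is no state that
    # exports a span without also setting the sampled flag.
    return decision is Decision.RECORD_AND_SAMPLE
```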
A: I think that, taken together, this has the size and scope of essentially a prototype — a project to demonstrate a complete new sampler API, which might help us. And the two new states that I'm describing here are these.
A: This is a deferred sampling decision and a deferred export decision. I'm trying to make the decision to sample independent from the decision to export, which is not easy to do today without building a sampler and an exporter — or a sampler or a processor — that corrects the gap: essentially, the fact that you can't have a span that is exported without also having it sampled in today's sampler API.
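The two deferred states could be sketched as an extension of that enum; the new state names here are invented for illustration and are not part of any spec:

```python
from enum import Enum, auto

class ExtendedDecision(Enum):
    # Existing states
    DROP = auto()
    RECORD_ONLY = auto()
    RECORD_AND_SAMPLE = auto()
    # Hypothetical new states under discussion:
    DEFERRED_SAMPLING = auto()  # sampling undecided until the first child context
    EXPORT_UNSAMPLED = auto()   # export the span without setting the sampled flag

def is_sampled(d: ExtendedDecision) -> bool:
    return d is ExtendedDecision.RECORD_AND_SAMPLE

def is_exported(d: ExtendedDecision) -> bool:
    # Export no longer implies sampled: EXPORT_UNSAMPLED closes the gap.
    return d in (ExtendedDecision.RECORD_AND_SAMPLE,
                 ExtendedDecision.EXPORT_UNSAMPLED)
```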
A: So, of the new status codes, the one Jaeger describes is: I haven't started any children yet; I may or may not be sampled; I'm going to wait, because I expect more attributes to arrive before my first child. I gather that that's a somewhat common pattern. I can imagine it in a proxy scenario, where you start the span and then parse the payload.
B: To allow somebody to say: I don't even need to create a child, because this trace is not being sampled, so we can just ignore it — we don't have to do the work of creating a child.
A: No, what we're saying is that we can do the work; we can make the decision to sample late. I've already started the span; I just haven't started any child context yet. So being in an undecided state is permissible before you start your first child, because you haven't sent a context anywhere. But that's actually not the point I was looking to make; that's just one that Yuri pointed to on this thread.
A: The one that I wanted, which was originally phrased as solving our sampling-and-links problem, is this: basically, I'm trying to construct a scenario where there's another span out there. I'm building myself a new span, and I'm going to link to another span. So I'm creating a span, and I'm linking to an existing span. The existing span is sampled, and I am deciding not to sample myself. I need a way to record this span to complete a link that someone else is trying to look at.
A: In other words, someone else has a trace out there, and I'm trying to sample my span to record extra information about that trace, because there's a linkage — but I'm going to record myself as unsampled. So that's the third state that I called — using terminology a little bit consistent with Yuri's — a record situation, where you've got a real span object and you're deferring the export decision. You don't know yet; you're waiting to see if any span links arrive. Say span links arrive to a sampled context.
A: Now you definitely want to record that span, but you may have already had your first child; the moment to decide your sampling fate may have already passed at that point. So at this point you're just saying: I'd like to record this span simply to help complete another trace, which has a linkage, and by recording this span I'm going to record that linkage. So I need to record it — and, for probability sampling, I need to record it with zero adjusted count.
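That behavior might be sketched as an export rule: an unsampled span still gets exported when it links to a sampled context, carrying zero adjusted count so it does not distort statistics. The `Span`/`Link` shapes and the helper names below are assumptions, not SDK API:

```python
from dataclasses import dataclass, field

@dataclass
class Link:
    trace_id: str
    span_id: str
    sampled: bool  # sampled flag of the linked context

@dataclass
class Span:
    name: str
    sampled: bool
    links: list = field(default_factory=list)

def should_export(span: Span) -> bool:
    # Export sampled spans as usual; additionally export an unsampled span
    # when it links to a sampled context, so the backend can complete the
    # linked trace.
    return span.sampled or any(link.sampled for link in span.links)

def adjusted_count(span: Span, sampling_probability: float) -> float:
    # A span sampled with probability p represents 1/p spans; an
    # exported-but-unsampled span must not contribute to the estimates,
    # so it carries zero adjusted count.
    return 1.0 / sampling_probability if span.sampled else 0.0
```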
C: On the difference between number two and number three: I understand that we do not know when the child span is created; the parent is not notified about that. So how is it going to work with this schema?
A: Right. The idea — this is sort of what I was discussing last time — is that the way we record a link today is essentially one-sided; we're only going to record half of a link. I'm supposing — and this is where we run into places where we just sort of don't have a model for what a span link is and what we are trying to achieve; one example —
A: I'm supposing that the true information of a trace is bi-directional. When you create a span — and until we talked about sampling, it was fine to just record all the things — you've got the child, which has a link, so you can see that there's a link to something else. If you have all these spans, you can reconstruct the global trace linkage state, right? But when you start sampling, there's now an opportunity to have the span —
A: — the link — go completely missing, because you didn't sample the child. Which is why we need a way to record it. I call it the child, but maybe we need a different term for the "from" and the "to" — there's a directionality associated with it. It's just not clear from the semantics what the directionality is.
A: As I create a span, I create a link to something else, so there's a direction created by the act of creating the link. And so that forces us — in order to complete that bi-directional graph, when the thing that the link points to is sampled, you just have to record yourself, which means finding a way to be exported without being sampled yourself.
A: That means the backend is going to receive a span that has an unsampled trace ID, an unsampled span ID, and then a link pointing to a sampled span. And then a backend that fully supports links can find that trace that you're linked to — since it was sampled — assemble that trace, and show it to the user.
A: Even though there's a different trace context in that link — the span that links to your span — you can see the whole story. And then, however we get to the bottom of this second issue, someone that I think is a sensible fellow here in OpenTelemetry, this guy Kendra Hayworth, says: yeah, but that's really complicated.
A: I mean, how are we going to tell the users what they get? And that was sort of where I was left, thinking about it a bit more. What did I want to connect... I mean, does anybody want to offer any free talk on this topic right now? Because I have one more issue to tie this to, but I have to find it; there's a link to it.
B: It feels very — I don't know what the right word is — academic to me, in terms of how practical the solution is. What is somebody actually doing with this structure? How do you implement this, and how do you make it work, especially across services and that sort of thing? I start getting a little nervous. So I'm trying to keep up, but I'm also like —
A: That's totally fair to me. I was about to describe a sampling policy that would capture an intention that I hope a user could understand. We already have, in our kind of draft working group here, a consistent-probability parent-based sampler and a consistent-probability root-based sampler, right?
A: We have — and the issue I was going to go look up, which we can find and discuss, but it's from a year ago or so — says that it's really hard to construct a correct consistent-probability sampler, especially when there's delegation happening, especially the parent-based sampler, because of the interface. And I've written it up in a different issue.
A: Well, essentially, more composable sampler policies, which is what was hard a year ago. So I want to compose. So let's suppose — the place where this generally always comes up is the fan-in and fan-out scenario, where you're doing message queuing or batching or something like that. So I'm going to be the operator in a batching scenario: I'm the span that assembles batches, and when I start my span, I'm going to link to all the things that contributed —
A: — to me. So I'm starting a new root, so I'm going to be a consistent-probability root-based sampler, and that means that I will make my independent decision, based on my trace ID, whether to sample me — and we all get that. So if I'm sampled, I'm going to write out a bunch of links. Great. Now my backend sees that sampled Batcher span plus a bunch of links, but some of those links are going to be broken, and that's right.
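The Batcher scenario could look roughly like this; the threshold comparison follows the general shape of consistent probability sampling (randomness derived from the trace ID), while the hash-based stand-in and the numbers are assumptions for illustration:

```python
import hashlib

def trace_randomness(trace_id: str) -> int:
    # Stand-in for the 56 bits of randomness carried by a trace ID.
    digest = hashlib.sha256(trace_id.encode()).digest()
    return int.from_bytes(digest[:7], "big")

def root_sampled(trace_id: str, probability: float) -> bool:
    # Consistent probability sampling: one threshold applied to the
    # trace's own randomness, so the decision is independent per trace.
    return trace_randomness(trace_id) < probability * (1 << 56)

# The Batcher starts a new root and links to every contributing trace.
# Some of those links will point at unsampled (hence missing) traces.
contributors = ["trace-%d" % i for i in range(8)]
batcher_sampled = root_sampled("batcher-trace", probability=0.5)
broken_links = [t for t in contributors if not root_sampled(t, 0.01)]
```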
A: That's a fact of life. Now, say I'm one of the producers and I'm trying to investigate why my latency is bad. Well, the reason is that the Batcher got stuck, or something like that. Now there are spans that have a submit-batch — we'll call it — which is the leaf in the trace, because the context was severed, essentially, to create the batch, which was then put into a new trace context.
A: Now, if the Batcher operation is not sampled, I end up writing a trace that ends in a dead end — there's nothing there — because I submitted a batch to my Batcher, and the Batcher did some work, but it was unsampled. And so when I collect this user trace, on the surface of it I have literally no way to look at it and say: oh, something happened here. It was a dead end, and it had a linkage that we never recorded. So now —
A: I am interested in completing the sampled context by recording myself, and independently I may be sampled — it was an independent composition. There's essentially an override here that says: we're going to make our consistent-probability root-based sampling decision, and then we're going to decide maybe to record anyway, because a link might arrive, or there's already a link to a sampled context. And the net result — trying to summarize the business value here —
A: — is that they link to a trace that you're going to be assembling — a different sampled trace. So your user is going to come along saying: I'm looking for a trace that had a submit-batch request, and it ends in a span with no children — but it always ends in a span that has a linkage to a batch request. And because of my decision to sample — sorry, not to sample, but to export unsampled, that's this new state, to export unsampled —
A: — my batch-request span, simply because it will be the missing information for my submit-batch trace, which formerly ended in a leaf with no children. So now — if I'm doing the backend support correctly — my submit-batch request has a little star next to it saying "linked to a span", and we know that span, but we don't have a whole trace for that span; that's the end of the line. So I didn't sample the span; I don't have its children; I just know its latency.
A: Its attributes, its operation name. So I know that there was a submit-batch, and I know the latency, and that's the real leaf of my trace. And now you have a situation where the user can see: oh well, I'd like to find one like this, but where my batch was also sampled. And now you have the probability game of saying: I'm looking for a 1-in-100 submit-batch request and a 1-in-100 batch request, so now I've got a one-in-ten-thousand chance of finding a sample.
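The arithmetic behind the one-in-ten-thousand figure, assuming the two 1-in-100 decisions are independent (exact fractions avoid float rounding):

```python
from fractions import Fraction

p_submit_batch = Fraction(1, 100)  # probability the submit-batch span is sampled
p_batch = Fraction(1, 100)         # probability the batch-request span is sampled

# Independent sampling decisions multiply, so a trace where *both* sides
# were sampled turns up one time in ten thousand.
p_both = p_submit_batch * p_batch
```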
B: You know, so I'm running my backend, and now what I am receiving sometimes is not, say, an entire trace — because that was sampled out — but I might get, essentially, a span reference with minimal information about that span, because we know that span might have been a target.
B: — which I wouldn't record, because the span was already sent. I mean, if you imagine a long-duration pipeline: by the time the Batcher finally pulls it out of the pipe, the span that created the record in the pipe may long since have been sent, or decided on, and —
A: So, well, it was already existing as a span, so I haven't changed the fact that you have to make your sampling decision at least when you start the span — or before your first child, that was the asterisk. So for the span that submits the work, it's already been decided whether it's going to be sampled or not. Okay.
A: A span has to be alive — both spans do not have to be alive, but one span is, and that's always where you're going to record the link: the one that is alive. And so in the case where you submit the email, your span either was or was not sampled. Now imagine that the batch times out and the request returns.
A: It still was or was not sampled; independently, the batch request was or was not sampled. There's essentially a new sampler composite that I'm just imagining here that would allow you to override the batch request's sampling decision with "just write this one span", to provide the evidence of a link for the others, if any other is sampled.
A: I say "record", but I mean "export" — record is the in-memory version of export.
D: Is there a risk? I mean, if you're recording a lot of stuff which you actually don't want to sample, you still send a lot of data and have to store a lot of data, so basically it could actually break the sampling goal, right?
D: I mean, you have a reason to sample, right? And if there's then a couple of links which actually make you collect all the data again, then this is maybe not what you want.
D: The alternative approach is to sample at 100%. If you want to have all the links, then you have to sample at 100% on the receiver side, and then you will have all the links, right? And —
A: Maybe. The decision to export a span that is unsampled may be seen, on the one hand, as creating more data, because you are now writing a span where you didn't before, because it was unsampled. But if I treat that span as literally part of the thing that I linked to — well, that thing I linked to was sampled. I mean, I'm just adding one span to something that I already decided to sample.
A: Well, I'm only talking about the one span, okay, because that's — and that's hopefully — I guess the big deal here is what we need, and that's what these two issues that I put up are kind of both talking about: looking for ways to record spans that —
A: — my example: what's the same, what's the difference? The span is a span; I would write it literally.
D: How meaningful is this single span, actually? If you do not have any follow-up spans, you just stick one span on the other side and link it to the trace on the producer side.
A: Yes, and I think that gets back to — I don't know — even a modeling question of what we're hoping for. What I'm hoping was that the email request — the thing that sends the email — doesn't have this missing gap. It's like: yeah, the work passed to another trace there, and because of sampling there's a sentinel here that says: this is where the trace boundary ends. At least we know it ends here, rather than having a gap.
A: Essentially, that is what I was trying to achieve. And the premise of that is that, in this batching, fan-in fan-out scenario, you can get a good trace for either side; it's just that the chances of getting a good trace that combines the entire flow are limited by the nature of probability. But I can always see that the email ended in a batch-request span, and the batch-request span is always independently sampled.
A: Compare the other policy, one that I think Kalyana originally proposed — there are so many policies here, which is what I think is dangerous: users can get really confused. So the other policy might not even be a probability-based algorithm at all: just, if any of my links are sampled, I'm going to sample this span. But that doesn't tell you a probability; it's just a decision to sample. So it was an incomplete solution, and it might be more confusing to users.
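The link-based policy being described is trivially small to sketch; note that it yields a yes/no decision with no probability attached, which is exactly the incompleteness being pointed out:

```python
def link_based_decision(link_sampled_flags) -> bool:
    # Sample this span whenever any of its links points at a sampled
    # context. No probability is attached to the outcome, so the result
    # cannot be weighted in statistics afterwards.
    return any(link_sampled_flags)
```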
A: And it has a more severe performance cost if you're trying to actually sample because you want to limit the data — that other policy, as described, is, I think, worse.
A: Oh, this is useful, guys. I find that probably not very many people care about this topic, and I think the reason it still comes up is that message-passing systems are hard to trace. People want to trace them, but there's no way to get a good, complete trace and sample at the same time — which I think is a big problem — and anyway, this feels like it might offer some direction for that.
A: While it definitely, you know, just exposes the competing interests: if you are going to have separate traces, you are going to have incompleteness; if you're going to use links, you're going to have a problem with sampling. The best we can do, I think, is maybe what I just described. I would personally be happy to see that sort of trace-boundary marker — the span here is a placeholder.
A: It was not a traced span, but we recorded it anyway, because we wanted to show you where your batch went, or where your email went. And then it's a product decision for the observability vendor. You can say: okay, here's your user interface. You got a span; it has an email submitted; it got to a point where there's an unsampled span that says there was a Batcher. Now maybe I can pivot —
A: — to "find me one like this, where there's an actual trace", and that would be, you know, a search of your database for a trace that has the same characteristic you were just looking at, but has one of these Batcher requests that was also sampled. And that'll be a probability game of finding one. I guess that's kind of how I imagine it.
B: So, just for context, because I literally had this conversation with a customer this week, who has some, you know, slow-running batches and was asking: how do I define my trace boundaries, versus what do I do — different ways. And actually the conversation ended up in — I think the way they're going to do it is they're going to create a batch ID that is carried throughout the batch.
A: Yeah, I would call that the same type of problem, and that brings to mind even another kind of avenue for this original issue, how to achieve consistent sampling across linked traces. One — it was kind of Peter's suggestion, I think — is: synthesize your trace ID just right, and you can make sure this thing is sampled. Which is effectively saying, you know, start your trace earlier and don't use more than one trace, I think, right? Which is not really an answer. So —
A: I think I would definitely agree that this discussion arises because we all know of the customer who is stuck on instrumenting Kafka, or on finding a good trace for Kafka, essentially. And, just in case I'm not projecting anymore: in the thread that's been linked, this guy — hi, it's Johannes Tax; hi, Johannes —
A: — this is on GitHub — he's the one who's been advocating forever for span links to be supported after you start a span. I didn't have to talk about it in this motion this time here, but I think we can also solve that problem, and I think that gives the user — in your case, Kent — what they're looking for, which is that you can start a worker and then add the batch link as you get it, saying —
A: — okay, now I know I'm linked to this other thing. And then I create a span link, and then this sampling logic that I've described would trigger — meaning that, if I'm part of a batch that is sampled, or I'm linking to a batch that was sampled, and I've already decided not to sample, now I'm at least going to record myself to let the user know where this ended. It ended in an unsampled place, but it gives you that boundary, I guess.
D: I'm wondering if consistent sampling across the link could be achieved, because we were just talking about collecting one span on the receiver side, right? And basically, for consistent sampling, you're free to choose the sampling rate, with the exception that the decision must not depend on the r-value — because otherwise you would introduce some correlation, right, yeah.
D: But what if you choose the sampling rate based on the fact that the span was linked? I mean, through the linkage I get the information that, on the other side, there was a trace sampled, or a span sampled.
D: But this means that I make my sampling probability — I would say, set the sampling probability to 100% in this case, right, which would be valid. But the problem here is that the sampling probability then depends on the r-value of the other trace, and this could be potentially dangerous. I have to think about that, but —
A: I think you're on to what maybe Kalyana was thinking about when he first posed this question, though, which is to say something like: I have four links — and every time, I think it matters how many links there are — and now I'm looking at my four links, and half of them are sampled and half of them are unsampled, with different adjusted counts, or p-values. Now, what's my p-value? And I couldn't figure out how to go any further than that, but maybe you can.
D: Yeah, I have to think about that, but basically your chosen sampling rate depends on the r-value of another choice, which —
A: I'm thinking about a case where you have ten links, and your goal is to do some sort of consistency which says that the more of my links that were sampled, the more likely I am to be sampled. I think that's the high-level objective of consistent sampling: that we're all more likely to be sampled at once, if this is done right. I just don't quite know how to do that in a case where there are links.
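The stated objective — "the more of my links were sampled, the more likely I am to be sampled" — could be caricatured as follows. This is only an illustration of the objective, not a worked-out consistent-sampling algorithm; making it consistent is precisely the open question in this discussion:

```python
def link_aware_probability(base_p: float, link_sampled_flags) -> float:
    # Naive illustration: interpolate from the base probability toward 1.0
    # as the fraction of sampled links grows.
    if not link_sampled_flags:
        return base_p
    frac = sum(link_sampled_flags) / len(link_sampled_flags)
    return base_p + (1.0 - base_p) * frac
```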
D: What wouldn't be a problem at all is if you sample at 100% if it's linked, somehow, right — independent —
D: — of whether the span is sampled or not on the sender side, right. But if you make your sampling decision also dependent on the sampling decision of the span on the sender side, then you would make your chosen sampling rate dependent on the r-value of the other trace, and this could have some risks, yeah. But otherwise you're actually free to choose the sampling rate dependent on attributes of the span; this wouldn't —
A: — break. Here's another similar construction for you to compare, in the same sense. I might just decide: okay, in the case where I'm going to create a new span, a new root, because of this linkage problem — I have ten links — maybe instead I should just use one of those ten as my parent. And then, if one of those ten was sampled, I can just continue its trace ID, and that would give us kind of the same outcome. Although it distorts something, and I don't want to try to explain what it distorts.
A: It's preferencing one of the links in a way that could be arbitrary, or could be deterministic, and I haven't thought any further about it. I think I need to go parent now — it's been lovely, you guys. I want to think about this some more; I don't have much time to work on it.
A: I'm gonna leave, because my kid's hungry. Okay, I'll see you.
C: I have a thought that I'm not sure is relevant here — I think it is. So, we have been discussing a case of fan-in, like batch processing or sending emails; there is a number of requests that are folded into a single —
C: — step in processing later, possibly in a different process, or something like that. If we go back to what the users really want — which, I still believe, the most important part that the users want is defining the rate of sampling —
C: — so it is pretty natural that the sampling rate for the batch steps will be similar to the sampling rate for the individual requests. But because the number of batch requests is much smaller than the number of individual requests for email, it will come automatically that these batch requests will be sampled with higher probability.
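With assumed, illustrative numbers, the effect being described is that under a fixed span budget per span kind, the much rarer batch spans automatically end up with a far higher sampling probability:

```python
requests_per_sec = 10_000  # individual email requests (assumed)
batches_per_sec = 10       # batch steps, each folding in many requests (assumed)
budget_per_sec = 10        # spans/sec we are willing to keep for each kind

# Rate-limited sampling: probability = budget / arrival rate, capped at 1.
p_request = min(1.0, budget_per_sec / requests_per_sec)
p_batch = min(1.0, budget_per_sec / batches_per_sec)
```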
C: — the whole system, and we are all good here; we will see everything we need. It is, of course, counting on chances. This is not to be mixed up with consistent probability sampling, because the sampling decisions for these batch steps are completely independent from what we did with the requests — but statistically it might come out quite okay. That's my thought.
D: Yeah, I'm also afraid that we're introducing even more complicated sampling mechanisms that will be difficult to implement. I think consistent sampling alone is already complex enough; introducing some extra logic for links and so on — no one will understand, at the end, how to deal with that data.
B: — that any individual user has — they often know what they want, but don't know how to achieve it. And so, you know, to quote the line: all happy families are alike; every unhappy family is unhappy in its own way. I want to sample errors at 100 percent, and I want everything else at one in a thousand — and, you know, that's statistically —
B: — you know: adjust my sample rate so that I get as close to my throughput limit as possible, but not over it. Or: I want a good sampling of all of these keys, so just take my collection of endpoints and adjust the sample rate based on them — the login endpoints get sampled pretty heavily, whereas the processing endpoints get sampled less heavily, because there are fewer of them. You know, so we have a dynamic sampler.
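The throughput-target behavior might be sketched as a simple proportional adjustment; this is a hypothetical sketch, not how any particular vendor's dynamic sampler actually works:

```python
def adjust_probability(current_p: float, observed_spans_per_sec: float,
                       limit_spans_per_sec: float) -> float:
    # Scale the sampling probability so expected throughput approaches the
    # limit without exceeding it; clamp to the valid [0, 1] range.
    if observed_spans_per_sec <= 0:
        return current_p
    scaled = current_p * limit_spans_per_sec / observed_spans_per_sec
    return max(0.0, min(1.0, scaled))
```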
B: So it's kind of like: how do we get there from a usability point of view? I mean, I think that's what we all keep trying to get to: how do we make this comprehensible for people, so that they can do what they need to do to understand their own data? I feel like that needs to be the large-scale question here, more than, you know, whether it is provably statistically correct, or anything else.
D: Yeah, anyway — once you start sampling, you have to make a lot of compromises, and you cannot have everything, yeah, right.