From YouTube: 2022-04-21 meeting
A
Hey Josh, hi. We're here; we are very early in the meeting. Guess I'll wait, or we'll be glad to wait.
A
Maybe we should have put together an agenda. Yes, I know what it would have had: a good agenda has a topic, with Antoine here to talk about probabilistic log sampling. I gave some feedback on the PR just this morning, because I was kind of running late on that.
A
So hopefully we can have a somewhat productive conversation about that, and then, I'll admit, I have to go a little early today.
A
So I can go take a test. Let's see, I suggest we wait. I am taking my water distribution test today, so I'm driving to Santa Rosa by noon, which means I've got to go pretty shortly after the meeting, and I need to do some stuff. My first thought was that it was a COVID test, and I was going to say "I'm sorry," but then, yeah, this is much more exciting. So it's a drive to Santa Rosa at noon and then it's a three-hour test; hopefully it won't take that long.
A
So here we are then. Let's, let's go with what Antoine suggested. We talked about this in the spec meeting and it hadn't gotten a lot of review, so I proposed we talk about it here this week. I read it and I'm prepared to talk about what I wrote; I also wrote some things down so that others not on the call could read them as well.
B
I can introduce it, and I haven't had time to read your comments. So that's, that's fair. Yes, you should; I will, I will say them. Yes, okay. Then, in that case, maybe let's just go through the overall thing. That's what this is, and, you know, I'm happy to answer questions about the reflection behind this.
B
So, you know, in OTel we've had trace ID sampling for a little while. Trace ID sampling is a great idea, and as deployed, I guess, it allows you to choose which traces you're going to sample according to a deterministic algorithm, using both the percentage of how many traces you want to keep and a way to have the same traces picked by every collector, using the hash seed approach. So that worked pretty well, and it would be great to extend it to logs.
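For illustration only, here is a minimal Go sketch of the hash-seed decision described above. The FNV-1a hash and the threshold arithmetic are assumptions for the sketch, not necessarily the collector's exact algorithm; the point is that any two collectors sharing a seed keep the same trace IDs.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"hash/fnv"
)

// sampled reports whether a trace ID falls inside the kept fraction.
// Hashing the ID together with a shared seed makes every collector
// running the same configuration pick the same traces.
func sampled(traceID [16]byte, seed uint32, percent float64) bool {
	h := fnv.New32a()
	var sb [4]byte
	binary.BigEndian.PutUint32(sb[:], seed)
	h.Write(sb[:])
	h.Write(traceID[:])
	// Keep the record when its hash lands in the low percent% of the space.
	threshold := uint32((percent / 100) * float64(^uint32(0)))
	return h.Sum32() <= threshold
}

func main() {
	id := [16]byte{0xde, 0xad, 0xbe, 0xef}
	fmt.Println(sampled(id, 22, 10)) // keep roughly 10% of traces
}
```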
B
So when you do the collection you get all that, and it works. Whereas the spec goes a little further, and that's, I think, where things are getting a bit more mixed up: you would also want, in the absence of a trace ID, which I think is actually something that happens quite often, to have another way to pick a field.
B
B
If
you
pick
that
something
else
in
this
present,
then
we
can
use
it
in
the
trace.
Id
is
not
there
to
continue
the
algorithm
a
little
bit
if
there
is
no
trace
id
and
if
there's
no
such
field,
then
in
that
case
the
sampling
algorithm
does
not
apply
and
we
continue
using
this
log
record.
We
cannot
discard
it
because
we
don't
know
whether
we
should
sample
it
or
not.
We
can't
just
apply
a
random.
B
You
know
sampling
at
this
point
because
that's
too
far
down
the
pipe
right,
so
it
kind
of
kicks
out
the
number
of
considerations
and
that's
what
we
discussed
in
the
spec
meeting
last
thing,
tuesday,
for
example,
we
don't
want
to
take
care
of
the
distribution
of
that
sale
value
right.
This
is
not
our
job,
we're
not
trying
to
be
good
about
it.
We're
not
trying
to
be
smart,
we're
just
saying
hey!
B
Maybe
you
thought
about
this,
and
you
had
your
good
business
about
this,
so
you
already
did
the
work
of
populating
that
field
with
something
that
is
well
distributed
is
unique,
all
that
stuff
right
and
there
are
ways
for
you
to
also
in
the
pipeline
before
you
get
to
this
particular
processor,
to
play
a
little
bit
with
the
log
records
to
add
additional
data
based
on
a
regular
expression
or
the
severity
of
the
record.
You
could
change
the
percentage
level
of
that
sampling.
B
You
could
do
a
bunch
of
different
things,
so
that's
kind
of
where
we
left
it
and
there's
some
contention
right.
This
is
not
like
a
consensus
right
now,
because
there
are
some
folks
who
come
at
this
from
different
perspective
and
have
different.
You
know
expectations
in
terms
of
how
easy
and
useful
it
would
be,
for
example,
to
have
random
sampling
built
into
this
so
say:
there's
no
such
thing.
Well,
in
that
case,
can
we
just
apply
a
percentage
and
brute
force
through
this?
B
My
feedback
to
them
has
been,
and
no
tell
me
if
I'm
wrong,
you're
the
experts,
but
if
there's
no
way
for
me
to
piggyback
to
a
certain
value,
then
I
can't
know
for
a
fact
that
this
particular
log
record
should
be
sampled.
On
that
I
can't
just
apply
randomnesses
right,
I
can't
just
say
20,
because
the
way
we
calculate
right
now,
whether
you
should
sample
it
or
not,
is
by
taking
this
this
hash
of
the
value
and
comparing
it
to
your
buckets
right
so
to
make
sure
that
we're
in
that
range.
A
So there are probably several directions to take that conversation. One is the sort of simpler case where you do have a trace context; let's say that's one direction we can go in. There's some nuance here I'll try to explain, because we respected, as much as we could, the pure OpenTelemetry context, without going outside of our scope and talking about what W3C is doing.
A
In
that
sense,
we
got
less
than
I
what
we
might
call
ideal
from
the
sense
of
you
know
if
we
could
assume
or
knew
that
the
trace
id
had
randomness,
then
we
could
do
this
better
and
because
we
didn't
want
to
assume
that
or
couldn't
assume
that
and
then
current
spec
environment
we
created
a
new
field
called
our
value,
which
is
ideally
a
bridge
to
the
better
world
in
the
future,
where
the
w3c
does
give
us
what
we
want.
A
So
I
I
kind
of
want
to
have
this
conversation
in
two
directions
in
two
ways.
One
is
like
in
the
current
present
reality,
even
what
you're
suggesting
won't
work
right
now,
because
we
need
the
trace
context
and
its
trace
state
in
order
to
do
the
consistent
decision
right
now,
because
we
don't
have
any
truth
about
randomness
from
w3c.
A
We
think
that
that's
going
to
change
pretty
soon
that
was
mentioned
in
the
spec
meeting
on
tuesday
w3c
knows
what
we
want
and
they're
working
on
it.
The
new
flag
says
these
are
random
and
you
know
exactly
which
56
bits
or
something
like
that
from
there
we
can
expect.
So
we
don't
need
the
r
value
and
we
could
even
get
more.
We
could
move
beyond
powers
of
2.
If
we
wanted
to.
I
think,
because
we
know
the
order
of
those
56
bits
we
can
be,
we
can
get
between
the
powers
of
two.
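A sketch of what that would enable, assuming W3C marks 56 bits of the trace ID as random (the exact bit positions here are an assumption for illustration): treat those bits as a uniform integer and compare against an arbitrary threshold, which is how sampling could move beyond powers of two.

```go
// keep samples at an arbitrary probability using the 56 bits of the
// trace ID assumed to be random (here, the low 7 bytes).
func keep(traceID [16]byte, probability float64) bool {
	var r uint64
	for _, b := range traceID[9:] {
		r = r<<8 | uint64(b)
	}
	threshold := uint64(probability * float64(uint64(1)<<56))
	return r < threshold
}
```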
A
In
other
words,
I
think,
but
that's
like
I'm
not
sure
we
want
that
either.
So
that's
sort
of
a
tangent,
but
right
now
to
do
to
advance
to
take
advantage
of
what
we've
done
already
right
now
you
would
have
to
have
the
trace
state
in
the
log
to
to
not
because
you're
going
to
want
to
know
if
it
was
sampled
and
the
log
record
has
the
trace
flags,
but
it
doesn't
have
the
trace
state.
A
So
you
know
it
was
sampled,
but
you
won't
be
able
to
count
it
the
way
we
wanted
to
do
for
probabilistic,
counting
of
spans
and
traces.
So
if
you
tried
to
do
this
now,
you'd
get
the
consistent
logs,
but
you'd
have
to
go,
find
a
trace
and
look.
It's
look
at
it.
A
You
would
just
know
it's
sampled
flag,
so
there's
maybe
a
missing
field
to
do
to
sort
of
embrace
what
we've
done
now
but,
as
I
say,
maybe
w3c
moves
us
forward
and
we
can
skip
that
r
value
and
we
can
use
the
randomness
and
the
trace
id
eventually,
assuming
it's
got
that
flag,
saying
randomness
is
there
so
that's
sort
of
like
right
now
we
need
trace
day
in
in
the
future.
A
Probably
we
we
we
could
avoid
trace
date,
but
if
you're
avoiding
trace
state
in
the
log
record,
you
still
don't
have
a
way
to
count
those
blogs
we
in
the
specs
work.
We
did.
We,
the
the
the
output
of
the
probability
sampling,
is
an
adjusted
count,
information
which
we
call
the
p-value.
So
that
also
goes
in
trace
state
and
in
order
to
count
your
log,
you
would
have
to
have
the
p-value
and
again
that's
something:
that's
in
trace
state,
so
there's
nowhere
to
put
it
in
the
log
record.
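As a worked example of the counting being described, sketched under the assumption that the p-value encodes a power-of-two sampling probability 2^-p (and that p has already been parsed out of the trace state): each surviving record then stands in for 2^p originals.

```go
// adjustedCount returns how many original records one surviving
// record represents under p-value (power-of-two) sampling:
// p=0 -> 1 (kept everything), p=1 -> 2 (kept half), p=3 -> 8, ...
func adjustedCount(p uint8) uint64 {
	return uint64(1) << p
}
```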
A
Today you could go find a span and then look at its adjusted count, and that should have the same adjusted count. But this suggests to me: either we should start putting trace state in logs, or we should maybe move past that and think about...
A
Well, I at some point had proposed that the span record could just have a field, you know, double-valued, optional, which is its adjusted count, and it could be two or four or six or seven or whatever. But we never added that field; that was a little bit more than we wanted, just because the trace state is recorded for every span, so we could ignore that problem. You know, you've got a span, you look at its trace state, you find its adjusted count.
A
You also find its r-value. So we're sort of missing both of those if you don't have this trace state; there's something missing here. So that's one whole topic of feedback, I guess, and I won't pause there, because the other topics are kind of different and I don't want to get too confused. Yeah, and...
B
First, to the code: the code itself is just bucketing according to the percentage, applying an AND filter against the hash of a trace ID, and, based on the number of leading zeros of that hash and that AND, you know whether you're hitting 10%, 20%, 30% of a population. All the time, right; even given a very large population, you would eventually, asymptotically, reach that percentage point. It may not be that in the first five minutes you get to 20%.
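That leading-zeros test can be sketched in a couple of lines. The hashing step is elided here, and the helper name is invented for illustration; any uniformly distributed 64-bit value works as input.

```go
import "math/bits"

// keepPowerOfTwo keeps a record with probability 2^-r by requiring at
// least r leading zeros in a uniformly distributed 64-bit hash, which
// matches the r-value interpretation discussed above.
func keepPowerOfTwo(hashed uint64, r int) bool {
	return bits.LeadingZeros64(hashed) >= r
}
```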
B
I can reuse your logic; I can just take the trace ID and apply this. But what you're telling me is that there's actually more work around traces, and that work may invalidate, to some extent, the work that is currently happening; it's slightly different from the current logic that's in the collector. Am I right?
C
So you mentioned you looked, actually, I think, at the trace-ID-ratio-based sampler, the old one, from before we started the specification of the consistent sampling approach. So I'm not sure if this is the latest source code, what do you say?
A
But this process you described is sort of still logically present in a lot of these. You know, the r-value that I described is equal to the number of zeros that you see in that equation you're looking at, for example.
A
If we specified a standard hashing approach, then you could perhaps not use the r-value; you could potentially not have a requirement for randomness, or true randomness. But everyone would have to do that, and we were concerned that we don't want SDKs to have to implement a standard hashing algorithm; it's like, good luck with that. Also, the statistical properties can only get worse from that, compared with randomness, at that point. So, yeah.
A
So
I
think,
there's
a
question
here,
which
is
the
maybe
the
reason
why
tracing
is
so
sensitive
to
sampling
is
that
we
need
to
do
it
up
front
in
the
in
the
sdk
to
make
that
decision.
Before
we
propagate
that
decision
and
and
then,
let's
see
the
scenario
you're
looking
at
is
like
the
logs,
the
collector
is
collecting
these
logs
and
it's
different.
It
can
have.
It
can
have
a
single
piece
of
code
instead
of
having
a
standardized
hash
algorithm.
Just
just
do
it.
A
Well, I guess we can. We can definitely make sampling for logs match sampling for traces.
A
Well, all of OpenTelemetry had that original set of samplers built in, which includes trace-ID-ratio-based, that's right, okay. And every one of them, I think, does something approximately the same, but not guaranteed to be the same. That's why there's a warning in the spec that we were trying to fix for a long time; the warning says there's no stable hashing function that's been specified, so you should do this at the root and then honor your decision. It's not truly consistent, right? So we were dealing with that by introducing the way to have true randomness in the trace state, and so on, right.
A
That is the implication to me. And currently I see a span ID, trace ID, and flags, and there's one more field, part of context, that, without that, we... yeah.
A
I don't know what else is going to go into trace state eventually, and whether that's going to be good or bad for logs. But it's meant to be part of the span, and if Atmar's statement, which I believe, is that logs are, just like spans, part of a trace, they just don't have the structure of a span, then we can sample them...
A
...as long as we have a context. And it's worth... okay, here's an interesting sort of point, I think. When you're making that sampling decision in-band, as a tracer, you've got the context, the whole context. If you're making that decision as a logging SDK, you probably also have the whole context: you're not making that decision based on a log record, you're making that decision in context, as a piece of code handling a log event.
A
You may have the trace context at that point, and you can do the sampling at that point. The problem is we don't have a place to put the count. If you're interested in counting, then you still need that count information, but you just said you weren't, actually, and maybe so. So maybe step one is to say: we know how to do consistent sampling of logs the same way we do for spans in context. We have the trace state; we can just make the decision the way we wanted to do with spans. And if nobody wants that, then nobody wants that. I do know of scenarios from my past job where people wanted that; like, you know, you're sampling logs and you want to have a frequency, then you'd like to have that count.
B
Okay, damn it. Well, maybe I'll reset the discussion a bit here, because, on this tracing-and-logs discussion, I think what needs to happen in that case is we need to have some level of maturity from you trace folks, to your point, where you're like, everything's working and you're happy, and then we can just port that over to logs, and there isn't going to be a long discussion about what logs should do.
B
Do
I
mean
you,
you
would
be
able
to
just
say:
look
log
is
just
like
a
spam,
just
like
omar
said,
and
we
should
apply
the
exact
same
logic
and
we're
good
to
go.
Here
is
my
requirement
when
I
started
all
this.
We
have
a
a
user
who
is
defending
terabytes
of
data
to
a
collector
every
day,
right,
so
just
a
huge
fire
hose.
They
want
to
have
a
pipeline,
by
which,
let's
say
0.1
percent
goes
to
a
destination
for
immediate
analysis
and
the
rest
of
it
goes
to
another
place
right.
B
So
it's
no
longer
tied
into
traces
and,
from
my
point
of
view,
in
terms
of
sampling,
it's
just
a.
I
want
to
apply
a
percentage
point
and
say
I
want
to
have
the
easy.
You
know
almost
no
cost
solution.
B
Where
I
can
say
I
want
to
have
a
portion
of
my
traffic
go
one
way
and
I
don't
want
to
have
to
think
too
much
about
about
it.
It
needs
to
be
very
expensive,
very
cheap
to
compute
that
that
the
decision
that
decision
cannot
be
completed
before
because
they
don't
have
any
control
over
the
ingest
itself
until
it
hits
the
collector
the
collector
could
be
deployed
in
an
aha
setup
in
are
parallel
sequence.
Right
to
just
you
know,
withstand
the
pressure
and
instead
just
have
a,
I
must
say,
a
uniform
decision
right.
B
You
should
not
be
overly
surprised
by
how
it
comes
to
decision,
but
what's
simple
and
then
there
were
some
additional
requirements
such
as
well,
you
know
based
on
the
severity
of
the
logs.
Maybe
you
would
want
to
apply
different
level
of
sampling.
Now
we
take
that
into
consideration.
B
Remove
the
idea
of
a
trace
right,
the
when
I
started
those
this
works
right.
Okay,
that's
what
can
I
do
with
this
and
then
and
then
along
came
the
idea?
Well,
you
know,
before
you
do
something
specific
to
logs
looks
like
the
trace
folks
have
their
stuff
in
order
just
piggyback
on
their
logic
right
this
their
stuff.
B
So
I
use
the
code
for
traces
that
is
using
this
ratio,
based
on
on
the
hash
of
the
of
the
trace
id
against
the
buckets
of
the
that
are
completed
from
the
percentage
points
right,
and
I
applied
that
wholesale
to
logs.
So
now
I
know
for
a
fact
that
if
I
have
a
value
on
my
log
record
that
I
can
use
against
this,
this
set
of
buckets,
I
can
get
to
a
good
percentage
point
and
I
can
really
just
sample
what
I
need.
B
The
next
step
was
to
say:
well,
maybe
there's
no
race
id.
Maybe
the
customer
is
a
little
smarter
than
mimosan
has
the
ability
to
set
up
a
an
id
on
the
record
or
to
point
at
an
idea
on
the
record,
and
that
could
be
a
portion
of
the
adjacent
document
in
the
body
or
it
could
be
anything
right,
and
in
that
case,
what
you
want
to
be
able
to
do
is
we
want
to
manipulate
the
body
of
the
log
record
to
isolate
this
id
store
it
into
an
attribute
value?
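The fallback rule being laid out here might look like the following sketch. LogRecord and its fields are simplified stand-ins for illustration, not the collector's real pdata types.

```go
// LogRecord is a simplified stand-in for a collector log record.
type LogRecord struct {
	TraceID    [16]byte
	Attributes map[string]string
}

// samplingValue picks the bytes to hash: the trace ID when present,
// otherwise a configured attribute; with neither, there is no basis
// for a decision and the record passes through unsampled.
func samplingValue(rec LogRecord, attrKey string) ([]byte, bool) {
	if rec.TraceID != ([16]byte{}) {
		return rec.TraceID[:], true
	}
	if v, ok := rec.Attributes[attrKey]; ok {
		return []byte(v), true
	}
	return nil, false // keep the record; we can't know its bucket
}
```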
A
And yeah, and there is, for example, a collector processor right now called the probabilistic trace, or probabilistic span, processor, I think, which sounds almost exactly like what you're doing. Of course it's using the IDs that we have, the fixed IDs, but it is applying a hash function that's not specified; it's just...
B
Yeah, I took that code and I changed it ever so slightly, so now it applies to logs. And I also changed it so it's not just taking the trace ID: you can override that to use another attribute of your choosing. And I also made it so you can use another attribute on the record to say the percentage of sampling for that record is going to be actually higher, because we care about visibility, for example.
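A hypothetical configuration shape for a processor like that; every field name below is invented for illustration, not the actual processor's options.

```go
// Config sketches the knobs just described: a default rate, a shared
// seed, a fallback attribute to hash when there is no trace ID, and a
// per-record attribute that can override the rate (e.g. for errors).
type Config struct {
	SamplingPercentage float64
	HashSeed           uint32
	AttributeSource    string
	RateAttribute      string
}
```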
A
So I still see three topics here, and I want to try and help. The collector processor that you're describing sounds good, and it's not something that I think we need to specify in the same way that we did for trace, because there we're trying to do something in every library that's the same. If you're in a post-processing phase, you only need to modify one piece of code; it's your standard, it's your thing, you don't need to standardize it, right? And I think, okay...
A
From
that
perspective,
what
hotel
needs
is
a
collector
processor
that
does
what
you've
done
so
so
when
it's
not
based
on
trace
and
it
can
be
based
on
another
input
and
you
can
attach
it,
you
can
say
it's
random,
you
can
get
a
probability.
You
can
even
record
the
probability
which
is
sort
of
a
different
topic.
A
Know
how
to
do
it,
but
we've
talked
about
how
to
add
the
trace
state
to
the
log
record
so
that
you
could
consistently,
after
the
fact,
meaning
in
a
post,
pipeline
post
processing.
You
could
do
that.
There's
still
one
topic
here
that
we
haven't
really
mentioned,
and
it's
the
other
one,
which
is
this
idea
of
controlling
the
filters
that
was
given
on
by
prachemic
in
the
the
pr
as
well,
and
I
I've
I
commented
on
that
too.
A
Here,
where
we
have,
I
would
say,
almost
exactly
the
same
scenario
and
it's
not
a
very
solved
problem
so
in
trace.
You
see,
people
come
to
sampling
and
they
see
two
things
they
still
do.
One
is,
I
have
a
way
to
control
filtering
by
choosing
which
sampler
applies.
I
can
nest
them.
I
can
embed
them.
I
can
list
put
a
list
together
like
it's,
basically
a
rule
engine,
so
you
see
a
span,
decide
which
sampler
and
then
do
it.
A
The
way
we've
kind
of
framed
the
problem
in
tracing
is
that
there
are
that
there
are
two
separate
problems
in
the
end,
you
will
compose
a
sampler
out
of
some
filtering
logic
and
some
selection
and
some
rule
engine
stuff,
which
is
what
jager
remote
config
configurable
sampling,
is
basically
and
then,
instead
of
having
the
root
leaf
case
of
your
sampler,
be
one
of
those
legacies
like
trace
id
ratio,
which
is
what
you
see.
You
know
the
hash
function,
that's
not
very
standardized.
A
Instead
of
that
that
you
would
be
able
to
drop
in
one
of
our
new
new
fancy,
counting
probabilistic
samplers.
That
does.
A
...the r-value stuff. So that's our goal: you will swap in the root samplers that know about probability for the old ones that didn't, and then continue doing your configurable sampling through something like Jaeger's, or whatever OTel does eventually; I mean, there are people who want to do more, okay. And so you'd do that as well for logging. But again, if it's just a standardized collector processor, it's less important to standardize it. But if it's something you were going to say every logging SDK should do...
A
No,
that's
the
that's
the
point
where
it
just
becomes
a
major
specification
thing,
you're
going
to
have
to
say,
there's
a
log
sampler
and
it
receives
all
the
things
you
know
at
the
moment.
The
law,
maybe
it
receives
the
whole
log
because,
unlike
spans,
it's
not
done
at
that
point.
It's
much
simpler
than
spans.
You
make
your
decision,
you
return
well,
no,
but
your
decision
is
going
to
be
based
on
more
than
probability,
it's
going
to
be
like
looking
at
the
attribute.
Okay
for
that
attribute.
B
Yeah, yeah. So I completely understand. By the way, I think what I really would want to do is to wait for traces to have it, like, 100% cover all use cases, before logs go and wade into that. I mean, my interpretation right at this time is that logs have less in terms of investment and time and people into it, so I don't want to waste the effort and have a bunch of folks work on this where they could just piggyback on traces as much as possible and get smart about that.
B
I'll.
Give
you
an
idea
of
what
I
can
do
right,
there's
an
attributes
processor
in
the
collector
as
well.
I
know,
if
you're
familiar
with
it,
but
pretty
much,
I
could
apply
a
condition
to
it
and
say
hey
if
I
see
the
severity
of
error
change
the
field
for
the
attribute
that
is
going
to
control
the
sampling
percentage
to
100
75
50-
and
this
is
how
I'm
going
to
do
it-
is
I'm
going
to
create
a
pipeline.
In
my
collector,
it's
going
to
manipulate
all
those
records
coming
through
to
change.
B
According
to
that
logic,
and
I'm
going
to
encode
that
logic
in
the
in
the
pipeline
itself,
not
in
a
single
processor
and
then
the
final
processor
is
just
applying
the
decision.
It's
like,
oh
well,
I'm
looking
at
the
record
you
modified
and
you
change
this
priority.
You
change
its
source.
You
change
a
few
things
here.
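As a sketch of that pipeline idea: an earlier processor conditionally rewrites the attribute that the final sampling processor reads. The severity strings and the attribute key below are assumptions for illustration.

```go
// tagSamplingRate encodes the routing logic in the pipeline rather
// than in the sampler: errors are always kept, warnings kept at 50%,
// everything else at 10%. The sampler later just reads the attribute.
func tagSamplingRate(attrs map[string]string, severity string) {
	switch severity {
	case "ERROR":
		attrs["sampling.percentage"] = "100"
	case "WARN":
		attrs["sampling.percentage"] = "50"
	default:
		attrs["sampling.percentage"] = "10"
	}
}
```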
D
Yes, so sorry for being late here for the meeting, but I just wanted to comment on what Joshua has said about the SDK, and not requiring, let's say, the specification to cover this if we do it just as a processor in the OpenTelemetry Collector. I think that this is not necessarily true, and the reason is that in the log specification we work on, let's say, the ways to emit logs from instrumentation as well, and we have some capabilities already.
D
I
think
that
the
java
and
python
sdks
already
has
this
capability
to
to
emit
logs
via
otlp,
and
I
think
that
the
problem
we
are
discussing
here
like
or
like
one
of
the
problems,
which
is
a
requirement
for
a
consistent
sampling
between
traces
and
and
logs,
might
be
relevant
here,
because
if
we
have
a
sample
decision
on
a
trace
and
we
have
logs
that
have
some
trace
id
and
span
id
attached
to
them,
we
should
be
able
to
to
use
that
sampling
decision
for
for
the
logs
in
sdk
as
well.
D
So
I
think
that
for
for
now
really
like
what
we
want
to
implement.
First
is
the
probabilistic
sampler
for
locks
in
the
collector,
and
maybe
if
we
would,
we
don't
even
need
to
cover
that
in
the
specification.
But
sooner
or
later
this
problem
will
hit
us
in
the
as
the
case
as
well.
B
And, by the way, I think there's a ton of confusion that came from the fact that the current thing, you know, the trace probabilistic sampler in the collector, is not specified. I just had the broad assumption that the work that you folks were doing was already kind of either specified or part of the existing specification.
B
I
would,
I
would
probably
want
to
just
cover
ourselves
and
make
sure
we
have
documentation
that
goes
back
to
that
probabilistic
sampler,
even
just
for
ourselves,
so
we
can
refer
to
it
in
the
in
the
trace
and
and
log
sampler
when
we
get
more
evolved
with
the
p-value
and
our
value
approach,
so
that
you
can
always
go
back
and
say
well,
here's
the
old
way.
B
You
know
it's
kind
of
not
great,
but
that's
currently
what
was
being
corrected
until
up
to
this
date-
and
this
is
a
new
specification-
and
here
is
here-
is
the
tenants
and
the
balance
about
it.
So
you
can
learn
more
and
you
can
kind
of
make
your
own
decision
about
the
best
way
to
go
about
this.
So
I
think
maybe
what
I
would
want
to
do
is
to
change
ever
so
slightly
the
current
spec
to
document
the
current
gimbal.
B
Is it? I mean, I was just going to put a bunch of English words in, like: hey, we apply a percentage point toward creating a bucket of, you know, bytes that we're going to test against the hash of the value, using such-and-such a hash algorithm, and we count how many buckets you have in common.
B
If
you're
above
the
certain
threshold,
then
you
made
it
the
percentage
if
you're
below
you're
being
sampled
out
here
is
why
it's
a
good
idea-
and
you
just
like
point
to
literature
about
you-
know
all
those
those
things,
because
this
is
not
first
time.
I
think
we
we
see
this
right.
B
This
is
not
we're
not
of
all
computer
science
right,
it's
just
right
and
just
just
put
it
there
and
maybe
it
could
be
a
documentation
more
on
the
center
itself,
but
I
just
want
to
make
sure
we
we
can
also
say
hey
the
specification
talks
about
a
broader
initiative
that
takes
place
across
all
components
over
there.
So
because
I
I
got
confused
like
this
yeah,
I
mean
it's
just
for
me.
A
Fair,
the
it
sounds
like
more
kind
of
a
current
state
of
the
world
summary
would
have
been
helpful.
It
also
sounds
like
the
path
for
you
is
to
spec
spec
out
and
implement
the
collector
processor
that
you
want
for
the
case
where
there's
no
trace
id
right
now,
because
we're
this
group
here,
the
rest
of
us-
are
working
on
making
it
so
that
that
there
are
standards
built
into
all
the
tracers.
A
That
can
do
this
sort
of
consistent
thing
that
we've
talked
about
also
mentioned
how
we're
hopeful,
hoping
w3c
will
improve
for
us
and
that's
coming
so
the
the
this
is
still
shaking
out.
In
other
words,
if
you
were
to
go
ahead
with
the
the
collector
processor,
that
does
exactly
what
you
want,
but
I
see
two
things
will
come
later.
A
One
is
we'll
catch
up
with
all
the
trace
stuff
and
we'll
get
that
working
and
at
some
point,
if
you
want
to
filter
logs
by
trace
id,
you
probably
would
just
do
that
in
the
sdk
and
you
and
and
you
don't
need
the
sdk
knows
whether
the
trace
is
sampled
or
not
the
moment.
It
sees
it
if
you're
trying
to
do.
A
...the trace aspect of this, you're still going to have to carve out, or specify, a filtering language, essentially. And what I would imagine, looking forward, is that I think OTel wants to have logging SDKs of its own; at that point, yeah, someone's going to say: I want a log sampler for my logging SDK.
A
At
that
point
someone's
going
to
say,
I
want
to
filter
my
logs
I'd
like
a
standard
for
building
my
logs
and
it
will.
It
will
start
to
look
like
the
same
problem
we're
having
here
at
trace,
which
is
your
remote
sampling
as
a
standard.
It's
not
quite
specified,
but
it
is
basically
specified.
We
just
want
that,
and
yet
we
haven't
figured
out
quite
how
to
integrate
that
with
probability
yet
so
because
it
does
more
than
that,
and
so
so
then
you'll
have
this
hard
problem
that
we
currently
have
in
trace,
so
yeah
yep.
B
No, thank you so much, and I apologize: I missed out on a bit of the motivation of that, and I think now I understand better where you're coming from, and you have a much bigger, more complex picture on your hands.
A
All right. We definitely talked in this group, a month or a month and a half ago, about how we could potentially just keep using the spec that we wrote to modify that sampler. For example, if that sampler is using powers of two, could we use the trace state mechanism that we have to convey the counting, the adjusted counts, from that processor? And technically, maybe the answer is "maybe," but it's a little unsightly, and there were some true reservations in this group.
A
So we talked about other ways, because I'm just trying to help the users at my company who are actually using that processor, and they would like, you know, if they configure 50% sampling at that processor, we'd like to count two spans per one.
A
We don't have a way to do that yet, and the sort of easy thing I was proposing was rejected, which would be just to continue using the p-value even though you're not doing consistent sampling. So that's a little disappointing, you're right, and we have also talked in this group about how to fix that.
A
And then here's the last piece of advice I have: you're right, and that's a good processor to leverage. It sounds like you could use the same logic, such that today, before we solve the problem with trace that we've discussed, if you configure a logs probabilistic sampler with the trace ID field, you get sampling that's consistent with the trace-ID probabilistic sampler for spans and traces, so that you're using one piece of logic to sample both logs and traces.
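That consistency property is easy to state as a sketch, reusing the illustrative sampled() helper from earlier in this transcript; the shared seed and rate are assumptions.

```go
const sharedSeed = 22 // illustrative; must match on both paths

// Spans and logs run the identical hash decision, so every kept log
// record belongs to a kept trace, by construction.
func keepSpan(traceID [16]byte) bool { return sampled(traceID, sharedSeed, 10) }
func keepLog(traceID [16]byte) bool  { return sampled(traceID, sharedSeed, 10) }
```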
B
Yeah, and at least it's consistent between the two, so you get the exact same traces and the logs that match. And this is the thing, the feedback that you and your colleague here... and your name, I'm sorry, is Chris Monk?
B
It's not? Okay, I apologize, it's early for me, so sorry. Yeah, go ahead. Yeah, so, yeah, I think this is the point you're trying to raise all along: that we should at least get that parity in terms of trace-ID sampling, so we can get by and talk about that as a thing that works, and then we get better over time, right?
D
Yeah, I think that it's a process, right? Like, I think that we have some requirements that we want to achieve with this OpenTelemetry Collector processor, and I think what's important is for us to have an agreement that we want to have this capability eventually: the specification describing how logs should be sampled, how this should be consistent with tracing, etc., and to eventually get there. I think that it's natural to evolve iteratively, and we can start with this probabilistic sampler work and then, over the course of the next couple of months, we can work on the specification part.
A
Sounds good to me. Yep, okay, we're good. Enjoy your trip to Santa Rosa. Thank you.