From YouTube: 2021-10-14 meeting
B: Hi, good morning. I apologize for missing the meeting, maybe twice now; sorry about that, it's nice and early. Today I put up a couple of items that I hoped we could talk about, things that have come up in discussions over the last couple of weeks related to the OTEPs for probability sampling, and anyone else can put up an item if they'd like.
B: I will admit I wrote this first bullet, "configurable / remote sampling configuration: anyone working on this?" I know a few people who have been interested and have come to this meeting, but it's not something that my employer has assigned me to get done, and it's something where I was looking to the X-Ray and the Jaeger communities to take some leadership.
B: I just wanted to state that, because some people come asking: hey, where can I write a sampler, or how can I get a sampler that tests some attributes and does its thing? I think that is orthogonal to what we've been doing with probability. So I just want to say that, and offer that I'm interested, but I hope someone else is going to take the lead on it.
B: And now I would like to move on to the two points that I put up myself, mostly hoping to get Otmar's opinions on some of these things. Hmm, I don't think I can share anymore. Okay, I don't know how to share now that I'm in the web-browser Zoom; it hasn't been working, so I'm not going to share, I'm just going to describe what I see.
B: Well, I put two bullets here. One is an implementation plan, and I want to describe that to everyone, because it is a minimum implementation plan; but there is a prerequisite before we could get to even the minimum implementation plan.
B: It requires us to get some specification written for OpenTelemetry's trace state entry, which is how we use the W3C header, and that brought up this conversation about multi-tenancy. It made me realize that some of the issues being discussed about multi-tenancy had also come up in sampling. So, Otmar, I'm thinking of your concern about when we have partial traces because of probability sampling: we end up skipping some number of spans, and the thread that I linked to is discussing the trace state spec, but it's also being driven by probability sampling.
B: So the discussion there turned to multi-tenancy, and as far as I can tell Dynatrace is the only vendor who has a foot in that door. It's really hard for me to work through, because to me it feels like something that people are talking about that I don't understand; but I think if I think about it long enough, it ends up looking a lot like this partial-trace, partial-sampling situation, where you skip spans because they don't belong to your trace, they belong to another vendor's part.
B: You know, within a single trace. So I wanted to see if that's been thought through or discussed at Dynatrace more than it has elsewhere, because I don't fully understand what we're going for. But it sounds like, if we got what we wanted, maybe then it would be possible to have multiple samplers in effect for multiple vendors in the same trace.
B: As far as I can see, it does start to look like we should talk about multi-tenancy, or at least say what it means, before we go much further. That's all I have: an observation that they look the same to me, and I wonder if we should finish both of these problems at once, because Christian Neumüller is saying in this thread, hey, we never did multi-tenancy and we've got to keep working on that; and anyway, it doesn't look like it's fully defined in OpenTelemetry.
A: Yeah, actually, I just talked with Christian Neumüller, because I didn't know that we were doing that; but actually it's a related problem. If you're collecting spans, or sending spans of the same trace to different consumers, then both of them get only a partial trace, right, and so it's like...
A: If you're not sampling some spans, it's the same problem: you also get just portions of the trace. And what is already solved within the trace is that we can still link child spans with parent spans if there are some missing spans in between, because they were reported to some other consumer.
A: So this is what I understood, and so it's kind of related. But of course it would introduce a lot of overhead, because you have to put consumer-specific or tenant-specific parent span IDs on the trace state, and of course this does not scale. If you have more than two consumers, somewhere you will end up with very long trace states, and so it will be truncated anyway, somehow.
B: Yeah, this raises a few questions for me. I don't know what customers actually want when they cross these boundaries that we've talked about in that other thread. But I also feel like there's a connection with, well...
B: There's some work going on in the OTEPs repository, particularly from Microsoft, about suppressing spans, and suppressing spans in relation to knowing which other instrumentation libraries are installed. It reminded me that sometimes there is more than one thing being traced at the same time, in the same program. It may all be going to the same vendor, but these are sort of different flows of control at different layers in the software hierarchy, so you might have spans getting connected.
B: But it continues a trace for someone else, and to me that arrangement is not very difficult. I have a bunch of spans inside my application and I'm doing a database connection, and it's not connected with the request processing I'm doing, because the database connection is persistent. So I'm going to have one trace that explains the lifetime of my database, and then I have one trace...
B: ...that explains the lifetime of my request, and they might be related at some point, in a sense of multi-tenancy; at least that's one way we could model it. Anyway, my point here is that I don't think we've modeled this in OpenTelemetry at all, and I'm not sure we should be slowing down PRs of ours because of it. But if we are, we might want to talk about it.
B: Yeah, I agree. So, just to cut that short, since everyone else here basically understands what we're talking about: what I think we should do is leave it unspecified, and if someone is doing what Dynatrace seems to be doing, they will have to be clever about it. If it ends up duplicating the r-value, I think that's the worst outcome we get here, and that's not so bad. I think any further discussion about this particular corner of esoteric tracing behavior should probably be deferred.
D: Yeah, I think this is sort of one of the issues I raised quite a while ago: the fact that we have these globals, and we hook a lot of the helpers off the global functions. I think that's where a lot of this comes from.
D: Just in general, if you've got a trace and, rather than passing around your own context, you rely on the global context, that causes multi-tenant, multi-component issues all over the place, and the only way around that is to effectively carry your own instance through the system. So, in the case of your database example, effectively the database would have its own instance and manage that.
D: So therefore it's got its own config for sending out to its own collector or its own backend. But yeah, I thought that really...
B: Lightstep has approached this topic basically by letting exporters send everything to the same place. I could imagine a specification that allows us to mix tenants that is firm and well defined. The situation that I'm familiar with, from maybe ancient conversations, is some company running a huge shared Envoy, and customers of theirs are actually coming through this shared Envoy, and every customer gets their own tracing instance, or something like that.
B: I think that's approximately what Dynatrace is doing: at the moment you receive your request, you look at its trace state header, you figure out what customer or tenant it is, and then you give it its own special tracer that has all this stuff pre-configured. That's one way; and then the other way, which I think more of the vendors are doing, is just...
B: Let all the traces get to the back end, and then send them where they're going from an exporter. There are variations in between, but the sampler is probably the thing that connects these; that's why we're talking about it. Anyway, I would like to leave multi-tenancy undefined, or unspecified.
B: The 1852 PR, right, so we can merge. Well, I have said that in the PR as recently as yesterday, but I was just making sure; thanks especially to Otmar for confirming. What we're talking about here is: there's really little talk about multi-tenancy across OpenTelemetry. It's just barely mentioned in W3C, so I don't think we really know what we're dealing with. I think we could move on from that, then; it is a corner case.
B: I've been talking with Daniel Dyla, who is an OpenTelemetry JavaScript maintainer as well as a Dynatrace employee, and he's interested in these samplers as much as we are at Lightstep. It seems to me that everyone who looks at this, who's not too close to the detail, kind of understands when we tell them: look, there's this way we'd like to go that involves touching the W3C, because we believe it's in the best interest of all the users of OpenTelemetry.
B: It was based on Otmar's design, which has these two values in the trace state, p-value and r-value. We can keep talking about parent skipped-counts and multi-tenant parent spans, but so far it's two fields in the trace state. And to get what Lightstep actually wants, I just need to teach my back end about p-value: if a span comes in with p-value not equal to zero, I can count it for more than it is, or less than it is. That's the key.
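As a rough sketch of what "count it for more than it is" means, assuming the power-of-two convention from the probability-sampling OTEPs (a span kept with probability 2^-p carries p and represents 2^p spans, its "adjusted count"; the function names here are invented for illustration):

```python
def adjusted_count(p_value: int) -> int:
    """Number of original spans a kept span represents, assuming the
    power-of-two convention: sampling probability 2**-p => count 2**p."""
    if p_value < 0:
        raise ValueError("p-value must be non-negative")
    return 2 ** p_value

def estimate_total(sampled_p_values):
    """A backend estimating the total span population from what it kept."""
    return sum(adjusted_count(p) for p in sampled_p_values)

# e.g. four spans kept at probability 1/8 (p=3) stand in for 32 spans
```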
B: I really want that, and I'd like it as soon as possible. The problem I'm trying to avoid is that, once you touch an OpenTelemetry SDK spec, things just become hard. They are these beasts of interfering and conflicting demands, and there are so many requirements on the SDKs; they're very scrutinized. And these samplers, and the specs that are currently there for them, are loose in a way that makes them hard; it's just going to be a lot of work.
B: Just to get anything done, is what I'm trying to say. And if we do all this work to get the thing that we don't want, it's just going to drag on, and people are going to be unhappy with us the whole time. It will be a disaster if we try to force the spec; there are enough people who say "I don't want that, keep it off by default", and as soon as we get something off by default, it's not very good for us. So the approach is to walk a line here.
B: I've been starting to work on a PR to write down the OpenTelemetry data model for trace. It's not written down very well, and it's not all in the same place, whereas the metrics and the log signals have this already.
B: So I'm putting in a data model file that says what a span is (we already have a span, but we don't really say what it is), and then my plan would be to follow that with another PR where we just say: this is what trace state is to us, and here's how you can interpret some of the values that we have defined for trace state.
B: In other words, I'm only going to try to specify this in a data model: if you see a trace state, you may interpret it thusly, essentially. The goal is that we can come up with probability sampler implementations that do what we think we should do behaviorally, but they're going to follow this spec that we're not very enthusiastic about. That is to say, these samplers will do exactly what we've talked about: they will set r-value and p-value, and they'll...
B: What I'm trying to say is, Lightstep would like to implement those in a way that we can throw away later, and I think Dynatrace may also be interested. I'm interested in whether we could share this together: essentially, to say that Dynatrace and Lightstep are going to produce a set of samplers that do exactly what we think we should be doing for the trace-ID-ratio sampler, but we're not going to put it in the spec yet, because we don't want to force all the SDKs to do it, because we want to throw it away as soon as the W3C gives us...
B: ...what we want. And the idea of keeping it vendor-specific, or vendor-only, is that the burden of maintenance is far lower. We can delete these one day, while we keep the trace state backward-compatible forever; that says, you know, in the year 2021 OpenTelemetry used the trace state p-value to mean sampling, and we can keep doing that, because it's honestly very simple for my satellite to keep this code that just counts spans.
B: By contrast, it's really hard to maintain the SDK, with all of its intricate spec language and all of its test cases, and to have that done for a trace state solution that we're not very enthusiastic about just makes me sad. So I think that's what I would like to propose.
B: I'm looking for support from Dynatrace. We've got one already: we implemented this in Go, for example, and I think you have done it in Java and JavaScript. Those are, for us, practically the three most important languages. We need Python in there and we'll be real happy; after that it's a pretty long tail, and I'm not sure it matters, unless you have a customer, or we have a customer, who's saying, the root of my trace is this language and I need to do sampling now. Unless we have a root customer...
B: ...it's not too important, because all the other SDKs will just write out their trace state, and then hopefully things just work. So it seems like we could get by with, as I say, data model specs and a few vendor-contributed optional samplers for now. The way we show that it's working is that Lightstep has its own distribution of OpenTelemetry.
B: We can just turn them on by default in our distribution, do what's right for the customers that are signing up with us, and I think Dynatrace can do the same: you just configure your SDK defaults to use the sampler you want. Then we will have evidence, if Lightstep can show: we have customers who have opted into this; it's more expensive than we'd like, but it's less expensive than no sampling.
B: If we can show that, I think we'll have evidence that the W3C should help us. That's all I have; that's the strategy I'm pushing right now. It involves changing the specs to get the definition of trace state in there, but I also plan to write some kind of work plan, and hopefully we can share it with Dynatrace.
B: That plan just describes what we plan to build: a trace-ID-ratio-based sampler replacement that does p-value and r-value. Basically, that's it. Then test it, then release it so that OTel users can use it, then turn it on for your SDKs, and we're done; we wait for, you know, future discussion with the W3C.
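A minimal sketch of what such a replacement sampler might do, under the assumed convention from the sampling OTEPs: draw r geometrically (the count of leading zero bits in a random bit string), and keep the span whenever p is at most r, which yields sampling probability 2^-p. The function names are invented, and a real implementation would derive r from trace randomness rather than a local RNG:

```python
import random

def new_r_value(max_r: int = 62) -> int:
    """Draw r as the number of leading zero bits in a max_r-bit random
    string, so P(r = i) = 2**-(i+1) for i < max_r (assumed convention)."""
    bits = random.getrandbits(max_r)
    return max_r - bits.bit_length()

def should_sample(p_value: int, r_value: int) -> bool:
    """Keep the span when p <= r; since P(p <= r) = 2**-p, this samples
    with probability 2**-p."""
    return p_value <= r_value
```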
B: That's all I have. I hope that strategy works for people; if it doesn't, we talk about it. Personally, I just don't want my name on a proposal of "we tried to make OpenTelemetry implement this in every language". That's going to go badly, pretty sure. So that's what we should do, I think.
B: Well, thank you for nodding; I appreciate the thoughts, Nav and Carlos. I would love to talk about anything else. Peter, I see you here from Cisco: is there anything about sampling we could talk about with you?
E: One thing that I take from my understanding is that, in order to really benefit from this r-value being propagated all the way through the whole trace, the back end, the consumer of those traces, has to be able to process it properly.
E: They are very well prepared, but these are complex things, and there might be some kinks that we will have to work out. So your proposal makes a lot of sense from that perspective, because we will have a chance to tweak those specifications a little bit if necessary.
B: Yeah, right. We've seen reasons why, with multi-tenancy, you could want more than one p. Also, there's a pretty good proposal, which I think the W3C should take pretty quickly, to get the r-value into the trace ID, because that's totally possible and it aligns with some other interests. But I also take your comment.
B: Well, I heard a similar comment from an engineer at Splunk who works on the Java SDK. He said to us: this is really complicated, very few people can understand it, and it's going to be difficult to get us to implement it. That's one reason I'm saying let's not try to get everyone to implement it. But what he asked for was, in the Java world...
B: ...what's called a technology compatibility kit, I guess: essentially a framework for testing that you've done it correctly. I'm thinking about this on two counts for OpenTelemetry right now. One is the sampling stuff, and sampling is probably the more difficult, because we're actually saying that we want to affect child processes and make sure that we count things correctly in the child, and so it calls for some kind of integration test.
B: That's more than just one language, and I don't know how to build that exactly in the OpenTelemetry world. So it's asking for more than I know how to do easily, but it does suggest something. Looking across the rest of the OpenTelemetry landscape: where else do we have something like this?
B: This is complicated enough that I'd like to know how you expect me to test it, standards body, or whoever you are. So I'm starting to think: if OpenTelemetry doesn't really have a pattern to go after, just saying "this is what we want you to build, and this is exactly how you should test it", sort of like a testing-requirements section, maybe that's all we need. But you're right.
B: We will learn the testing plans and document them, and I think if we keep this off the default SDK spec for now, it'll just help us, as you say, iron out kinks. I can certainly see that could happen. So thank you; this aligns with our strategy. We can look for the appropriate balance of rigor, correctness, and labor of testing; I'm not sure what it is exactly.
B: I wrote some tests to make sure my own sampler was doing the basic idea: I like functional tests, and I wrote some, and I think it's probably sufficient, but I wouldn't be surprised if there are bugs, and therefore we should be adding tests. So as we develop these, we should look at the bugs that we find and maybe share test cases. Basically, this is where we get into... I've had this problem with my manager.
B: "These are the tests you should perform on your sampler", kind of spelling it out, and the same for the exponential histogram, for example: show us that you've got one of these, show us that you can test it. Maybe some standardized testing, like just a plan, and it can be words: you should test that this value equals this, and so on.
B: In the case of sampling, I'd say: put a million spans through it with this sampling ratio and expect to count so many at the end, within a particular error. That is a test that you can check statistically, and so it's the kind of thing that we ought to be able to do with sampling. And I would love help writing statistical tests, because they fail once in a while, and you have to explain that.
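The kind of statistical check described above can be sketched as follows, with a toy stand-in sampler (not a real TraceIdRatioBased implementation) and a tolerance of a few binomial standard deviations, which is exactly why such tests still fail once in a while:

```python
import math
import random

def probability_sampler(ratio: float):
    """Toy sampler keeping each span with the given probability;
    a stand-in for a real trace-ID-ratio sampler."""
    return lambda span_id: random.random() < ratio

def check_sampler(sampler, ratio, n=1_000_000, sigmas=5):
    """Feed n spans through and require the kept count to land within
    `sigmas` binomial standard deviations of the expectation n*ratio."""
    kept = sum(1 for i in range(n) if sampler(i))
    expected = n * ratio
    tolerance = sigmas * math.sqrt(n * ratio * (1 - ratio))
    return abs(kept - expected) <= tolerance
```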
B: So that's the kind of area where we could use help. But I don't have anything more prepared for this topic or this discussion today. Hey, we have someone joining, and since we were just out of topics, I might want to check with Pavol, who just joined our call.
B: I had an item at the beginning of the agenda meant for you, and now you're here. The question was whether anyone would like to talk about configurable and/or remote sampling, from the X-Ray or the Jaeger worlds, because it's almost an orthogonal topic to the probability sampling we've been talking about here, and not something that my employer has pushed me to do. But we do know who the interested parties are, and, Pavol, you're one of them. I guess this is just to say we're all enthusiastic.
B: It's just that I know the community reaches out all the time, and part of it is that, as I said earlier in this call, there are some vendors who care a lot about probability sampling the way we've been doing it, and some vendors who seem ambivalent about it. And the vendors who seem ambivalent about it are all asking for something completely different, which is remote and configurable sampling.
B: For them it doesn't matter to count the spans, which is what probability sampling is about, but it does matter to choose the spans. Splunk is kind of the elephant in the room here: I want them to care about probability sampling, but they don't seem to, and I have feelings about that, but those are business issues. So, to get Splunk excited about sampling...
B: ...we need to talk about choosing the sampler decision based on attributes, and that's all I have: everyone says, that's what I want, that's more important to me than probability sampling. I keep saying it's different, it's totally independent, but obviously it's in the same category of what people want. And we all know about the Jaeger sampling.
B: There ought to be a way to converge on something that OpenTelemetry can get behind, and again, like the earlier discussion, it's going to involve building these things. As soon as you talk about specifying them, the labor involved, and the effort, and the discussion explode; in this case...
B: ...I know the capabilities of the Jaeger remote sampler involve basically just attributes, and it could be very, very straightforward. I don't know how complicated this needs to be. In the X-Ray case, they let you choose various sampling settings, and there's a predicate list, and there are views, and so on.
A: Yeah, you can implement a sampler which works like the probability sampler that is proposed by us, right. Currently the sampling rate is fixed, or constant, but it's not a big deal to make it span-dependent and to choose different sampling rates based on the attributes. So you can.
B: So it does start to look like it's mostly a systems-programming question, and a standard for OpenTelemetry would be: okay, I'm a collector, where do I get my sampling configuration? What does the YAML look like? What are the rules of precedence? When do I stop evaluating? How do I know what applies if multiple people say I should get this?
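To make the precedence questions concrete, here is a hypothetical rule set; the field names and shape are invented for illustration and are not an actual OpenTelemetry or Jaeger format. One plausible answer to "when do I stop evaluating" is first-match wins, sketched here:

```python
# Hypothetical remote-sampling rules, highest priority first;
# evaluation stops at the first matching predicate (first-match wins).
RULES = [
    {"match": {"service.name": "checkout", "http.route": "/health"},
     "sample_ratio": 0.0},
    {"match": {"service.name": "checkout"}, "sample_ratio": 0.5},
    {"match": {}, "sample_ratio": 0.01},  # default catch-all rule
]

def resolve_ratio(attributes, rules=RULES):
    """Return the ratio of the first rule whose predicate is a subset
    of the span's attributes."""
    for rule in rules:
        if all(attributes.get(k) == v for k, v in rule["match"].items()):
            return rule["sample_ratio"]
    return 1.0  # no rules configured: sample everything
```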
B: Is that okay? Can we do that? We can do that for probability sampling, but the actual configuration file, and going through it, is, I think, what's hard here. And then Jaeger and X-Ray actually both have one more base case that's not one of the probability samplers, which is like "one of these per hour"; there are these non-probability sampler ideas, and we've talked about that a lot. It's super complicated, but it's not too hard either.
B: You can adapt, or you can flip coins to choose values, and we've talked about that a lot. But I don't want that to be an obstacle: token-bucket or leaky-bucket sampling, one per minute, once per hour, once per second, all that can be accommodated outside of those base cases by using these zeros that we've talked about, and we've written up the sampler composition rules.
B: Yeah, I kind of made this idea up; well, I didn't make it up. The way I'm approaching this is that the metrics SIG, starting with OpenCensus metrics as a plan, had the idea of metrics views from the start, because OpenCensus did; whereas on the tracing side of OpenCensus, historically, there's just no precedent for this other than X-Ray and Jaeger.
B: What I'm calling view configuration is: okay, you've got an SDK and it is producing this ton of data, with different instrumentation libraries, different span names, different attributes and events and all that stuff. But really the first two coordinates are instrumentation library and span name, and then attributes.
B: First of all, there was a recent debate about whether the instrumentation library belongs in the sampling decision, and the reason we eventually rejected it is that we think you can set up your SDK so that, by the time you're actually sampling, you know your instrumentation library; you could bind that before you get the sampler set up. Okay, well then, that makes it harder to set up SDKs.
B: It means I'm going to have to bind each sampler with each instrumentation library, but that is an efficient approach. Then the next question is: okay, I am now starting my process and I have 15 instrumentation libraries, and I want to configure one of them to suppress all of its spans, and I want to configure one of them for 100% sampling, and I want to configure this thirteenth one to sample only one of its spans at 50%, and I want this other library... So what I'm talking about is the behavior I want.
B: ...the behavior I set up in the SDK; I'm not really talking about sampling. It's possible to think of what I just described as one composite sampler that looks at resource, and instrumentation library, and span name, and attributes, and that's why I'm trying to push back on calling that one sampler. What I'm trying to say is: we've got these base-case samplers that do probability, that flip coins for us, and then... go ahead.
B: Yeah, let me just try to clarify what I was getting at. The discussions in the spec over the past month and a half or so had to do with: must you provide the instrumentation library to the ShouldSample decision, which is the sampler's interface?
B: You can configure your sampler... well, you can't sample based on latency; sorry, forget that. If you have a sampler decision with a predicate that says "instrumentation library equals A, and some other attribute condition", you can bind your instrumentation library into your sampler decision, so that only instrumentation library A ever evaluates the second attribute.
B: That's the type of optimization we envision. So when I say view configuration, I'm trying to separate simple sampling decisions from "what is it you're trying to do with all the instrumentation in your code". I take this term, view configuration, from metrics, where the SDK spec has a section called view configuration: it says, for each instrumentation library, instrument, metric name, and attributes, you can configure which aggregation to apply, and that means you can suppress...
B: ...metrics; it means you can have multiple aggregations on metrics. All that action of configuring which metrics you want and which attributes you want has an equivalent in tracing, which is basically what you're talking about: Jaeger's sampler and the X-Ray sampler do that for spans. And so there's this debate happening: should we try to get the one big sampler, the grand sampler we'll call it, which has all the decisions about views built into it, as well as all the stuff about probability built into it?
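The separation being argued for here can be sketched roughly as follows, with invented names: a per-instrumentation-library "span view" table is bound once, up front, and merely resolves to one of a few simple base samplers, rather than one grand sampler re-deciding everything per span:

```python
# Hypothetical span-view table keyed by instrumentation library name;
# the library names and view fields are invented for illustration.
ALWAYS_ON, ALWAYS_OFF = "always_on", "always_off"

VIEWS = {
    "io.example.http":  {"sampler": ALWAYS_ON,  "record_events": True},
    "io.example.noisy": {"sampler": ALWAYS_OFF, "record_events": False},
}
DEFAULT_VIEW = {"sampler": ALWAYS_ON, "record_events": True}

def view_for(instrumentation_library: str) -> dict:
    """Bind the view once per library, before any sampler runs."""
    return VIEWS.get(instrumentation_library, DEFAULT_VIEW)

def should_record(library: str) -> bool:
    """The view resolves to a base-case sampler decision."""
    return view_for(library)["sampler"] == ALWAYS_ON
```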
B: That's just the mega sampler. Or do we separate these problems: sampling is a decision you're going to make in the moment, and what you actually want to see from your SDK when you start tracing is a different kind of decision, which can be implemented as a sampler, but can also be implemented by hard-coded rules, or in other ways, and it can also control variables that are not controlled by sampling.
B: So I'm thinking about verbosity level, and, like, do you want to suppress events in this span? There are all kinds of ways you could change what gets recorded that are not a sampling decision; those are things you might control with a view as well. The example I used was: okay, there's a span event, and do you want to encode the stack trace? Because I'll tell you where it happened, but that's expensive. Can I turn that on once in a while, or for just one instrumentation library?
B: Those are the things that are not sampling decisions, that are configuration of the view, and I think there hasn't been enough discussion; it sort of swirls around all of this. Like, how do I set up my span processor? Well, that has some of the settings that control how spans get written out, so it's part of your view configuration too: are you batching?
B: Are you not batching? So I see sampling as a component of a view configuration, but the view configuration feels bigger to me, and I think the grand solution here is asking for span views; the sampler would be...
F: ...one of the view configurations, is that right? Thinking about it, that's right, yeah. I like the idea; especially when you mentioned the stack traces, it really made it clear for me.
B: Cool, yeah. So I'm not trying to push back and stop people from what I call the grand-sampler design, but I think it's conflating things that ought to be better left unconflated, so that we can focus on just probability sampling for the base samplers, and any kind of configurable sampler will just do its decision-making and return you some base sampler.
B: Anyway, I'm expecting, or hoping for, leadership or proposals or further movement on this from somewhere, because I think Jaeger really wants it, and I'm enthusiastic and I will be a supporter, yeah.
F: So we have just had a discussion that we will deprecate our clients at the end of this month, basically accept only security or bug fixes, and basically try to redirect older users to OpenTelemetry. But at the same time, in OpenTelemetry, the remote sampler is missing in most of the SDKs.
B: Let's just focus on it, I guess. It'll be good to have Jaeger users coming in to ask for it, but I'm not sure how to get them off the old libraries without it; I'm not sure.
B: You'd have exactly the same remote sampling; that's kind of what I'm expecting to hear. Before you joined the call, I tried to address a similar type of chicken-and-egg problem for the probability sampling itself, which is to say: we've specced out something that we don't love, and I'd like the W3C to give us some bits so that we can make it better. And so I don't want to force all the OTel libraries to implement this thing that we're not quite sure of yet. Maybe this would be the way to go for Jaeger: there would be a Jaeger-specific OTel library, that's OTel plus the Jaeger sampler hacked in there, while we try to specify it; because, the way this body of standards moves, if you've got a month to do it, I'm not sure that'll happen in a month.
B: Right, as in the contrib. So yeah, if we got to that point, then the only remaining worry, which I've voiced at least once, is that I'm a little concerned that, if OTel doesn't try to say what should be done, the Jaeger sampler will become a de facto standard; which is okay, I just haven't looked at it very carefully.
B: It's not that strong. It says that, if you have one, this is the name that you'll use to select it; so it goes as far as saying how to get the old behavior, but it doesn't say what that behavior really would be. Anyway...
B: Well, I think we should. I would love to see a proposal; I guess it's hard to write these specs, so I guess you're saying that there's a file format, or it could be a protocol format, as you see in the first link there, and then, I guess, definitions that tell you how to evaluate it.
B: I think that would be a good start, and then we can figure out the gap between there and the support that is wanted more broadly. I'm thinking of the X-Ray support when it comes to setting probabilities and rate limits, and the actual decision.
B: Yes/no decisions often turn into probability questions, but the part about how we specify, how we format, the protocol syntax for the actual configuration, how we communicate it: all that stuff falls outside sampling, to me, and I think that's what Carlos was saying we should begin to bring into OpenTelemetry.
B: Well, I think we're enthusiastic. I look forward to this being carried out somehow.
B: Very good, okay, everyone. Thank you for joining this call. I'll keep working on the trace data model stuff we talked about earlier, and I will get to some sort of question about a test plan for you, Otmar, and Daniel, and ourselves. Thank you all; see you, maybe, next time, I don't know. Maybe we should do this every two weeks, I'm starting to think. We'll see; I'll be on the Slack.