From YouTube: 2022-07-28 meeting
Description
cncf-opentelemetry meeting-2's Personal Meeting Room
A
B
It's going all right at my normal job today. Several people are in town, I'm in San Francisco, and so...
A
Oh okay, yeah.
A
The team I work on, the telemetry team, is about eight of us total, and most people are on the east coast. We rented a house together in West Virginia in June for four days, and people who couldn't drive in flew in, and we just hung out for four days, and it was great. It was the first time in three years I've been able to get together with co-workers.
B
At my job before this, we met once or twice with folks from Honeycomb; Christine came at one point. Oh.
B
And then Archon came at one point, and I've been wondering what fraction of the whole company at this point is still in the Bay Area, versus, it sounds like, many on the east coast.
A
I had actually worked with Ben back 12, 13 years ago, and so I was interested in Honeycomb. And then a few years ago I was talking to them and they were still very much a San Francisco-focused team, so there wasn't really a fit. But then a year later, actually right before Covid.
A
They had made the decision, after having had a difficult year, that it made a lot more sense to be fully remote, or fully distributed, and so that worked out well going into Covid. But I joined right after Covid started, actually.
A
But yeah, with the exception of three people in the UK right now, it's still very much a North American company. We have just opened a business unit in Canada, because we had so many people in Canada it was worth it. But yeah, growing fast. It's nice.
B
Yeah, I just wanted to thank you very much, Peter, for your comments on the Google Doc. I wanted to go through them together, make sure I understood your meaning for all the threads, and maybe resolve some of them. So let me open that up now.
B
Let me scale it up. So this one, I thought that was a good point; I will work that in. I hadn't had the whole completeness criterion in my head, so I don't think that's well covered by any of these, and I think it is distinct and something that is important to a lot of people.
B
Let's see, this one I did not type a reply to, because I thought it'd probably be easier to just discuss it. So I guess I would start with: you mentioned that the biggest problem with the currently specified trace-id-ratio-based sampler is that it doesn't, I'll just say, support adjusted count, right? That's definitely true. It's interesting; one of the sort of meta comments I have about this doc so far:
B
This comment, as well as some of the themes of Yuri's comments: I'm trying to balance presenting what we have today and why it's problematic, the sort of fundamental limitations, versus saying that with some more iteration, with a little more work, this thing I'm complaining about could be fixed; this problem could be solved.
B
So I guess it's true that pretty much every sampler that anyone uses today doesn't support adjusted count. Yet I'm not sure how I should work that in. I guess, yeah, well...
C
So the other problem which we identified in the past with trace-id-ratio-based is that the trace id is not normalized in terms of the probability distribution of its values. All the specification says is that it's a unique number, which kind of implies that it's random, but it doesn't really say which bits are meaningful and which are random, and that makes any hash function on the trace id not vendor-independent.
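The point C is making can be seen in a few lines. This is a hedged illustration, not the OpenTelemetry implementation; `trace_id_ratio_sample` is a hypothetical helper that mirrors the spirit of a trace-id-ratio-based decision:

```python
import random

def trace_id_ratio_sample(trace_id: int, ratio: float) -> bool:
    """Sample when the low 63 bits of the trace id fall below ratio * 2^63.

    The decision is a pure function of the trace id, so every participant
    applying the same ratio agrees on it, but only if those bits are
    uniformly random, which the spec does not actually guarantee.
    """
    threshold = int(ratio * (1 << 63))
    return (trace_id & ((1 << 63) - 1)) < threshold

# With genuinely random ids, the observed rate tracks the ratio.
rng = random.Random(42)
random_ids = [rng.getrandbits(128) for _ in range(100_000)]
rate_random = sum(trace_id_ratio_sample(t, 0.25) for t in random_ids) / 100_000

# With structured ids (say, a counter in the low bits), it does not:
# every counter value is far below 0.25 * 2^63, so everything is sampled.
counter_ids = range(100_000)
rate_counter = sum(trace_id_ratio_sample(t, 0.25) for t in counter_ids) / 100_000

print(rate_random)   # close to 0.25
print(rate_counter)  # exactly 1.0
```

A vendor hashing different bits of the same ids would reach different decisions, which is the vendor-independence problem.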
C

B
Since this shortcoming applies to pretty much everything on the market, I guess you could say... would you agree with that, that nothing...?
C
Maybe with the exception of AlwaysOn. Of course AlwaysOn doesn't need the adjusted count, because it's implied, but that's an exception. And by the way, you never know whether it was AlwaysOn or not, right? So there is no feedback at the back end about what was going on with the head or tail sampling, right?
B
Maybe right here I'll mention that all the following don't support adjusted count, which is bad, kind of thing, right? Okay. And then for the first part of your comment: "The following observations are probably..."
C

A

B
What would you say about... I guess I would submit, or the definition I have in my head was, that a probabilistic sampler could include a sampler akin to the Jaeger adaptive one, where it is probabilistic, but internally the cutoff for its decision to sample is constantly changing, and in that way it is attempting to balance some of the sampling goals.
B
The egregious flaw, or egregious shortcoming, for this one is that the threshold is static, and maybe there's a clearer way I could say that. Whereas I would not levy these complaints against probabilistic samplers where the threshold is constantly changing.
B
Okay, it sounds like we agree. Then, would you like to see any edits or clarifications based on the first part of your comment?
C
B
Okay, thanks for the comment. Let's see.
B
Yeah, this was in this comment here, and one or two others from Yuri, who, I believe, is a primary maintainer of the Jaeger project software. He mentions a couple of times where I sort of state, you know, "this is how it works, and it's not great," and he's like, "well, it could be different with a little bit more work," and so part of my response or reaction is, like, are you sure?
B
I
guess
I'll
make
some
room
to
say.
Like
you
know
this,
this
could
be
different
or
you
know
generalized
with
some
more
iteration
here
it
he
does.
Similarly,
so
I
think
I
think
I'll
just
probably
find
a
way
to
like,
since
he
was
interested
enough
to
like
call
these
two
examples
out
I'll,
probably
work
in
integrate
his
his
comments
on
these.
This
is
like
a
correctness,
accuracy,
kind
of
thing
that
I'll
tweak
this.
B

C
This is far from true; what's going on here with the token buckets... well, but let's keep it in, okay. Unbiased sampling is not something that is always needed. Depending on the application domain you may have to apply some unbiased sampling, but depending on the domain it might not be that relevant.
C
For example, if we are dealing with internet traffic, with thousands or even millions of independent actors that generate the spans, bias is not important, because they are so independent from each other that it doesn't really matter. So here the problem is that we don't get any spans that were sampled with probability different than one, which implies an adjusted count of one for all of them, and we don't see anything else. So there is just missing data; this is a missing-data problem.
C
This theoretical behavior is correct in the limit: when the size of the span set goes to infinity, everything looks good. But by using numbers as small as 2 to the minus 62, it means you need an order of magnitude more input spans than 2 to the power of 62 in order to really see the benefit, which of course is not practical at all.
B
Yeah, the way I think about this is that, because I do keep the p threshold non-zero, that was the necessary and sufficient thing to keep the whole system mathematically sound. And I used "zero bias" because what I had understood is that it's unbiased in the sense that, on average, like you said, over many, many samples or observations, in the limit...
B
It
is
gonna
not
deviate
from
the
like
true
that,
like
the
expected
value
of
this
estimate
is
gonna,
be
the
like
true
value.
However,
because
you
have
these
like
extremely
infrequent
events
that
will
blow
up,
you
know,
have
huge
effects
on
on
the
the
estimate.
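The blow-up being described here can be simulated directly. A minimal sketch, with a hypothetical `estimate_total` helper that applies an adjusted count of 1/p to each sampled span:

```python
import random

def estimate_total(n_spans: int, p: float, rng: random.Random) -> float:
    """Estimate the true span count from a probability sample: each span
    kept with probability p carries an adjusted count of 1/p.  The
    estimator is unbiased in expectation, but with a tiny p almost every
    estimate is 0 and a rare one is enormous."""
    kept = sum(rng.random() < p for _ in range(n_spans))
    return kept / p

rng = random.Random(7)

# Moderate probability: individual estimates cluster near the truth (10,000).
moderate = [estimate_total(10_000, 0.1, rng) for _ in range(3)]
print(moderate)

# Tiny probability: estimates are almost always 0.0; any non-zero one
# jumps to roughly 1,000,000, far above the true count.
tiny = [estimate_total(10_000, 1e-6, rng) for _ in range(5)]
print(tiny)
```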
B
Yeah, yeah, yeah, okay, thanks. Yeah, I totally accept that, for sure.
B
I guess I would say that I'm pleased to notice that this shortcoming, I think, can be tied back to goal number three, which is that the error for such an estimate is going to be so huge that you won't be able to derive valid insights from it in reasonable time ranges with reasonable amounts of spans.
C

B
Yeah, yeah. No, I guess I should say that they're, like...
B
If you squint, they're both suffering in this category, but undeniably they're different orders of magnitude apart in how badly they fall short of this goal. Yeah, cool, let's see. I wanted to make an analogy to the Honeycomb use case, in case you would find it helpful.
B
So I imagined what kind of results I would get if I had this concept, and it'd be like, constantly, my count would be one, and then maybe once in my lifetime the count would just blow up when it hit one of these spans.
A
Well, I have had a customer who was sampling at actually only one in 500, but the rate of arrival of the events that caused the problem was low enough that what they were seeing was just nothing, nothing, nothing, you know. Yeah, and they were like, "how do we deal with this?" And I'm like, yeah... I actually need to get back to them and talk about it a little more, but yes, right, okay, yeah!
C
If I may interrupt: I understand your motivation here, Spencer, for writing this thing, and I think eventually the conclusion that will be reached is that the customer will think more about a rate of spans per time unit than in any other criteria.
So, for example, probabilistic sampling with a specified probability will not be used directly by the customers, and the same...
B
C
Rate, right. So, one in 500: well, you have to know your system very well in order to declare that this is the right thing, this is what you want to do. So eventually, most of the specifications that make sense will be expressed in terms of span rates, and this of course ties very well to requirements number one and two. And it has to be dynamic, right, meaning that the probability which is used somewhere as a foundation for making the sampling decisions will have to change automatically and dynamically.
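The dynamic, rate-targeting behavior described above can be sketched as a simple feedback loop. This is an illustrative simplification, not Jaeger's actual adaptive algorithm; `next_probability` is a hypothetical helper:

```python
def next_probability(current_p: float, observed_rate: float,
                     target_rate: float) -> float:
    """One step of a rate-targeting controller: rescale the sampling
    probability so the expected post-sampling rate matches the target,
    clamped to (0, 1].  A drastic simplification of what adaptive
    samplers do, but it shows the feedback loop."""
    if observed_rate <= 0:
        return 1.0  # no traffic seen yet: sample everything
    return max(min(current_p * target_rate / observed_rate, 1.0), 1e-9)

# Traffic arriving at 5000 spans/s, target 100 spans/s, starting at p = 1.0:
p = 1.0
incoming = 5000.0
for _ in range(3):
    sampled_rate = incoming * p  # expected spans surviving per second
    p = next_probability(p, sampled_rate, target_rate=100.0)

print(round(p, 6))  # settles at 100 / 5000 = 0.02
```

The user states a span rate; the probability is derived and keeps changing as traffic changes.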
A
From my point of view, you expressed yourself somewhat categorically, and I'm trying to decide whether I agree with the sort of categorical nature of the statements. Okay, yeah, sure.
B
Yeah, on the subject of what is the level of abstraction at which, you know, it would be the dream to configure sampling, or to think about it: there's a lower part in this doc where I refer to a feature in the OpenTelemetry Collector right now, where the sort of way it asks you to think about setting limits on things...
B
I
think
it's
just
very
strange
and
like
I
wouldn't
prefer
to
think
in
those
terms,
and
so
I
think
we'll
probably
get
to
that,
but
but
sort
of
inspired
by
this.
B
This thread here, which I kind of passed over: it's similar to how software engineers in the past five years have shifted from saying "the error rate for my service should stay under some threshold," or "I don't want any anomalies in here," to saying "we have a budget for errors, and we're comfortable with spending some amount of errors."
B
We
have
some
notion
of
budget
and
it
seems
as
though
people
have
found
value
in
like
that
sort
of
acceptance
of
likes
like
defining
a
tolerable
amount
of
like
imperfection,
and
so
that's
kind
of
I
like
took
a
lot
of
inspiration
here
and
actually
like
just
last
night.
This
used
to
say
minimize
sampling
error,
but
then
that
I
realized
last
night
that
that
was
sort
of
discordant
with
my
like
mention
here
of
like.
Oh,
you
want
appropriately
reliable
or,
like
appropriately.
B
A
Solid insight. You know, we can't lose sight of the fact that what people need is the ability to make useful inferences about their systems from this data, and so we can't let perfect be the enemy of useful. And so I think there's this kind of competing goal: we would like to be able to express adequately...
A
...you know, the goals and the design criteria in the system, if you have that much available. But there are an awful lot of use cases where close enough is good enough, and we have to support people being able to do that without needing to become PhDs in statistics simply to write their configuration files. So, right.
B
Right, and that very last thing you said, about keeping the minimum necessary knowledge low enough: that actually might be an idea that counts against where I was just leading.
B
I was just thinking, brainstorming, that, rather than, or sort of in addition to... I think for a lot of people it's intuitive to say, oh, my storage system, or my SaaS vendor, only wants me to send, you know, this many spans per...
A
B
Or this many gigabytes; my self-hosted thing can only tolerate this many megabytes per second, or something, I don't know. So there are sort of units of data per unit time that I think are natural for people. And I was sort of musing in this response here that, in terms of conceivable configuration, it's pretty clear how the requirements that stem from these goals should enter into configuration, and what you type in your configuration.
B
You
know
this
many
things
per.
Second,
it's
less
clear
right
now.
How
like
this
concept-
or
let
me
say
it
differently,
any
sort
of
requirements
on
this
goal.
B
We
don't
yet
have
like
a
common
way
of
putting
that
into
like
sampler
configuration
or
talking
about
that,
and
that
is
sad
because,
like
what
you
will
end
up
with
when,
if
you're
like
not
able
to
tell
your
sampler
like
this,
is
the
acceptable
range,
then
it
like
won't,
have
any
knowledge
of
like
what
acceptable
range
means
to
you
and
it'll,
probably
just
like
focus
on
meeting
these
two
object
objectives,
and
so
I
try
to
illustrate
that
one
like
right.
B
If,
if
you
are
able
to
communicate
this
in
like
a
more
precise
way,
then
it
could
more
economically
identify
when
it's
happening
that,
like
it
could
do
with
collecting
less.
C
Right, so I have a number of comments here. Yeah, the first thing is that I don't think it's possible to declare this target sampling error during sampling configuration and hope that it will cover everything, because almost always the sampling error will depend on the query that will be run on the collected data. I can explain it a little bit later, but that's one thing. The other thing is that your comment suggests that we will decrease the sampling error when we see that we have too much data. I think...
C
In most cases it would be the opposite: when collecting certain statistics, when we don't have enough data, for example one or two data points, the intelligent backend should say, sorry, we cannot run any meaningful statistic on that. You can see, let's say, two traces, and they are good for debugging purposes, but nothing else; you cannot deduce anything statistically from them. Now, going back to my first point: well, you know, in many cases...

C
Then we have to increase the volume of spans surviving the sampling.
C
There is something called a confidence interval in statistics, which can be calculated numerically. Well, you are all familiar with political polls, where they base some...
C
Statistics
on
sampling
an
arbitrary,
randomly
selected
set
of
popul
people,
which
is
fairly
small.
It's
of
order
1000,
which
gives
the
confidence
interval
about
three
percent,
and
it
does
not
depend
on
the
size
of
the
general
population,
which
is
great
news,
because
it
means
that
we
should
be
able
to
calculate
our
confidence
interval
based
on
what
we
see
after
sampling,
regardless
of
what
was
the
original
input
for
for
the
for
the
sampling
algorithm
and
in
most
cases,
this
confidence
interval
is
calculated
by
using
one
over
square
root
of
the
number
of
samples.
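The 1/sqrt(n) behavior mentioned above is easy to sketch. `margin_of_error` is a hypothetical helper using the standard normal-approximation formula for a sampled proportion:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion estimated from n
    random samples.  p = 0.5 is the worst case; the key property is that
    the error shrinks like 1/sqrt(n) and does not depend on the size of
    the total population being sampled."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1000, 200):
    print(n, round(margin_of_error(n), 3))
# 1000 samples: about 0.031 (the familiar ~3% poll margin)
# 200 samples: about 0.069, shaky enough to flag with a warning
```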
C
Okay, and you can just display this information. You can say: okay, you are fairly good here, you are in good shape. On the other hand, if there are only 200 of those, you say, oh, it's not a red light yet, but you should take it with a grain of salt, because we don't have enough samples.
A
C
B
So, at a high level, Peter, what you were just describing: I heard that as potentially a feature that could exist in the analysis software; that's where confidence intervals or similar could come into play, not so much anywhere upstream of that. Is that kind of the...
C
...relevance you had in mind? Right, okay, right. And from that, there are two ways to affect the sampling configuration: it might be done automatically or manually.
C
B
Oh yes, Yuri identified that I got really lazy at the end, and this is not at all consistent with the sections before. "Do you expect to have a configuration format?" I didn't answer him yet, but my intention, or what I believe we should probably shoot for as a goal, is to answer his question here with "yes," just because I think...
A
Just coincidentally, I've been working on the collector SIG as well, and have been involved with... we're extracting a configuration language, essentially, for expressing collector stuff. That language is pretty limited right now, but the hope is to actually pull out a common parser and language structure that could be used by various elements within the collector. And so I think it makes sense, as that language evolves, to make sure that it has the expressive power to do the things you're talking about here.
B
Cool, thank you for mentioning that. I was vaguely aware, just from looking at GitHub, of certain efforts now and again, and I guess I didn't put the...
A
Do you know collector-contrib? It's in collector-contrib, and within collector-contrib we are pulling out the thing; you will see it, it's called...
A
Telemetry Query Language, TQL, and it's going to... I'm not sure if it's been merged yet. Yeah, there it is: tql, in that folder.
A
So
you
know
it
started
out
very
simple:
we're
expanding
it
more.
I
have
some
parser
background,
so
I'm
trying
to
help
and
get
this
thing
to
be
sort
of
a
grown-up
language,
parser.
So.
B
Okay, cool! Thank you. This is newer than anything I had seen. In my research I did find one or two precursors that might have attempted something somewhere, but this is certainly more recent, so this is helpful to know about, for sure. Thank you.
A
This is currently only in the transform processor, but the intent is to have, well, first of all, have the transform processor capable of doing most of the things that are currently doing transformations. And then, I do think the idea would be that this one language should be the thing that parses a configuration file and allows you to express useful...
B
Got it. Yeah, I think this was the thing I was aware of, and so you were saying it was split out of this, basically? Yes, yes.
A
So Tyler said we should pull it out into a separate package so it can be used by anybody, and that is currently happening right now, and will probably make its way into reality over the next couple of weeks.
B
Got it, thanks.
A

B
Yeah, that's great. Yeah, that's cool. I might check... I'll check that out and see.
A
B
A
B
The domain of it is a language that selects spans or traces, kind of thing?
A
Yes, that is the current domain, but I think part of the goal will be a general-purpose expression processor, for example, that can handle basic math and stuff, so being able to use that. At the moment it's kind of handed an expression, and it gives you back a parse tree for that expression; there's not a concept of parsing a file yet, and I'm not sure there needs to be.
A
C
I haven't had a look, of course, but it might be too heavyweight for our needs, sort of. But yeah.
A
It's possible. I mean, it's not yet very heavyweight; a basic expression parser is not too bad. But I guess, especially if you're thinking about doing anything client-side, the weight of almost anything tends to get people nervous in JavaScript.
B
Yeah, so a major reason why I like having a concise language for selecting spans or traces in the collector is because at that point, in tail-based sampling, more information is available.
B
All the span attributes and resource attributes and things. Whereas with what you'd call client-side, or head-based, sampling, so much less information is available, and so I wonder if the intrinsic need for sophisticated selection is less in SDKs, because there's just not much data to reference in any policy you could write on the SDK.
A

C

B
So that was it for the threads. I'm going to incorporate the stuff I've said in the side, sort of inline, as little to-do things, as well as expand this, but I think...
B
I made a choice right here where I say: this OTEP is not going to do a super detailed comparison of all the characteristics of the various dynamic samplers that exist; there's three or four of them.
B
Of those that seem to be the most popular, I mentioned Jaeger's adaptive mode above; the other two I have in mind are Amazon X-Ray and Honeycomb Refinery, and those are really interesting to compare. I did a bit of that comparison, and it's not present in this document, because my conclusion was that maybe it's not necessary information to proceed with the actual specification of a basic configuration structure.
B
The way that would be most useful: each of these that I just named, Jaeger, X-Ray, Refinery, has a basic structure in terms of a sequence of rules, and in AWS's case...
B
Each rule gets a priority, and that imparts an order to the rules, and then you go through them rule by rule. Jaeger has a different way of imparting an order to the various pieces of its configuration, and Honeycomb too has a notion, I think, of a top-to-bottom order in which rules are evaluated.
B
None of these, for example, afford a way to say: combine these rules with ANDs and ORs. And I don't mean the scope of spans that qualify for a rule; in Refinery in particular you can say "spans that have this attribute and this attribute," you can do that to select for a specific rule.
B
What I'm saying, and I don't know if this is necessary to support, is that you can't say: I need these two rules to each produce a sample decision, and only then will I actually sample. And so to me it was interesting to survey all of these things to figure out, like...
B
All of them, in their own ways, support this atom of a rule, and then those rules are visited in a certain order, or aggregated according to some algorithm. And I think the part of all of those systems most pertinent to the present objective, a sort of skeleton, abstract configuration format, is how they evaluate their rules, which is why that's the part I kind of focused on. But there are other...
B
There
are
other
dimensions
to
these
things
that,
like
I,
I
think,
are
less
relevant
to
the
configuration
problem
but,
like
I
know,
I'm
kind
of
rambling
but
like
where
I
was
gonna
end
up.
Was
that,
like
I
did
this
task,
this
detailed
comparison
thing
and,
like
I
personally
came
away
saying
oops,
I
came
away
saying
you
know.
B
I
don't
think
this
analysis,
or
this
comparison
of
them
ended
up
being
useful
for
like
answering
the
the
current
question
of
configuration,
although
I
could
believe,
like
remembering
how
I
felt
before
I
did
it,
I
was
like
maybe
it'll
be
maybe
I'll
find
something
you
know
pertinent
when
I
do
this,
maybe
I
should
I'm
thinking
I
should
like
share
my
like
comparison
notes
for
these
things
to
sort
of
have
more
people
than
just
me
that
that
that
conclusion
that,
like
there,
there
is
no
like
useful
insight
to
be
found
in
this
in
this
comparison,
that
would
influence
our
like
configuration
format
designing.
B
So I guess what I'm trying to say is, I think I probably will share this. I don't know if it'll end up inline in this OTEP, or if it'll just be a sort of adjacent study that will inform the eventual design in this OTEP.
A
Yeah, I mean, I do think, if you've done that research, it would be useful to see it. It's just, this reply that you have half-typed here: the more I think about it, the less I'm sure that that should be a goal. Okay, yeah, yeah.
A
So, but I think...
A
B
A
At a high level, rather than, I mean, Yuri's comment... how about a TL;DR here, at least? Yeah, yeah, yeah, so yeah.
B
Yeah, that all sounds good to me. Okay, yeah, and I will soften my... I actually don't have a super strongly held...
B
C
A
C
Right, right, so we're talking just about the format. Of course there will be differences, that's obvious, but if we keep the format the same... well, at least initially, all these specifications and configurations will be written by people, not generated automatically.