From YouTube: 2021-08-12 meeting
A
I am adding to the agenda to discuss a prototype I wrote in response to one of the PRs about leaky bucket rate limiting, but I don't want to get ahead of myself.
A
Which are about probability sampling, and maybe that won't take the whole time here. It's time to start. Sure, right, please see the notes. I've written my own name here and I started writing the agenda.
A
So for those of you, I'll just give a quick review. I have two OTEPs right now: number 170 is the first of them and 168 is the second of them. They're out of order because I renumbered one of them.
A
The problem with this document is it's pretty long, and that's because I put a lot of background in. I think it's useful, because this is a topic that is full of nuance, and so I wanted to make sure that sort of fundamental objections weren't coming up. I put a bunch of background in to help with the fundamental objections and tried to point to the science that, you know, underlies this probability sampling, and then, at the end of that, gave some proposed specification text. I don't know what the bar for an OTEP is these days, about how close this has to be.
A
Approximately
that's
the
ultimate
goal
here
is
that
we
can
estimate
the
count
of
a
span
stream
from
the
sampled
spans,
so
this
proposal
builds
on
one
that
atmar
from
baltimore
ertl
from
dynatrace
put
up
and
I've
I've
linked
to
it.
It's
in
the
reading
here
anyway.
A
This proposal says we should use the W3C trace state header spec to carve out a key for OpenTelemetry to use; we're proposing the key is "otel". My example here is: in order to do probability sampling, we want to propagate two pieces of information.
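As a purely illustrative sketch (the exact key name and field encoding were still under discussion in these OTEPs, so everything here is an assumption, not the ratified syntax), the two propagated values might travel as fields under an OpenTelemetry tracestate key, say `p` for the log-adjusted count and `r` for the randomness:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseOT parses a hypothetical "p:<n>;r:<n>" tracestate value carrying
// the log-adjusted count (p) and the extra randomness (r). The field
// names and layout are illustrative only.
func parseOT(value string) (p, r int, err error) {
	for _, field := range strings.Split(value, ";") {
		key, num, ok := strings.Cut(field, ":")
		if !ok {
			return 0, 0, fmt.Errorf("malformed field %q", field)
		}
		n, convErr := strconv.Atoi(num)
		if convErr != nil {
			return 0, 0, convErr
		}
		switch key {
		case "p":
			p = n
		case "r":
			r = n
		}
	}
	return p, r, nil
}

func main() {
	fmt.Println(parseOT("p:2;r:10")) // 2 10 <nil>
}
```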
A
If
we
do
this
in
the
w3c
and
we
and
we
write
specs
on
how
the
trace
id
is
formed
to
avoid
extra
randomness-
and
we
add
these
bits
to
the
trace
parent
header-
we
can
do
this
for
two
or
three
bytes,
so
there's
a
big
difference
there
and
that's
why
this
proposal
is
a
little
bit
trickier.
The
easy
thing
to
do
is
to
take
30
bytes
per
context.
The
hard
thing
to
do
is
to
change
w3c.
A
So
I
just
now,
I've
shown
you
this.
It's
it's
much
shorter,
but
it's
also
a
lot
trickier
to
get
right
and
I
I
plan
to
write
some
more
text
here
to
help,
but
this
is
in
progress.
A
I've
now
summarized
my
two
oteps.
My
mission
is
to
get
those
two
merged
and
move
on
to
writing.
Specs.
I
have
a
prototype
and
I'd
be
happy
to
show
you
that
as
well
I'll.
Stop
for
questions
for
a
moment.
A
This
is
great.
I
actually
have
two
prototypes.
The
first
prototype
carries
both
of
those
oteps
into
code.
I
updated
the
tests,
so
I
can
actually
show
you
fairly
realistic
code.
I
can
also
show
you
what
I
don't
like
about:
the
go
tracing
sdk
this
attribute
map
stuff,
but
one
of
the
trickiest
things
that
I
ran
into
is:
if
you're
going
to
have
this
attribute
and
use
it
for
counting
well
now,
you've
got
this
span
limits
concept
of
having
a
limit
on
attributes.
A
I
actually
updated
my
my
tracing
implementation
to
prioritize
keeping
sampler
attributes,
because
I
don't
want
to
drop
those
because
they
impact
my
ability
to
count.
So
this
code,
you
see
right
in
front
of
you-
is
me:
saving
the
sampler
as
a
last
ditch
effort
when
I'm
dropping
attributes-
and
I
don't
like
the
way
this
code
is
written,
I
would
never
write
this
sdk
myself.
A
Sorry, that's feedback on the Go SDK right there.
A
That will, I think, you're right, that solves this question, Yuri, and that's a nice simplification; I appreciate that feedback. If we have a field called, well, it has to be two fields, I think: sampler name and sampler adjusted count. You set one of them. I don't actually strongly object to setting sampler name unconditionally; it's just that I don't really need it and it's extra bytes. So if the adjusted count is known, you set it; if the sampler name is known, you set it. The rules are in that OTEP.
A
I
don't
want
to
repeat
them,
but
yeah.
That's
a
good
idea!
I'm
going
to
write
that
down
simplify
a
bit
this
logic.
Okay,
also
still
in
the
more
or
less
independent
of
what's
happening
in
this
pr.
This
is
just
a
bunch
of
go
boilerplate.
Here's
the
actual
trace
id
ratio
based
sampler.
There
was
one
before
you're
looking
at
the
old
version
here,
so
I
kept
the
same
signature.
This
proposal
says
we're
going
to
use
only
power
of
two,
so
it
it
does
some
rounding
right
away
and
that's
that's
a
pretty
significant
difference.
A
If
you
ask
for
75
sampling
you're
going
to
get
50
the
same
time
this
this
I
for
testing
purposes,
I
added
a
test
hook,
so
I
could
give
my
own
random
source
to
get
deterministic
sampling
in
for
my
tests,
but
this
is
basically
just
a
default
implementation
of
the
random
generator
that
we're
using
this
is
according
to
otep168
extra
randomness,
we
can't
rely
on
randomness
in
the
trace
id.
Under
this
assumption.
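A minimal sketch of that rounding behavior (illustrative, not the prototype's actual code): a requested sampling ratio is rounded down to the nearest power of two, which is how asking for 75% yields 50%.

```go
package main

import (
	"fmt"
	"math"
)

// roundDownToPowerOfTwo rounds a sampling fraction in (0, 1] down to
// the nearest power of two, as the power-of-two proposal requires.
func roundDownToPowerOfTwo(fraction float64) float64 {
	if fraction >= 1 {
		return 1
	}
	if fraction <= 0 {
		return 0
	}
	return math.Exp2(math.Floor(math.Log2(fraction)))
}

func main() {
	fmt.Println(roundDownToPowerOfTwo(0.75)) // 0.5: asking for 75% yields 50%
}
```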
A
You get a probability. There's this thing called log adjusted count, and there's a relationship between log adjusted count and probability: you do one divided by the adjusted count, which is two to the power of the log adjusted count. So this relates probability to log adjusted count. We're going to use this expression a bunch of times; you'll see the actual logic.
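In symbols, the relationship described here is probability = 2^(−logAdjustedCount), equivalently adjustedCount = 1/probability; a tiny sketch (illustrative names, not the prototype's):

```go
package main

import (
	"fmt"
	"math"
)

// probability and adjustedCount express the relationship above:
// probability = 2^-lac, adjustedCount = 2^lac = 1/probability.
func probability(lac int) float64   { return math.Exp2(-float64(lac)) }
func adjustedCount(lac int) float64 { return math.Exp2(float64(lac)) }

func main() {
	fmt.Println(probability(2), adjustedCount(2)) // 0.25 4
}
```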
A
For
the
sampler
is,
is
here
it's
it's
got
some
comments.
It's
a
it's
a
it's
a
page
of
code,
it
more
or
less
follows
the
old.
If
you
have
a
trace
id
ratio
sampler-
and
you
don't
have
a
parent
context-
you
must
be
a
new
root.
So
there's
this
implicit
thing
I'm
doing
here
is,
if
I
don't
have
a
parent,
I
got
to
be
a
new
root.
If
I'm
a
new
root
create
this
new
random
number.
This
is
something
where.
A
If
we
have
a
spec
for
w3c,
we
can,
we
can
just
rely
on
the
bits
in
the
trace
id.
We
don't
need
to
do
a
new
random
number.
So
that's
actually
the
simplest
case,
and
I
wrote
a
to-do
about
about
the
bits.
There's
question
in
this
respect.
I
think
the
go
implementation
is
doing
something
we
should
spec
differently.
A
This
is
the
case
where
you
have
a
parent
context,
so
this
is
the
thing
that
was
never
specified
in
open
telemetry
up
until
now,
and
we
do
the
thing
that
we're
going
to
do
the
thing
that's
described
in
the
otp,
but
we
have
this
situation
now,
and
this
is
a
strong
reason.
Why
we
should
look
at
using
randomness
in
the
trace
id
if
you
have
a
situation
where
we've
got
the
spec
and
we're
using
extra
randomness
in
the
trace
state?
A
So this fallback here is to go back to the probability sampler logic, which was whatever we had before, and that is the old trace ID ratio sampler logic, which is not specified at the spec level. And then the rest of this is just very straightforward, applying what's in the OTEP. You have a probability; it comes from... sorry, you have a probability.
A
That's
baked
into
this
trace
id
ratio.
Sampler
you're,
going
to
put
that
into
the
trace
state
you're,
also
going
to
reset
you're
going
to
make
sure
that
your
own
randomness
is
in
there
and
then
you're
just
going
to
propagate.
A
So there's this decision to record-and-sample or drop, and it's made consistently; it's based on this single expression. This shows up in Otmar's paper. This is the magic of power-of-two sampling: we do this very simple test right here. I don't want to dwell on that, but it is very simple logic and I have tested it. Down at the bottom here, you see there's a new sampler called propagate-based.
A
You
know
the
the
parent
base,
the
weight
of
spect
is
just
a
delegator
for
four
different
sub
c
sub
samplers,
depending
on
the
situation.
It's
in
this
is
the
target.
That's
the
delegate
you
get
so
this
all.
This
sampler
does
is
propagate
a
decision
and
and
it's
used
by
the
parent
sampler
and
then
this
is
my
parser.
I
managed
to
not
allocate
any
memory.
This
is
the
trace
state
parser,
it's
a
little
bit.
Hacky,
there's
another
another
spec
pr
about
this
that
carlos
took
up.
This
is
not
really
what's
happening
in
this
pr.
A
It's
just
I
needed
something,
so
that's
parsing
a
trace
state
and
then
this
is
the
the
legacy
fallback.
This
is
exactly
what
the
old
code
did
it
computed
a
number
by
shifting
the
bits
over
by
eight
and
well.
The
the
logic
was
something
right
here.
This
is
the
logic
you
sample
if
x
is
less
than
the
upper
bound,
which
is
computed
from
your
probability.
Okay,
I
have
fully
walked
through
this
prototype.
A
Those children are not going to know their probability, because we have no way to propagate 37% sampling. For those children, if you want to count those spans, you're going to have to go to the old way of doing it. You'll see a sampler name on those spans to tell you the adjusted count is not one, and you'll have to assemble a trace, look at your root, and figure out the count; the adjusted count will be one divided by 0.37.
A
If
you
have
37
percent
sampling,
so
we
can
spec
this
out,
but
it's
this
behavior
right
here
under
my
mouse
binary,
big
endian,
take
the
top
eight
bits
shift
by
one
for
some
reason
and
then,
if
it's
less
than
so
we're
taking
the
top
63
bits
of
the
trace
id
turning
that
into
an
unsigned
number
and
dividing
by
whatever
integer
discrete
probability
gives
us
here.
This
could
be
specked,
but
I
don't
know
that
other
languages
do
exactly
this.
This
is
the
gap
in
the
current
spec
right
now.
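For reference, the legacy decision being described looks roughly like this (a sketch of the behavior, not a copy of the SDK's code): read the first eight bytes of the trace ID big-endian, shift right by one to get a 63-bit unsigned value, and sample when it falls below an upper bound derived from the fraction.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// legacyShouldSample mimics the old trace-ID-ratio behavior: take the
// top 8 bytes of the trace ID big-endian, shift right one (63 bits),
// and sample when the value falls below fraction * 2^63. This only
// works if those trace ID bits are actually random.
func legacyShouldSample(traceID [16]byte, fraction float64) bool {
	upperBound := uint64(fraction * (1 << 63))
	x := binary.BigEndian.Uint64(traceID[0:8]) >> 1
	return x < upperBound
}

func main() {
	var tid [16]byte // all-zero ID, so x = 0
	fmt.Println(legacyShouldSample(tid, 0.5)) // true: 0 < 2^62
}
```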
C
No, I mean, I think this isn't a good approach anyway, because this is what Jaeger SDKs do, and we've seen issues where, basically, you can't always guarantee that the trace ID is random enough to do it this way.
A
Yeah, so one of my thoughts here is that it would be nice if W3C had a version-one traceparent that forced you to make, like, a reasonable effort to have true randomness in the top 64 bits. We need 62 of them to be random for this logic in this code, and if we have 62 known random bits, we can avoid generating a new random number. We just count the number of leading zeros, out of up to 62; beyond 62 we don't care.
C
But I suspect that's not going to fly, because we've had this discussion and there are vendors who want to use... I mean, maybe things have changed since then, but AWS, for example, they were encoding a timestamp, always, in the... yeah.
C
I think that's us. Do you know if there is any very simple hash function that we can just apply, that would sort of at least not get skewed by some parts of the trace ID not being random?
A
I
think
that
that's
a,
I
think
that
that's
what
everyone
was
envisioning
in
the
spec
when
they
wrote
the
to
do
it.
There
are
issues
saying
get
a
hash
function
together.
I
guess
that's
the
answer
and
it
seems
like
just
some
amount
of
work.
Yeah,
that's
that's.
I
guess
that's
the
answer.
If
we
can't
get
rid
of
this,
we
should
hash
and
that
that,
just
to
me,
I've
never
written
a
spec
that
talks
about
hashing.
A
That
sounds
challenging
to
me,
but
we
can
I'm
sure
it
can
be
done.
Let's
talk,
yeah.
E
Sorry, I missed the first 15 minutes, but regarding hashing: I think it's very complicated to have a hash function which gives the same results across different languages, and it's also difficult to have a hash function whose output is really random. There are not so many hash functions which satisfy some statistical tests. There's a GitHub project...
E
So, as far as I know, this is for sure...
A
To be honest, the thing that scares me most... I didn't realize I was video muted; now you can see me, hi everybody. The thing that scares me most is not the true randomness part of that; it's the specifying across languages, like getting...
E
You have to do it probably on your own, because there are not many libraries which are implemented consistently. Some hash functions allow seeding the hash function and, as far as I know, for example for MurmurHash3, the seed is treated differently: for example, the Guava implementation is not consistent with the C++ implementation.
A
The part that scares me is this code that I've highlighted, lines 161 through 166. The alternative is to just say, let's just generate another random number, and this can be optimized; this is just a very simple form of it. So I think we've definitely covered that source of complication. We can do hashing; it's not easy. We can do extra randomness; it's expensive. Or we can just spec true randomness.
A
Yeah, so that's another thing you've mentioned: we're going to end up hashing these all the time, and that cost can be real as well. So this is why, I think, our perspective is that the best we could get would be to have the 62 bits of randomness be sufficiently random. I don't even want to say "spec that out"; that sounds hard to spec too. But sufficient randomness means you don't need any extra work, and you pay that cost up front.
A
Okay,
so
we've
covered
it
there's
three
ways
to
solve
this
issue:
it's
propagate
in
trace
state,
some
randomness
propagate
in
trace,
head
or
some
randomness.
Sorry,
there's
four
ways
to
hash
it
and
make
it
truly
random.
A
Those
are
four
four
ways
we
could
go,
but
I
think
we've
at
least
gotten
down
to
the
detail.
Here,
that's
the
hard
part
and,
and
hopefully
I've
at
least
given
you.
The
notion
that
oteps
170
is
in
a
pretty
good
state
and
168
is
pretty
is
is
at
least
exposing
all
this
complexity
we
just
discussed.
I
also
would
like
to
stop
again
to
for
discussion
or
questions.
C
Do you want to open a W3C issue about v1 of traceparent?
A
I do. I was planning to go to the W3C meeting on August 17th, so next Tuesday; I understand there's a meeting, and I was just going to discuss it and bring our findings to that group, which I don't know very well.
A
Opening
an
issue
sounds
great.
I
don't
know
how
or
where
but
I'd
be
happy
to.
A
Okay,
I
don't
know
if
the
next,
if
we're
ready
for
another
topic,
I'm
still
showing
a
prototype,
and
I've
got
one
more
to
show
this
started
with
a
comment
from
well
the
thread
that
started
a
while
back
honorag
asking
for
to
bring
the
leaky
bucket
sampler
from
jaeger
ecosystem.
As
far
as
I
understand,
maybe
it's
from
the
aws,
I'm
not
sure
which
into
the
hotel,
spec
and
I've.
I
flagged
this
as
a
let's
see
if
we
can
get
probability
sampling
instead,
because
you
know
we
lightstep
wants
to
do
spanner
metrics.
A
Basically,
it's
part
of
our
product
and
as
soon
as
this
type
of
non-probability
sampling
happens,
we
can't
count
so
in
in
this
thread.
It
was
discussed
how
we
think
I
say
we,
including
atmar
in
this-
that
we
think
that
we
can
do
rate
limited
sampling
in
a
fairly
straightforward
way
and
not
harm
the
probabilities,
and
so
it's
time
to
put
that
to
the
test.
Ottmar
wrote
this
relatively
brief
comment.
I
think
I've
interpreted
it
and
and
carried
it
out,
so
I'm
going
to
show
you
this
now.
A
This
is
forked
from
the
same
code.
I
already
showed
you
so
it
contains
the
same
stuff,
but
here's
a
new
file.
Okay.
So
this
is
a
new
implementation
of
a
sampler.
It's
in
its
own
package,
sdk
trace
greatly
net,
and
I
want
to
show
you
that's
pretty
simple:
it
leverages
everything
else.
I
just
showed
you.
So,
let's
start
with
the
interface,
you
say
new
sampler,
you
give
a
max
rate,
that's
in
spans
per
second,
I
have
some
options
to
help
me
test
it
just
to
again
determining
stick
randomness.
A
I
do
some
some
sanity
checking
and
then
I
compute
a
target
count
per
period.
The
structure
of
this
sample
is
I'm
going
to
have
a
an
interval
of
time
and
a
target
count
expands.
I'm
trying
to
start
samples
for
that
time.
Now,
it's
out
of
scope
here,
but
basically
the
the
constraint
that
we've
been
coming
up
against.
Is
you
can't
be
both
unbiased
and
do
head
sampling
with
a
rate
limit,
because
you
don't
know
how
many
spans
are
going
to
start
inside
of
your
interval.
A
So I start with a target, and in order to set up the target, we're going to use the power-of-two sampling, and we're going to choose: if we have a target probability that's between two powers of two, we're going to choose a low and a high, and then I'm going to flip a coin to decide whether it's low or high.
A
So
I
have
this
function
called
split
prob
to
split
a
probability
it
takes
in
one
number
and
I'm
well,
and
I
did
some
trickiness
here.
I've
never
done
this
in
my
life,
but
I
did
for
the
first
time.
I
I
split
up
the
ieee
floating
point
number.
I
took
the
exponent
and
I
used
it.
So
what
I
did
is
I
compute,
because
I've
already
got
power
2
in
that
floating
point.
A
I
take
the
exponent
I
make
take
the
negative
because
I'm
I'm
I'm
looking
at
adjusted
counts,
which
are
the
negatives
of
probabilities,
and
so
now
I
have
a
low
and
a
high
this
and
I'm
trying
to
spell
it
out
an
example.
If
I'm
splitting
the
probability
0.375
I'm
halfway
between
one
one
quarter
and
one
half
this
expression
here
says
my
low
probability
is
one:
that's
because
I'm
two
to
the
negative
one,
my
high,
probably
sorry,
my
high
probability
is,
is
one
and
two
to
negative.
A
One is the log-adjusted count for the higher probability there, and two is for my low probability, because two to the negative two is one quarter. And the probability, this (high p minus p) over (high p minus low p), that's the linear interpolation between those two probabilities. I think I'm doing this correctly. So the helper that turns a log-adjusted count into a sampler gives me a trace ID ratio sampler; I'm generating two trace ID ratio samplers for the two probabilities that bracket my target. All right, so the logic for the sampler's should-sample is: I'm going to do an atomic load of my current state.
A
My
current
state?
It's
it's!
I
don't
want
to
take
a
lock.
I
don't
want
to
do
exact,
exact
logic
here
I
want
to
do
approximate,
so
I
take
I
load
my
current
window.
I
compute
the
time
now.
If
my
time
now
is
greater
than
the
interval
I
go
into
a
synchronized
code
block.
I
do
something
once
per
window,
so
there's
going
to
be
a
race
to
find
the
first
person
through
after
the
window
closes
they're
going
to
update
that
window.
A
I'll
show
you
the
update
window
logic
in
a
second,
it's
just
computing
a
new
probability
based
on
what
it
knows,
and
then
the
rest
of
this
is
very
simple.
You
increment
the
count
of
the
window
that
you're
in
atomically.
A
These remember both trace ID ratio samplers, and then I just delegate to the trace ID ratio sampler that was selected probabilistically. That's it. I have a test; I want to show you that this works. Update window: what it does is it looks at the current window that just expired, so this is the window that I'm just finishing, and this is the time at which I am finishing it.
A
This logic is subject to scrutiny; I'm going to ask Otmar especially, or Georg if you're on the call, to double-check me on this. I'm keeping track of my prior count and my prior duration, and I'm going to combine those two pieces of information to compute a new probability. There's math behind this; I don't want to deny it. I think I'm doing it correctly.
A
This
is
an
estimate
for
the
rate
based
on
what
I've
seen.
So
I'm
just
going
to
put
these
numbers
in.
I
did
this
on
paper.
I
I'm
not
sure.
E
I think I should raise some questions here. So the total count: is that the number of spans from the very beginning? Or no, is it counting from...
A
Actually, a maximum a posteriori estimate of the rate is what I think I'm getting, but I'm not qualified to say that. And it could be that I have a bug; yeah, I think I have a bug, and I think you just pointed out my bug: I think I'm not updating the prior counts correctly. But the idea is roughly in place here: I compute a new probability, that's my target, and I split it.
A
I
generate
two
tri-city
ratios
samplers
the
low
and
the
high,
and
I
put
the
probability
in
and
I
go
that's
it.
I
think
you
just
found
a
bug
but
but
turns
out
sampling
is
very
resilient
to
bugs
I've.
Had
this
happen
a
number
of
times
in
my
career,
where
you
write
some
sampling
logic
and
it
turns
out
to
work
very
well
and
you
have
bugs
and
you
fix
that
and
it
works
out.
It
turns
out
to
work
even
better.
I
tested
my
loaf.
My
split
probability
function
this
function
here.
A
I
don't
it's
a
little
hard
to
read
it
in
in
real
time
here.
What
I
did
is,
I
I
put
a
variable
rate
span
producer
through
this
sampler
and
I
let
it
adapt
20
times,
and
then
I
counted
the
point.
The
highest
level
of
this
sampling
meeting
I
want
to
get
through
going
back
to
otep
170.
A
Now
I
think
I've
got
a
bug,
so
I
think
it's
going
to
improve
when
I
improve
it,
but
I
finished
this
just
yesterday,
so
I
more
tests
are
needed.
I
think
I'm
willing
to
write
test,
especially
if
we
can
get
botep
170
approved,
but
this
is
the
idea
for
an
approximate
rate
limiter
now.
The
point,
I
think
everyone
should
see
this
for
clearly
for
what
it
is.
This
will
not
give
you
a
heart
rate
limit.
A
This
is
using
past
probabilities,
past
rates
to
predict
future
probabilities
and
if
the,
if
the
prediction
is
low,
you're
going
to
end
up
with
a
few
too
many
spans,
if
the
prediction
is
high,
you're
going
to
end
up
with
not
many,
not
enough
stance
for
your
target
rate,
but
it
will
adjust
and
come
back.
That's
that's
what
we
hope
and.
A
So,
just
just
to
say
again,
this
is
an
approximate
rate
limiter,
it's
not
a
heart
rate
limiter.
I
believe
the
next
question
coming
out
of
this
thread
about
rate
limited
samplers
will
be.
Can
we
do
a
hard
rate
limit
and
I
I
think
that
there's
a
proof
somewhere
that
you
can't
have
a
heart
rate
limit
and
unbiased
head
sampling,
and
I
don't
want
to
dwell
on
that,
but
but
what
I
do
know
how
to
do
is
to
do
hard
rate,
limited
tail
sampling
and
I'm
going
to
propose
to
follow
up
with
that
in
another
week.
E
Regarding sampling, I mean, what you can also do is reservoir sampling, right? But this does not match the sampling interface as it is defined right now, because you need an immediate sampling decision. If you're thinking of reservoir sampling, it can happen that you buffer spans, basically, and after a period you know if it's sampled or not. So if this is an option, then there's a way to do it also consistently, and also with just powers of two.
E
But I mean, it does not satisfy the sampler interface, right?
A
Yeah, I've run into the same thing. I do understand what you mean. I doubt that the practitioners in the room, unless they've tried this themselves, will quite understand, but I can explain it. I think what you just said, in my words, is: at the moment of making a sampling decision you return sampled or not, but you're going to want to adjust that probability after the fact.
A
Essentially,
when
you
record
spans,
you
can't
record
them
all,
and
what
this
means
is
that
you're
going
to,
I
can
use
the
word
speculatively
start
spans
that
start
traces,
that
you
don't
finish
you
that
you
don't
record
when
you,
when
you
have
a
burst
of
span,
start
you're,
gonna,
you're,
gonna,
try
sampling
too
many
at
the
beginning
of
your
window
and
by
the
end
of
your
window,
you're
gonna
say:
oh,
I
started
too
many
traces.
A
I
have
to
not
record
them
all
and
I'd
love
to
see
a
thing
that
works
with
just
powers
of
two,
because
I
don't
have
that
myself.
But
I
do
have
this
kind
of
catch-all
fallback
that
I've
used
a
bunch
in
lightstep,
which
is
this
algorithm
called
var
opt,
and
I
know
how
to
use
it
to
do
this
as
well.
A
But
but
you
do
run
into
this
question
about
the
sampler
api
being
not
quite
what
you
want,
because
I
think
you
could
do
it,
but
you'd
have
to
add
attributes
just
for
the
purposes
of
tracking
what
you're
doing,
because
you
have
a
decision
coming
out
of
the
sampler
and
you
want
to
coordinate
somehow
later
at
the
end
of
some
window,
which
of
the
spans
that
you
started
are
actually
going
to
record
and
and
to
do
that
consistently.
A
I
think
you
probably
need
more
state
and
you
could
stuff
it
into
an
attribute,
but
I
think
you're
just
abusing
the
api
at
that
point.
So
there's
no
connection
between
a
sampling
decision
and
a
span
object
right
now,
and
that
is
what's
missing.
So
what
I
was
going
to
propose
to
do
just
to
step
around
that
problem,
because
I
don't
like
to
like
run
into
other
problems.
A
My
step
around
this
problem
approach
is
the
the
rate
limiter
sampler,
I
just
showed,
gives
you
an
approximate
rate
limit
and
what
I
want
to
do
is
follow
that
with
a
heart
rate
limit
and
so
to
do
a
hard
rate
limit.
What
I
know
how
to
do,
and
I'm
sure
there
are
many
ways
to
do
it,
but
my
approach
would
be
to
create
a
span
exporter
because
tail
sampling
you're
doing
it
at
the
end
you're.
So
I'm
waiting
for
these
fans
to
export.
Now
I
can
put
a
hard
limit
on
spans
they're
exporting.
A
So in my scheme that I just showed you, it'll be one of two adjusted counts per interval from the head sampler, because you're flipping between a high and a low, and then when you transition to another probability you're going to see potentially two new counts. So you could have up to four different adjusted counts in one window; they're all powers of two. And maybe you have a reservoir sampling algorithm I can use right now, Otmar, but I do know how to use VarOpt to do that.
E
The
idea
is
quite
simple,
so
if
you,
let's
assume
that
the
trace
id
is
random
and
we
just
say
if
you're
sampling
with
50,
then
you
just
keep
those
with
one
lead
at
least
one
leading
zero
right.
So
and
if
you're
sampling
with
25,
you
would
just
keep
those
spans
with
at
least
two
leading
series
and
so
on.
So.
E
With
100
sampling
probability,
yes,
and
then
you
keep
everything.
If
you
reach
the
buffer
limit,
then
you
would
draw
balls
bands
which
do
not
have
a
leading,
zero
and
so
on.
So
then
you
would
reduce,
but
of
course,
then
you
then
you
would
lose
more
than
necessary
because
it
would
approximately
half
the
sample
right
and
you
would
have
some
space
left.
E
So
the
idea
is
that
you
still
keep
some
of
those
with
one
with
no
no
leading
zero
and
it
you
keep
all
of
the
spans
which
have
a
leading
zero,
and
then
you
still
use
this
trick
to
randomly
choose
which
of
those
spans
you
keep
which
do
not
have
a
leading,
zero
and
so
on
yeah.
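The buffer-reduction trick described above might be sketched as follows (a hypothetical shape: spans represented only by their 62 random bits). Raising the threshold k roughly halves the kept set each time, and every survivor's probability becomes 2^-k:

```go
package main

import (
	"fmt"
	"math/bits"
)

// keepAtLeastKZeros filters a full buffer down to the spans whose 62
// random bits start with at least k zeros (selection probability 2^-k);
// each survivor's adjusted count doubles with every increment of k.
// The shift-and-or inspects only the top 62 bits, capping zeros at 62.
func keepAtLeastKZeros(random62 []uint64, k int) []uint64 {
	kept := make([]uint64, 0, len(random62))
	for _, r := range random62 {
		if bits.LeadingZeros64(r<<2|0x3) >= k {
			kept = append(kept, r)
		}
	}
	return kept
}

func main() {
	buf := []uint64{0, 1 << 61, 1 << 60}
	// 0 has 62 leading zeros, 1<<61 has none, 1<<60 has one.
	fmt.Println(len(keepAtLeastKZeros(buf, 1))) // 2
}
```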
E
So I have a prototype implementation which runs with expected constant time per update, so it's quite fast, and the output is basically a list of spans with the length of the given maximum, and all of them will have adjusted counts which are powers of two. But this is just a prototype implementation with no documentation, so I have to prepare that before I can publish it.

A
We'll look forward to the proof that it's unbiased.

E
Yeah, I have a unit test which proves that, actually. Anyway...
A
We're off in the very deepest depths here, Otmar. I think you and I are understanding each other. I want to open this up to the rest of the group. I've definitely covered everything I had to say; I'd love to answer questions and do anything I can to help promote OTEP 170, the data model for spans. And then I guess the next step on 168 is to continue studying it, but also start talking with W3C.
A
Yeah,
that's
why
this
168
right
now
starts
with
this
like
this
is
what
we
can
do.
That's
you
know
provably
right
and
doesn't
require
touching
w3c,
and
it's
just
that
30
bytes
or
so
that
worries
me,
but
I
think
we
got
to
show
that
we
want
this
badly
before
we
can
get
it
into
a
w3c
spec.
For
one
thing,.
F
Hey Josh, just for me to understand: so this is a proposal in the spec, right, so that every language will implement this?
A
Well,
let
me
let
me
see
if
I
can
answer,
I
think
so.
The
proposal
would
be
that
we
change
the
spec
for
all
the
built-in
samplers.
So
that's
parent-based
trace
id
ratio
based
always
on
always
off.
Those
are
the
four
I
care
about
the
spec
for
once
pure
otep
170
says
we
will
change
the
spec
to
to
require
those
samplers
to
to
follow
otech170,
which
means
either
output
your
adjusted
count
or
output,
your
name,
so
that
the
counter
can
count.
A
That
is,
step
zero.
As
far
as
my
agenda
to
get
the
the
built-ins
fixed,
and
then
we
have
these
so-called
composite
samplers
like
trace
like
like
the
one
I
showed
you
today.
The
rate
limited
sample
is
composite
because
it
does
a
bunch
of
stuff
and
chooses
either
the
higher
the
low
probability
fixed.
A
As
for
whether
there
would
be
a
rate
limit
sampler
built
in,
I
just
showed
how
much
code
it
is.
It's
not
tremendous,
I
think,
there's
a
bit
of
a
demand
for
it,
but
I
don't
know
how
strong
it
is.
You
know
I
think
rate
limited
sampling
is
powerful,
but
what
people
want
is
more
than
that.
They
want
to
choose
regular
expressions
and
do
matching
and
stuff.
A
So
I
I
consider
there
to
be
a
larger
topic
of,
like
view
configuration
and
it
definitely
intersects
with
sampling,
but
it
seems,
like
you
know,
they're
bigger
questions
afoot
like
how
do
we
configure
our
sdks
is
the
biggest
one
right
now
and
we're
running
into
a
place
where
people
want
lots
of
complexity
and
it's
not
the
logic
of
the
sampler.
That's
the
problem.
It's
it's
the
configuration
complexity
right
now,
so
I
hope
I've
answered
your
question.
F
A
This would be the SDKs. Now, what the collector can or can't do, I think that's another interesting area. We've mostly been talking about head sampling, and the goal there is to lower the cost in the libraries and lower the cost of collection. A lot of times, what people are talking about when they say "sampling in the collector" is tail sampling, or like...
A
I
want
to
be
very
selective
and
I
may
have
you
know,
sent
as
much
data
as
I
could
afford
to
to
that
collector,
but
now
I'm
going
to
pick
over
it
again
and
and
down
sample
it.
You
know
another
100
factor
or
something
like
that,
so
that
I'm
I'm
doing
10
sampling
across
all
my
sdks,
and
I'm
doing
you
know
one
percent
sampling
again
by
the
tail
and
then
so
tail
sampling
can
be
very
powerful.
You
can
use
all
the
attributes
on.
A
I'm personally interested in a demonstration right now that would be a tail sampler, and the kind of idea that I'm looking at is: you know, you've collected a 30-second window of spans, you've buffered them all, so you can see them all. They may have been sampled, but you're now looking at 30 seconds of sample data. Across that 30 seconds of sample data you have a distribution of latencies for those spans, and typically you'll have some sort of dense region of that distribution.
A
You've
got
the
sort
of
mode
of
your
distribution
of
latency
and
then
you've
got
some
outliers.
Now
I
want
to
down
sample
my
30
second
window
expands
and
choose
priority
that
I
want
to
make
sure
that
I
get
those
outliers.
I
don't
need
a
bunch
of
examples
of
my
my
my
dense
region
and
my
distribution.
So
doing
uniform
sampling
at
the
tail
will
not
help
me.
A
I
want
to
make
sure
that
I
boost
the
probabilities
of
those
high
latencies
or
those
vocal
agencies,
or
something
like
that,
or
if
it's,
if
it's
bimodal,
I
want
to
boost
the
stuff
in
between
the
two,
I
can
show
you
an
algorithm
that
takes
var,
opt
and
does
roughly
you
know
like
go
over
that
buffer
and
you
can
see
what
what
latencies
you
have
you
can.
You
can
do
a
fairly
simple
algorithm
with
even
with
well,
you
can
do
a
fairly
simple
algorithm.
A
If you have a bucket with very few examples, you give it high probability; if you have a bucket with many examples, you give it low probability. I don't want to spell out every detail in this, you know, 13 minutes or whatever, but the point will be that, after my sampler runs, I can fix the number of output spans and I can pretty well guarantee that you're going to have representativity for all the latencies.
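One way to read the bucket idea above (purely illustrative; in practice these weights would feed a fixed-size sample such as VarOpt): weight each span inversely to the population of its latency bucket, so that sparse outlier buckets survive the downsampling while the dense mode is thinned.

```go
package main

import "fmt"

// bucketWeights assigns each latency bucket a selection weight that is
// inversely proportional to its count: rare buckets (outliers) get
// high weight, the dense mode gets low weight. Empty buckets get 0.
func bucketWeights(counts []int) []float64 {
	weights := make([]float64, len(counts))
	for i, c := range counts {
		if c > 0 {
			weights[i] = 1 / float64(c)
		}
	}
	return weights
}

func main() {
	// A dense mode of 100 spans, a sparse tail of 4, one extreme outlier.
	fmt.Println(bucketWeights([]int{100, 4, 1})) // [0.01 0.25 1]
}
```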
A
That
could
be
a
very
high
number,
depending
on
how
many
spans
you
got
that's
the
outcome
that
we're
looking
at
is
that
you
can
do
tail
sampling
you
can
you
can
have
a
hard
rate
limit
on
the
tail
sampler.
You
can
choose
what
you
want
to
prioritize
by
by
weighting
things
that
go
into
your
sample
and
the
output
is
somehow
what
you
wanted.
E
One question about this sampling. I mean, you explained it with the example of latencies, right, so it's a one-dimensional categorization. But what if you have some further attributes, like, you know, method name or whatever, I don't know, an error flag? So then it's not that simple anymore, right?
A
It's not that simple anymore, but there is some theory behind it, and I'm just not the right person to... I'm not the mathematician here. So.
A
I have done some experiments on two-dimensional sampling, and I'm just at the very edge of my mathematical capabilities right now, but I've read up on correspondence analysis and the idea of multivariate analysis, and if you look at a chi-squared distance measure between your multivariate data, I believe you can get the type of balanced samples that I described, so that you can weigh more than one dimension. But that is not what I would recommend; it's just that I've seen it work.
A
If you go looking through the last 10 years of research on sampling, one of the names that stands out is Edith Cohen. There are research papers on multi-objective sampling out there. I would.
E
A
Yeah, well, the experiment that I did try... I mean, this is an old experiment now, but.
A
Basically, if you keep your old window of sample data and combine it with your new one... you know, I don't think that I should try to communicate about this in the remaining time. I think this is a great topic for me to prepare a little bit on, Otmar, and I'm actually really excited.
A
I'm sort of a practitioner here, and I will say the single-dimensional stuff that I described earlier is really what I'm after. I think multi-dimensional is interesting, and there are definitely academic papers that talk about it.
A
I haven't gotten that far. On a kind of personal note, I've more or less just been trying to show that sampling works. There's this idea of inverse probability sampling, and I kind of just mentioned it: if you're looking to choose examples for a histogram, you count how many are in each bucket.
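As a hedged illustration of the inverse-probability idea (not code from the meeting): once each sampled span carries the probability with which it was kept, the original per-bucket counts can be recovered with a Horvitz-Thompson estimator, where each survivor counts for 1/p spans. The bucket width and probabilities below are made up for the example:

```python
from collections import Counter

def estimate_counts(sampled, probs, bucket_width_ms=50):
    """Horvitz-Thompson estimate of the original per-bucket span
    counts: each sampled span contributes 1/p, where p is the
    probability with which it was kept by the sampler."""
    est = Counter()
    for latency, p in zip(sampled, probs):
        est[int(latency // bucket_width_ms)] += 1.0 / p
    return dict(est)

# Suppose spans in the dense bucket were kept with p=0.1 and the
# outlier with p=1.0 (illustrative values):
sampled = [20, 25, 22, 900]
probs = [0.1, 0.1, 0.1, 1.0]
estimate_counts(sampled, probs)  # -> {0: 30.0, 18: 1.0}
```

The three surviving dense-bucket spans stand in for an estimated 30 originals, while the outlier, kept with certainty, counts exactly once.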
A
So all you need then is some tool to compute an estimated probability distribution from the sample data, and the one that comes to mind, of course, is yours, or the t-digest algorithm, which I think is well known in our community. I hacked this together; I have it in a PR that I could almost show you.
A
Well, the t-digest algorithm takes sample data with weights in, so I'm producing a feedback loop between t-digest and VarOpt, and this totally works. I have seen it in code; I'm just not quite the scientist to write the research paper about it at the moment. I have code. So this is something that interests me.
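The speaker doesn't spell out the loop, but the half that feeds VarOpt's output into a quantile estimator can be sketched: each sampled span carries weight 1/p, and a weighted quantile over those pairs estimates quantiles of the original stream. The naive function below plays the role t-digest plays (t-digest does this mergeably and in bounded memory); the pairs are illustrative:

```python
def weighted_quantile(pairs, q):
    """Estimate the q-quantile of the original span stream from
    sampled (latency, weight) pairs, where weight = 1/p is the
    inverse of each span's sampling probability. A naive stand-in
    for feeding weighted samples into t-digest."""
    pairs = sorted(pairs)
    total = sum(w for _, w in pairs)
    acc = 0.0
    for value, w in pairs:
        acc += w
        if acc >= q * total:
            return value
    return pairs[-1][0]

# Dense spans carried weight 10 (kept with p=0.1); the outlier
# carried weight 1 (kept with p=1.0):
pairs = [(20, 10.0), (22, 10.0), (25, 10.0), (900, 1.0)]
weighted_quantile(pairs, 0.5)   # -> 22
weighted_quantile(pairs, 0.99)  # -> 900
```

Because the weights restore each span's share of the original stream, the median lands in the dense region while the tail quantile still finds the outlier, even though outliers are overrepresented in the raw sample.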
A
But that's cutting edge; we're talking about research right now, and I think there's probably a more interesting thing that people might want, about multi-objective sampling, and that's just on the cutting edge right there. I actually haven't experimented with it much, but Edith Cohen, she's at Google, she's a researcher, and she's also one of the people behind VarOpt. So I recommend at least looking at that, for you, Otmar, especially. Anyway, this has been very technical, and I've been talking for a while.
D
Just a point of business: there's a semantic conventions meeting group that wants to meet during this time slot, focused on messaging. That group will be around trying to finalize semantic conventions and instrumentation around messaging, hopefully getting some subject matter experts in there, just to make sure that we're reporting all that stuff correctly before we declare it stable. Pretty different from sampling, so probably not a lot of overlap. I just wanted to check in with anyone from this group: does anyone here feel like it's critical to attend that?
A
It sounded like you were about to say maybe we should do this meeting every other week and alternate, but I don't know if you're trying to meet every other week or not for that topic either.
E
A
D
Yeah, likewise. I think that messaging meeting is not one that's going to last forever. I do think weekly meetings are better; there's something about meetings that happen every other week that kills attendance, because stuff will get shoved into that time slot anyways. Yeah, I think we should meet every week until we're done, and then be done with these things. Yeah.
G
D
I think it's fine that these happen in parallel. I will probably start going to that other meeting just to make sure it's moving along fine, but I feel like this is productive, and now that it's set up and running, my presence here is not super critical. I just wanted to make sure there wasn't anyone else here who's like, oh.
A
E
D
A
My agenda for next week is to press people to approve 170 first. I'll keep working on 168, because it needs words and depth. But I think 170 has reached the level where it could be either fixed or approved or merged.
H
A
I
So, Yuri, you are a legend of distributed tracing.
A
I
Okay, I think we can start. Sure, Yuri is connected, but maybe he left his... oh, Vina.
I
Okay, here we are. So, last week I was with Justine from Square; he was a product manager at Square and was interested in knowing the status of the project, and about logging in the project. So we were talking about that in a generic way, just talking about the possibilities for improvement in logging, and the timing for that.
I
Basically, what I told him was that our main focus now is adopting the new metrics API as a project, as the first thing to support, and after that we will go into the logging functionality that is still in the works in the spec. But I also told him that if he or someone from his company was interested in pushing the logging forward, we could help him as soon as possible, and we could change priorities if that's the case. So that's more or less what we talked about.
I
We talked about the new OSLog library from Apple, how it could be used and how it couldn't be used with previous versions, and that stuff. Basically, that's what we talked about. So when we implement logging, we could have exporters like the ones we have now for metrics, for example. But we still need some time for that, unless someone is interested in pushing it personally and adding help directly.
I
For the rest: we didn't review actions, because none of the usual suspects were here except for me. So, a previous thing we had.
I
Also, the new changes you added, I have merged into the project. But I have seen that we had a bug in the URLSession instrumentation that could cause problems, so I created a PR to fix that as well. About this network, the URLSession instrumentation: the problem was that we had a callback that was used to know if we wanted to instrument the headers, and not the tracing headers.
I
That callback also allowed modifying the URL request, but it could be called at least twice in some of the code paths, so that could be problematic. So I modified that callback to pass only the URL request without modifying it, and created another callback that will be called if it exists, which allows modifying the request and also passes the span information. So you can, for example, convert the, sorry, the trace id or the span id to another.
I
So yeah, that's another PR. When it's approved, I will create the release with all that, because I think it's a bug currently and some users could get some issues there. So that's from one side. On the other side, the dynamic libraries: I was thinking about changing that for this release. I think it's a good change, but it breaks what we have, so we could have users that get a different behavior than expected if we modify that.
H
So, even the... yeah, the package definitions for... oh, I guess they are still dynamic. Okay.
I
If you just link with one library, that's the problem. So if, for example, you link only with the Jaeger exporter, and I say Jaeger because Yuri is who created Jaeger, it will link indirectly all the other libraries in a dynamic way with your app. But probably that's the only case that will work with dynamic, so yeah, I think I can change all of them to static also for this release, and we'll check how everything works for the users.
H
I
F
So, because when I was trying out different things, I ran into this issue a few times, you know, so yeah.
I
Okay, I will create that now. For the rest, yeah: do you have any other past topic that you want to talk about?
I
Okay, one of the other topics: with the new network status PR, we have used, I mean Bryce used, the os_log functionality in the system to just log when things don't work as expected. It's the first usage of it in the project.
I
Yeah, but I mean, we should probably change to something better, at least for production, as you did. I don't know; we should provide some kind of API for doing that.
H
Probably, yeah. So the issue is that the OS Logger is only available in iOS 14, and so we're going to have to do... I think it would be better if we provided some sort of logging class that just, you know, had the check for the availability of the OS version.
I
H
So, could you say that again? Using the previous version, the older one, yeah. So NSLog is the... I was looking at this because I was thinking I probably shouldn't use print to, you know, report this issue, and it seems like the choices are either using the new OS logging tool provided in the os package, which is only available in iOS 14, or using NSLog, which, you know, goes back all the way to iOS 7.
I
With the unified logging system that comes from iOS 10, oh, is there.
I
Yeah, probably it's not documented anymore because they updated the documents for the new OS, but there was an os_log class that you could use from Swift also, so yeah.
I
Yeah, but I think it's okay; I will take a look at that. I mean, there were some previous ways to... I have been using some logging functionality in the system for some time, and it's not something so new.
F
I
That's the one I meant, yeah, but now you cannot find documentation for it anymore.
I
So probably, yeah, there were.
I
That doesn't impact release versions or production versions, so yeah, moving to that should be done.
I
Yeah, and we should move to that. Also, the main point was about supporting logs in OpenTelemetry.
I
We should probably use the same functionality that we use now and send... I don't know much about how to do that. I didn't know if you had more experience with this os_log thing, or Bryce, or just.
H
Yeah, absolutely. I'm not sure exactly how to hook into the... into what's actually being sent. Yeah, I mean, maybe if we just instrument that class it'll capture everything.
I
Yeah, okay. And about.
F
I
Yeah, this is the previous library for logs in the system, but that has been superseded by that other library, and it's sort of the legacy logging system.
I
I don't know if we might... I mean, I don't know how it's different from the other, but instead of having both versions for all our libraries, we might have a unified way of calling logs in the library, so we can just use the right version. I don't know if it would be better.
I
Okay, so yeah, that's for the future.
I
I don't have any other topic. Do you have one, Bryce, or not?
H
I don't think so. Right now I'm working on... I'm going to start looking into context propagation; I'm working on some UI instrumentation, you know, button clicks and that sort of thing. I think I have to do some reading up on exactly how OpenTelemetry context propagation is supposed to work for spans between threads and stuff like that; I'm not sure.
I
Yeah, you mean in the system?
H
Oh, in the system, within iOS, okay. So yeah, like, I was playing around with instrumenting view controllers and stuff like that, and I would expect the context, or you know, the parent-child relationship, to be the same for spans that are executed in the same thread.
H
I'm not sure if that's supposed to be automatic or not, but then I'm also gonna see if I can't get the context to propagate from one of those spans on the UI thread down into the networking thread, the networking layer that we have instrumented already.
I
Yeah, in fact it does. Maybe you have to make that span the active one, so it takes the context and says, I am the active one now, right? Then all of them will inherit from it, and if you create another span, that one should be the parent, or just make it the active one and it will work. Cool, yeah.
I
Great. There is one catch with that, which is the new async/await things in iOS 15.
I
H
I
Yeah, so maybe, is that.
I
No, no, no, it will be in October when they release it, maybe, and then that could be an issue with some async/await methods. But for that stuff, the library should... I mean, they have these task-local variables, so a task is something that somehow is like an activity, but it carries that to the threads that are waiting on that stuff. So we could probably have that task context keep the context of the span, and use that logic for it.
I
H
I
That's right. I mean, at the beginning of the library, I think all spans were created as active.
H
I
And we decided to change it, so you have to make it active yourself, if I remember correctly, but yeah.
I
Yeah, I remember at the beginning all of them were active, but that could lead to some other problems. Yeah.
I
I remember that we talked about having.
I
Active by default, also with some other method, but we didn't do that. So yeah, with that it should work. Cool.
I
Yeah, for the new version they have a new concurrency model.
I
So what they have is something called a task, which somehow scopes or covers all the functionality that you want to do: it covers from the start of your function all the way to the end of your task. That thing works in a different way from the os_activity thing in the system, and they put it as a known issue of the libraries.
I
So for them it's a known issue, but it might be an issue that they are never going to fix. For example, they have an issue for not supporting previous versions with the concurrency library, and they have publicly said in some forums that they are not going to support previous versions. So maybe it's the same for this, and that could mean that we would need to handle it also.
I
Yeah, we should. I mean, if it's in a task and you create a span and make it active, you should know which span it is in any other function that is called, or any other thread that runs related to that task, in the same way that it does now with os_activity.
I
But we are handling os_activities ourselves. I mean, in OpenTelemetry we are creating the activity, we are setting the span as the active one, and we have a table to relate os_activity identifiers with spans.
I
So when you do something and it's inside an activity, it knows that it is there, and it picks up the right parent, or not. I mean, if you create a thread and you create a span there, and in the previous thread you do something, they are not related, because one has been created in another activity, so you won't get that span. But if you create a network request after you do something, you will get the active span correctly for the request.
I
Where the active span doesn't work is in the network responses, because the response is created in a totally different thread that has no relation with the request that created it. So we have a table there for relating the response with the request, but for all the rest, it should work just with the library as it is.
I
Okay, any other thing? I don't.
H
I'll take a look at your PR here in a minute.
I
I will do it right now, just after the meeting, so yeah, we can release a version as soon as possible. I probably will create it tomorrow morning, so I can.
I
I will test that it is working in the same way I did with the previous one, so I can find the small issues that we still have there. So it will take more than just creating the release, but I want to do some testing before releasing something that might fail somehow. It will probably be tomorrow morning, my time, so tomorrow it should be ready for you. Excellent.