From YouTube: 2021-08-05 meeting
A
You can — oh, Carlos, you moved your thing to the bottom. Fair enough.
B
Yeah, this is a new issue that was raised last Monday or Tuesday — I don't remember. Since it's newer and it's more of a general doubt, we can just put it at the end.
A
I don't know — let's go over it real quick, 'cause I think it's a—
C
I think it's a short one, okay. Yeah — can I, could I interject before we go? I feel like we've spent maybe the full last two meetings discussing probability sampling, and Otmar and I have been going back and forth. I feel like that workflow is going pretty well, and I'd like to get back a little bit to the Jaeger sampling and all the other stuff that's been going on. Although I see Otmar's point — I want to talk about it too — so I don't want to...
B
Okay, fair enough. Yeah, so basically it was raised by Nikita, about the possibility — like, whether there's a general set of guidelines when it comes to composite samplers.
B
At this moment we have de facto allowed them, because we have the parent-based sampler, which can work with another sampler. But he was mentioning the case of, for example: what if you are sampling by, like, a URL, and then you want to mix that with some other stuff? You know, I don't know whether there's something bad there.
A
There's certainly a question, even with this trace-ratio sampling that we're doing in Jaeger currently. I mean, there's still a desire to have some kind of rule set that you load up, right, even with this ratio sampling. Is that correct, Josh?
C
That's right. I've tried to separate this discussion into two big halves. One is all the stuff about probability and how we count stuff that has been sampled, and the other is what I've been calling the view configuration topic. I'm using that phrase to connect it with the same conversation happening for metrics — there's so much similarity between this conversation about the views in the metrics SDK and what's in Nikita's issue here. Are they composed with "and", are they composed with "or", or is there a short circuit?
C
Those are the questions that we were tackling in metrics views. So my answer to Nikita's question would be: yes, this is in scope, but I feel like it's sort of already present. And I say already present because — well, this notion of a... so I'm on a separate point, but it's related: Yuri gave me some feedback on my OTEP saying, oh, you know, there's already a syntax for the description of a sampler, and it's used to print your own description.
C
You know, in a log, I guess — it's specked out, though. And then there's also an environment variable spec that says, when we're parsing the traces-sampler environment variable, there are various syntaxes that are recognized, and one of them is a composite, for the parent-based sampler.
C
But then we also added this Jaeger sampler which, as far as I'm concerned, is — like you said — a rule set. So it's getting back to that view configuration question somehow, which is to say that we compose samplers, and then we evaluate rules, and we eventually get—
C
—what I'm going to call a leaf sampler, or like a base case — I don't know if we have the right terms yet — but trace-id-ratio is a base case, always-on is a base case, always-off is a base case, and everything else is composite. I think — or at least that's the proposal — the probability sampling only works with those base cases, and we can throw in all the composed rules you want, as long as they boil down to a trace-id-ratio or a parent.
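The base-case-versus-composite shape described here can be sketched in plain Python. This is only an illustration of the idea from the discussion, not the OpenTelemetry SDK's actual sampler API; the class names `Rule` and `RuleBasedSampler` and the attribute-based trace-id stand-in are all invented for the sketch.

```python
# Sketch: composed rules that all "boil down" to a base-case sampler
# (AlwaysOn, AlwaysOff, TraceIdRatio). Not the OpenTelemetry SDK API.
from dataclasses import dataclass
from typing import Callable, Dict, List


class AlwaysOn:
    def should_sample(self, attributes: Dict) -> bool:
        return True


class AlwaysOff:
    def should_sample(self, attributes: Dict) -> bool:
        return False


@dataclass
class TraceIdRatio:
    ratio: float

    def should_sample(self, attributes: Dict) -> bool:
        # Real SDKs derive this fraction from the trace id; we fake it
        # with an attribute so the sketch stays self-contained.
        return attributes.get("trace_id_fraction", 1.0) < self.ratio


@dataclass
class Rule:
    matches: Callable[[Dict], bool]  # predicate over span attributes
    sampler: object                  # a base-case sampler


class RuleBasedSampler:
    """Evaluate rules in order; fall back to a default base case."""

    def __init__(self, rules: List[Rule], default) -> None:
        self.rules, self.default = rules, default

    def should_sample(self, attributes: Dict) -> bool:
        for rule in self.rules:
            if rule.matches(attributes):
                return rule.sampler.should_sample(attributes)
        return self.default.should_sample(attributes)


sampler = RuleBasedSampler(
    rules=[
        Rule(lambda a: a.get("http.url", "").startswith("/health"), AlwaysOff()),
        Rule(lambda a: "db.system" in a, TraceIdRatio(0.25)),
    ],
    default=AlwaysOn(),
)
```

The composition stays arbitrary, but every path through the rules ends in one of the three primitives, which is what makes the probability accounting tractable.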
D
I mean, we also see it as a dynamic sampling rate — a sampler which chooses the sampling rate dynamically for everything.
D
So it's more like dynamic rates that should be simple, right?
A
Yeah, the rules — you're gonna start updating them live eventually, right? And presumably you're gonna have a lot of rules, so you'd want the thing... it seems like all the rules would be in one sampler, right? You wouldn't have lots of different samplers, but—
D
Yeah, what's different is basically the function which maps the span to its sampling rate — this is what is different. But this is not important for trace-id-ratio-based sampling. Actually, it's not important for estimation: you just need to know the sampling rate which was finally used for that span, and this is guaranteed by attaching that piece of information. How the sampling rate was determined does not matter for that.
A
Yeah, exactly — and that seems like those things knowing quite a bit about each other. Like, when we've done composites in the past, I think you usually want the individual pieces being composited to not know about each other, right? The composite thing maybe has some rules. So in this case, whether we have a composite sampler or not, it seems like really, probably—
C
That's more or less the way this proposal has shaped up — or the current specs, I guess. I just updated the issue from Nikita with a link to the recently minted spec on view configuration for the metrics SDK, and it does sort of address a lot of this question, which is to say: you're going to install some views, which are going to be like specialized handlers for certain metric names — or for special spans, if this were spans. And then there are two sets of configuration.
C
One is all the stuff that I want to set specifically: I'm configuring it with my own rule sets, these are my actions. And then there's what the default will be when you don't have a rule that matches, because we think that vendors probably want to install their own defaults. We also think that users might want to say, "I don't want any defaults." So, in addition to being able to set up all my own matchers — and they do not short-circuit, by the way—
C
—that's what we came up with — I can also turn off the defaults, so that I can then provide my own defaults, if I want, or not. And the API of the sampler doesn't change: you still have this ShouldSample call, which has the attributes and parent context and all that stuff to make your decision, and then you evaluate your rule set and you come up with a leaf — I think that's not the right word; it's a base case. I don't know — a primitive sampler.
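The two-layer configuration just described — user-installed matchers that do not short-circuit, plus vendor defaults that only fire when nothing matched and that the user may disable — can be sketched like this. The names (`ViewConfig`, `actions_for`) are invented for illustration and are not the metrics SDK's actual view API.

```python
# Sketch of the two-layer view-like configuration from the discussion:
# every matching user rule applies (no short circuit), and vendor
# defaults only apply when no rule matched and defaults are enabled.
from typing import Callable, List, Tuple

Matcher = Callable[[str], bool]


class ViewConfig:
    def __init__(self, vendor_defaults: List[str], use_defaults: bool = True) -> None:
        self.rules: List[Tuple[Matcher, str]] = []
        self.vendor_defaults = vendor_defaults
        self.use_defaults = use_defaults  # user may switch defaults off

    def add_view(self, matcher: Matcher, action: str) -> None:
        self.rules.append((matcher, action))

    def actions_for(self, instrument_name: str) -> List[str]:
        # Non-short-circuiting: collect every matching user rule.
        actions = [a for m, a in self.rules if m(instrument_name)]
        if not actions and self.use_defaults:
            return list(self.vendor_defaults)  # fall back to vendor defaults
        return actions


cfg = ViewConfig(vendor_defaults=["histogram"])
cfg.add_view(lambda n: n.startswith("http."), "sum")
cfg.add_view(lambda n: n.endswith(".duration"), "histogram")
```

A name like `http.server.duration` matches both user rules, so both actions apply; an unmatched name falls through to the vendor default, unless defaults were turned off.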
A
Is anyone here familiar enough with the collector processing pipeline? I'm curious, Josh, about what — because we're already doing this kind of stuff in the collector, right? Like, you have pipelines, and I kind of wonder—
C
—that you can wire together pipelines, and I think that may be bigger scope than what we were trying to get into the SDKs. I think everyone understands SDKs need to be a little bit limited and a little bit functionally focused, and then you can do anything you want in a collector pipeline. So we definitely cut the metrics SDK spec on view configuration short, to be simpler than the arbitrary ball of YAML you get for the collector. And I know there's a debate — I'm not too close to it.
C
It's more about, like, how shall we design our processors — what is that rule-based thing going to look like? Because there are so many ways you could do that, and there's been a kind of tension between the proper design that would be out in the future, when we all know what we want, and the few engineering iterations we've had to boil it down to a coherent set of plugins. Right now we've got kind of the Wild West.
A
Well, it seems like we do want to have a composite-like structure, but the compositor, in this case, is the rules, yeah? I—
A
And yeah — the clean architect in me feels like, to the degree possible — and maybe we haven't solved it the way we want anywhere — but I feel intuitively that it would help our users if, when we design the configuration language for this stuff (and this is maybe separate from what this stuff does), it'd be somewhat self-similar.
C
Yeah, it's frightening, because right now we have this spec that says: if you parse the string "always_on", it will give you the always-on sampler, and if you parse the string "always_off", the always-off sampler — and there are a few other syntaxes that are specked out. But what's happening in the collector, and what's kind of suggested when you talk about views in the metrics SDK, is incredibly fancy YAML stuff, and the prospect of standardizing that across languages horrifies me — and I don't want to say that again.
A
Yeah, maybe another way of putting it is: maybe this rules language is not a sampler plugin, right? I guess it depends on how complex our configuration goes, but it certainly seems like, once you get into this territory, you probably wouldn't be spewing all this out into an environment—
A
—variable, right? The environment variable would be like, "the configuration file is over here," and then in that thing you have a way of defining your rules and mapping them to stuff. But as soon as you get into the idea of, like, now we have remote-controlled sampling—
A
I personally do not want OpenTelemetry to spawn, like, five separate control planes that control separate things. It seems like, if we're gonna get into the remote control business, we should have one coherent way to push out configuration updates to the collectors and to the SDKs, and then have it be tierable, right? So you push updates out to the collectors, and they can push updates out to the SDKs talking to them, you know? So, yeah.
A
Yeah — and, I mean, probably this thing will have to be able to push out updates that control the configurations of plugins too. But anyways: when we start talking about this remote control stuff, and configuration languages, and the complexity of compositing this stuff, that's more complex than what we're doing with, like, the propagators, where it's really damn simple, right? It's just: try the first one, and then try the rest of them.
A
There was a desire for that last summer, when Google sent a bunch of interns — and that was one of the topics that Google had in mind: reconfiguring the SDK. They started small, with, like, the metric interval, which is the one that Google frequently mentions — like, "I want one-second granularity here, and I'm going to push that configuration out to you."
A
What are we offering here? Maybe the first version we're doing here is just a plugin that doesn't have any remote control stuff. It's just: can we build a plugin that allows you to do rule-based stuff and then pick which base-case sampler you use? And then these different kinds of sampling options are just simple sampler plugins, and so we essentially prototype it out that way.
A
You know, so then, in the future, when we grow a control plane and some configuration gizmo and stuff like that, we can be like: all right, now there's a more official way of doing it — you don't need to use some of these plugins anymore, maybe, or something like that — and they'd be updated, I think.
C
The Jaeger approach is going to be popular, though. We've already got it in Jaeger, so users may gravitate toward that.
A
I'm sure there are other things we're going to add to the clients besides sampling that would probably benefit from being remote-controlled. Views, I imagine, would benefit from this, so we should—
A
It just feels like something we shouldn't do off in a corner, just for sampling. And yeah, I think you're right, Josh: people might gravitate to wanting to use the Jaeger one in the meantime, because that'll be pretty temporary.
A
Josh has a concern that, because there'll be a gap, people will adopt the Jaeger remote sampling thing, and that might become a sort of de facto standard. But I'm almost worried that if we move faster and push something like that into a sampler plugin, then that would become — like, you would become the thing you hate the most, Josh.
A
If we did that, you know — I'm a little bit worried that we might get people stuck on some half-assed control plane. Not that the Jaeger one is half-assed, but it is what it is, okay? I don't want us to get stuck on that. I would rather people use the Jaeger one in the meantime, if that's the thing they want to play around with, and we can learn from that and then do it right.
A
You know, in like a couple of months — we're not talking about a huge timeline here, right? I think the metrics and even the logs work is going to be pretty wrapped up by end of year.
G
Technically, this is a whole independent project of OpenTelemetry, right — the configuration, the runtime configuration delivery? Yeah, there's a whole team for that, for the configurator.
A
Yeah, no, it's gonna be, like, work — because none of these things were built with remote configuration in mind. That's actually a complex requirement to add to something. Certainly the collector: to be like, "yeah, now you can just change the pipelines on the fly" — that's easy to say, but it's actually a lot of work.
A
Okay, okay. I feel like we went round and round on that, but maybe the place we landed is: we're just going to prototype this as, like, a rule-based sampler that can load up these other things. And I don't know how we configure that rule-based sampler for now, but it could be, like, it's—
C
Yeah — I've tried to break this problem apart, to say that maybe — we're all pretty comfortable with protobufs, and somebody, a team of people, could design a protobuf that is the official configuration, or a version-one configuration, in terms of the structure of a protocol buffer. That at least gives us the freedom to not talk about YAML or JSON—
C
—or environment variables, which are the worst of this stuff — I don't want to deal with them. And then we punt the problem of how to actually put those bytes into your configuration, because what we're after is dynamic config, at which point it doesn't matter how it reached you: it was delivered to you as protobuf, and you implement it as protobuf. That's just what I've proposed every time this comes up.
A
Yeah, I think — I think that's — sorry, someone else should talk.
F
Yeah, I'm just thinking: rather than look at this as a rule-based sampler—
F
—if we say, let's construct this as a controller or a manager that effectively takes an input — so it's the interface for sampling or metrics or whatever — and applies rules to it to determine which thing it's then going to call. If we try and map it out that way, rather than saying "well, this is the one for sampling" — that way, you know, at some point in the future you say: well, now we have a remote version that can go and update its configuration on the—
F
You just plug in that one instead of the static one. And in terms of how you configure it — well, that's entirely up to what we prototype initially. For the dynamic one, yeah, protobuf is probably right, because we have protobuf everywhere else — but that's a later discussion.
F
Yeah, there's no common interface. For the dynamic languages it's easy, but yeah, for fixed-type languages it's a lot harder.
A
Yeah, we're trying to divide it into two parts. I think what we're trying to do here is, one, say: long term, we do want to grow a control plane, but we want to do that coherently for all of OpenTelemetry, not one specific to sampling — so that's the remote configuration update part.
A
So if we put that part aside and just say, "well, what does the rules language look like for sampling?" — Josh is saying there's a rules language for views that they've come up with recently, and we have a rules language for collector pipelines.
A
So I think the question facing us is: for sampling, what should the rules language look like, and can it be similar to one of these other things? And obviously you've got a rules language in Jaeger, and that's another thing worth looking at. So — does that make sense, Pavel?
A
But what we want to do, I think, for our sampler, is to first figure out what the rules language is, and separate that out from the remote configuration updating — what should the updating mechanism look like — because we're going to want to update more than just sampling.
H
Like proto, where you can guess how it works. But yeah, if you think that it's good to create a doc and copy it to OpenTelemetry, then you can do that.
A
I don't — I think — well, do you already have that in Jaeger, like the actual proto files? I think it'd be better to link to those than to have the Jaeger stuff in two spots. Just a description of how it works, like: "here are the proto files, here's an explanation of what this thing does." And Josh was mentioning we probably want to use proto to define our own rules.
A
It's like what we're doing, right — we use protos to define our stuff. I don't know, anyways. I guess my point is: we need to define how our rules system should work, but that's separate from defining what the update mechanism should do. So we can go ahead and define the rules in proto, write them down in the proto repo, and then they'll be there.
C
I don't know the answers, but I do still see this odd change of approach: metrics has talked about views and wrote into the SDK spec that you can choose a behavior based on the span name — or, sorry, metric name — and metric type and tracer name — or, sorry, meter name. You can choose based on all these attributes, and then you can make an aggregator decision. That's what view configuration means — yeah, it's in the SDK spec.
A
I think it should be in the SDK spec. If we're saying it's helpful for people to see this stuff written down in a parseable way like proto, that's where we should put it. But I think maybe the next course of action is: can we take that views work and apply it to the sampling problem? Like, can we make a version of what's in the view spec and apply that to how we're managing these rules for the sampler? That seems to be, like—
C
Yeah, that's what I'm really hoping for: that we have a similarity emerging, where you choose a view of your metrics, you choose a view of your spans, and it turns into an aggregator config for metrics and a sampler config for spans. But wouldn't it be great if we called samplers "aggregators"? Honestly, they are aggregators, in metrics terminology.
D
So what I would like to briefly discuss is — you've seen my slide?
D
Okay. So when it comes to partial sampling, you know, it can happen that traces break. So if some span in the middle is not sampled — what happens then is that the parent span id of a child of a non-sampled span is not known, basically. As far as I understand, the current parent id for that span is the span which is not sampled, and therefore this id is not known, so this id is completely useless.
D
And this way, the span would know the grandparent span, right? So we still have some hierarchical information. And, I mean, if multiple spans are skipped, you could also add a counter to this information. So you could, for example, have an order, which says — or defines — whether the parent span id is the parent, grandparent, or even great-grandparent.
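The proposal above — propagate the nearest sampled ancestor's id plus a skip counter ("order") instead of an id that will never be exported — can be sketched as a tiny propagation rule. This is only an illustration of the idea from the discussion; the type and field names are invented, and real propagation would live in the trace context, not a Python object.

```python
# Sketch: propagate the nearest *sampled* ancestor's span id plus a
# skip counter, so unsampled spans in the middle don't break the tree.
from dataclasses import dataclass


@dataclass
class PropagatedContext:
    trace_id: str
    ancestor_span_id: str  # id of the nearest sampled ancestor
    skipped: int           # 0 = direct parent, 1 = grandparent, ...


def child_context(ctx: PropagatedContext, my_span_id: str, sampled: bool) -> PropagatedContext:
    if sampled:
        # This span will be exported, so children can point straight at it.
        return PropagatedContext(ctx.trace_id, my_span_id, 0)
    # Unsampled: keep pointing at the last sampled ancestor, bump the counter.
    return PropagatedContext(ctx.trace_id, ctx.ancestor_span_id, ctx.skipped + 1)


root = PropagatedContext("t1", "root", 0)   # root span, sampled
ctx = child_context(root, "a", sampled=False)  # span "a" dropped
ctx = child_context(ctx, "b", sampled=False)   # span "b" dropped too
```

After two unsampled hops, a sampled child would still link to `root` and know, from the counter, how many ancestors were skipped.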
C
It is not going to be easy, and my appetite for taking on new things is low right now, because I think it's going to be hard enough just to get propagated probability and to fix the trace-id-ratio stuff. So, when it comes to this proposal: I have another proposal. It's similar, and it sort of addresses the same problem — I wonder if you've thought about this. I don't necessarily care how many spans were skipped.
C
I would like to know the name of the spans I'm skipping. So I was thinking: instead of knowing my parent's span id, I'd like to know my parent's span name, because if I end up in a situation where my trace is partially complete, I can see it's incomplete because I'm missing parent information. And maybe we should combine these: you want your closest sampled trace ancestor's id, which is where you connect to your partial trace. But if I'm trying to figure out what I am missing—
C
I don't even know what I'm missing, except that I have an id. What if I could get, like, my parent's name as well? I've thought of this because the first thing is: I have an incomplete trace — where do I go turn up sampling to fix the problem? If I don't know my own parent's name, I'm just like, "I've got an id; I gotta turn up sampling there." So that's the thought that's crossed my mind.
D
I mean, collecting the name is a lot of overhead, I would say. I mean, if you really need that, you shouldn't sample, right? Yeah. But this change does not introduce a lot of overhead, because you're propagating one id anyway, and this counting with the order is just optional. I mean, I would already be happy if I knew my direct sampled ancestor — just to be able to link it and to preserve hierarchical relationships.
D
But this is, yeah, some extra information. Currently you propagate a span id which will not be known by the server, right? Because if the parent span is not sampled, its id is not known to anyone, and so it does not make much sense to propagate it to child spans. Instead of propagating this span id, which will not be known by anyone, it would be much more useful to propagate the parent span id which actually was sampled.
F
Yeah, so first a general question, and then potentially a proposal. Do we have a feel for the depth of this — how many ancestors we're going to have in spans? Are we talking hundreds? Thousands? We don't know. So, as a more general concept: do we introduce something that's effectively a dropped span? We already have no-op spans, but what if we just have, effectively, spans with their ids and their names?
F
So if it's sampled out, we just replace it with that, and it has no other data. That's going to be problematic for some backends, but it would actually, you know, act as a placeholder for anything that's sampled out, and that would address both of your concerns here. Yeah.
A
But wouldn't we still generate an intolerable amount of overhead for the people who want this kind of sampling?
A
That's what it's all cut down for, because — I mean, maybe they're an extreme, but you can look at F5 trying to put this stuff into, you know, edge-compute stuff, right, where their scale and load is, like, crazy-making.
A
But I do like that idea of just saying: okay, we're not going to waste any resources, but we are going to preserve the trace structure, because that's actually critical. I don't know if we could do it all the time, but one of the reasons why I like that: for example, one thing that makes me concerned about this kind of sampling is — and maybe Jaeger has hit this, because they have some of these tools — you also have things that want to do trace structure analysis, right? There's some stuff that's like span-level aggregates.
C
But for Lightstep's part here: we are not ready to deal with partial traces. Our whole product is kind of built around complete traces, for better or worse, and what we would do is just reject those traces, because we don't have partial-analysis tools built. It's not our usage model, really. So I've been putting energy into getting this parent-based stuff to work so that we don't have this problem, because Lightstep wants me to do that. So I'm—
C
I guess I'm more interested in that idea that I could include my span name, so that I can see the spans missing, and it gives me the first thing to pull on if I want to figure out how to make it complete — which is: okay, go find the things that are producing that type of name and turn up their sampling rate, or something like that. And then — I guess, just to follow on, one more thing.
C
It should only cost you when you have a sampled parent or sampled ancestor. So if you're doing enough sampling, propagating the parent span name shouldn't cost you too much, is what I'm thinking. So yes, this would cost us extra bytes, but only when you're sampled do you include your parent's name. So if you have a grandparent that's sampled and you go through a parent that's unsampled, the parent's name gets propagated because the grandparent was sampled.
D
That's just another idea. But if there are multiple spans that are not sampled in a row — I mean, what do you do then?
C
I was saying how I just want that first thread to pull on. Like: okay, I have an incomplete span, or I have an incomplete trace — tell me the name of one span I can boost probability on to get more completeness. And then, if I boost that one span, I'll have another clue, which is the next missing span. It only helps me one step at a time, but it is there.
C
I have secondary reasons why I think this is a good idea. When doing something like tail sampling of spans — assuming I've gotten complete traces, you know, like one percent of all my traces are now complete and I'm seeing them in my backend — now I could do something like tail-sample based on parent name. So I want to collect traces for a particular interior node of my trace, where I want to balance all the different callers.
D
It introduces some overhead, right? Yeah — not limited, or is it limited? That's—
C
So I am definitely going to be looking to you, especially because Dynatrace has a lot of influence over the W3C Trace Context, and I think we're going to need that influence. And it would be so much nicer if we could move quickly here and just change W3C, because right now, in my OTEP 168, it looks like 30 bytes per context just to propagate OTel's probabilities, and it needs to be three bytes — according to your paper and according to everything we understand now. So doing it right in the W3C Trace Context traceparent means three bytes per context, or five bytes per context—
C
—I guess. Whereas if we do it in tracestate, it's just very expensive, and I'm afraid that, because it's very expensive, people are going to say "make it optional" — and my employer doesn't get what they want if it's optional. So I'd really love it if we could just push through a fast change in W3C: version one is the same as version zero, with five more bytes, and we've got a spec for that. But I just feel like that's impossible, so I'm feeling a little bit of weariness.
A
By the way, if we do version the W3C spec — in a new version of it, we could probably require randomness in the trace id. This was agreed, yeah.
C
And that way we only need three bytes. It's a harder spec to write — you say, I'm definitely going to say the high 64 bits are random, maybe, or 63 bits random, or whatever; that's enough for us. And then you only need three bytes for probability, and maybe two more bytes for this count that Otmar wants, and great.
C
Then — base 16 is the standard already, so two base-16 characters give you the six bits we need for that proposal. Just because we're using base 16, really it's one byte, effectively: six bits for probability. And then maybe this order could be another byte or two of base 16, so you could skip up to 255 spans — which is more than I ever expect to happen — by adding a few bytes.
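The arithmetic here is worth making concrete: if sampling probabilities are restricted to powers of two, the exponent fits in six bits (0–63), which is two base-16 characters on the wire, and the adjusted count used for estimation is just 2 to that exponent. The sketch below illustrates that direction; it is not the exact wire format of OTEP 168.

```python
# Sketch: power-of-two sampling probability encoded as two base-16
# characters (6 bits of exponent). Illustrative, not the OTEP 168 format.

def encode_probability(power: int) -> str:
    """Encode sampling probability 2**-power as two base-16 characters."""
    if not 0 <= power < 64:
        raise ValueError("exponent must fit in 6 bits")
    return format(power, "02x")


def adjusted_count(encoded: str) -> int:
    """Each sampled span represents 2**power spans in the original population."""
    return 2 ** int(encoded, 16)


enc = encode_probability(10)  # probability 2**-10, roughly one in a thousand
```

So a span carrying `"0a"` was sampled at probability 2^-10 and counts for 1024 spans when estimating totals, which is the whole estimation story Otmar described: only the final rate matters, not how it was chosen.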
C
And then perhaps we could change the spec to include the grandparent — or, you know, the first sampled ancestor instead of the immediate unsampled ancestor, for example. Those would all help us so much, and I really don't want to get into a situation where you have optionality — like, the version of the tracestate spec was done in 2021, and you can parse it there, and then, if W3C version one came along in 2022, you've got two ways of doing it, and then, in other words, complexity forevermore.
A
I mean, yeah, I think the devil's just in the details of where these extra bytes would go. Actually, I do think this is great, Otmar — by the way, I really like your paper, and I think this information is good. And I kind of agree that it would almost be bad if OpenTelemetry started doing some of these things and then stuffing things like the order into, like, tracestate.
A
Because then — I mean, it's not a huge deal, but then we'd be kind of not actually following the W3C spec, right? That parent id that is there is not actually your parent id anymore. So it wouldn't cause parsing problems, but semantically it's no longer the same, and that seems a little dirty.
A
But I also predict that actually making some of the current fields unparsable would be very difficult to push through, and you almost want to end up with, like, a third header or something like that to put this stuff in — but that also comes with overhead. Anyways, I think the details will be "where do we shove this extra information?" more so than "should it be added or not?" Having been through many of those meetings to get to the current one, Josh — that's why. That's why.
A
When you first proposed that, it was like — you just end up with a lot of edge cases, you know? The people who have implemented the current thing in places where it's gonna be hard to change — right, like deep within Amazon and other places, network proxies and stuff like that — might have a cow if we do something that makes their stuff break, for example.
F
I guess, if you end up in a tree, you could end up with a child that has the same order and the same grandparent span. So if there's — true.
E
Nice to meet you — you are coming to the Swift SIG meeting?
E
Because it's a holiday in some places, people can be vacationing or things like that.
E
And we are basically a user-focused development, so we try to add things as users need them, or for our own needs. And yeah — for myself, I work for Datadog, and so I am also using OpenTelemetry Swift in my own product. So some of the developments are sometimes guided by my needs or by the needs of other developers.
E
There is also another approver on the project, Bryce Buchanan, but he's not coming today — and usually being off video is also usual in this meeting. So that's more or less all. We usually have a notes document that we follow, just to know what we have talked about recently, and yeah, that's how it usually works.
I
Yeah, that's right. So, just a little bit of introduction: my name is Justin. I work at Square; I'm an engineering manager on one of the foundational teams that supports iOS and Android, and my team builds analytics libraries and logging libraries. We're looking to expand into debug logging for mobile. Right now we kind of use our existing analytics pipeline to send debug log messages to the server and get them ingested and available in Snowflake, and that system is under a significant amount of strain, and we're wanting to replace it.
I
So we're looking at alternatives there, and I'm very inspired by the overall OpenTelemetry premise, where you have standards and specs, you get your clients and applications instrumented with these standard specs, and then you're able to swap out vendors in the future. Because we've seen this kind of problem; I've seen this problem several times.
I
Moving from, you know, Datadog to New Relic, or something to something else, and it causing impacts in applications and a lot of library churn. So overall, that's what attracts me to OpenTelemetry: a spec that can allow us to use a multitude of vendors, start to instrument our applications, and just keep redirecting that collection. And so, primarily, we have kind of...
I
We have our analytics story mostly in good shape, but where we have room to grow is in metrics and alerting, as well as traces, capturing traces from mobile and visualizing them, and finally debug logging. So that's why I'm joining this call: to learn more about how this group operates.
I
Okay, how many maintainers are, like, full-time working on this from the various metrics companies, like Datadog, New Relic, etc.?
D
E
I am working full time on a project that uses OpenTelemetry, and I am setting aside some of my time for the OpenTelemetry project, for things that I need, but also for the new specs that appear. I want to keep the project healthy and following the specs.
E
Maybe we won't be the first to have the spec implemented, but my idea is to keep working on it and accepting PRs from anyone interested in helping the project grow. But basically, most of the code of the project is mine. I donated most of the code to the project at the start, when I was at another company that was acquired by Datadog; that's how the project started, and after that we have had PRs from different developers.
E
One of the most active is Bryce Buchanan from Elastic. I think he's working with OpenTelemetry too; I cannot speak for him, but I think he's working on a product at Elastic that uses OpenTelemetry for trace and metrics generation. We have also had some other PRs from time to time, but no one else is actively working on the project every week right now, at least not that they share with the community or with the project.
E
So we have PRs from time to time, but not so frequently.
D
E
The status of the project: it's already at a 1.0 version, so it follows the spec at 1.0.
E
Some of the areas are very well tested and are working in products. Some other areas are not so well tested: they are implemented following the spec and have tests, but have not been tested thoroughly in the wild by people. So, for example, some of the exporters could be having some issues, but nobody reported issues there, and nobody reported usage of some of them either.
E
It's not in the 1.0 spec, but we support it; as with other parts, it is not thoroughly tested in the wild, so it is working, it has tests, but it could have issues or bugs that we will fix as soon as possible. I mean, fixing issues or bugs is very high on the priority list.
E
Developing new features or updates from the spec has a lower priority, but it's going to happen.
E
And that's more or less the status. I am personally using the project in a product for Datadog that is currently in public beta, for observability: testing, creating traces. Basically the tracing area is, I would say, the most tested one and the one with the most features.
E
We also support metrics but, as you know, the metrics spec was updated recently to a final version, and what we have was something previous to that. So probably the API for metrics is going to change to follow the spec; we will have to update to the newest spec. And about logs, there are still no plans for logging.
E
So that's also well tested and working well, and we also have some other minor instrumentation. For example, we are also capturing, I think it's swift-metrics: we have a shim integration with that, so we can take swift-metrics from Apple's swift-metrics library and convert that into OpenTelemetry metrics. And some other developments, but I think those are minor. Let me check the project.
E
I don't remember what more we could have, but yeah. Have you checked the code?
I
I mean, I've cloned it and I've poked around with it. I haven't tested it in an app or anything like that. And I've poked around some other repos, like .NET, for example. I was interested in logging, so I was interested in what the others do, because on the OpenTelemetry site they say that the logging data model has been defined, but you probably know better than me how stable that is when they say it's been done. Yeah.
I
E
Probably we will work earlier on the final version of the metrics API. But if you, or someone on your team, are interested in adding logs to the OpenTelemetry Swift project, we are totally open to people interested in adding code, and I would help; I'm sure Bryce will also help with what we know about the project, and also with dependencies, how to make it work, and that stuff. I mean, if there is some person or some user that is interested in some feature that we still don't support, and is willing to also put some of their time into improving it or adding some feature...
E
...we are going to help, or change our priorities for that. I mean, it's user-driven development, so that's up to the person that needs it, in the same way that, if there are issues, we are going to fix them as soon as possible. And we are currently releasing a fixed version more or less... I mean, I cannot find the word.
E
I mean, we are doing that even more than once a month: we have releases when we have some more or less important bugs fixed, or if people come to the meeting or to the channel and need some version defined, we can generate that. So that's more or less how the project is moving right now because, you know, resources are not very high, as you can see.
E
Yeah, so that's more or less it. If you check the project, we have all the OpenTelemetry exporters that we should have, like Jaeger, Prometheus for metrics, and Zipkin.
E
We also have the OpenTelemetry protocol (OTLP) exporter; that's the collector one. So you have your collector on your local machine and the collector can export to any output that you want. But for iOS, having to have the collector working is not a good solution, because you cannot have the collector running on your mobile and have an app talking to that. So, if you are in mobile development, probably having a native exporter can work.
E
So maybe, if you are working with traces, the Jaeger or Zipkin format could work, or you could also have some kind of backend that supports the OpenTelemetry protocol. You could send the OpenTelemetry protocol in JSON format, for example, because we have an exporter for a JSON form of the format, so you could generate JSON and send it to the backend in the OTLP protocol if interested, or use any of the native exporters.
E
Also, the Datadog exporter is working for both traces and metrics. That's the exporter side. For importing, what we call importers, there is the OpenTracing shim, which supports using the library as if it were an OpenTracing library. So you can support existing instrumentation done by your code, and we will pose as an OpenTracing tracer.
E
So we can take the traces and the spans from OpenTracing and generate OpenTelemetry traces with that. And, as I told you, there is also the swift-metrics shim, which uses the distributed swift-metrics library from Apple; we inherit those metrics and convert them to OpenTelemetry, so you can export them as OpenTelemetry metrics. For metrics exporters, I think we have just Prometheus... sorry, Prometheus and Datadog support metrics, and also the OpenTelemetry protocol.
E
That means that most networking libraries are automatically instrumented with it. Just initializing it makes things work, and you can customize the output. It's a very powerful instrumentation: it uses code injection into the URLSession API, so it can be used very easily.
E
We also have what we call the SDK resource extension, which reads the system configuration, the system version, and the application, reading that data more or less automatically, and enriches all your spans with it. And the signpost integration: that's not exactly an instrumentation, it's just to create signposts for the Instruments application from Apple.
E
So you can see your OpenTelemetry spans directly in the Instruments app when doing some profiling there. It's useful for debugging. And that's more or less all the things that we have on top of what OpenTelemetry is, the traces and the metrics that are in the spec.
E
But, as I said, the metrics are currently using an old spec; it supports most of the features, but they could change. That shouldn't happen with the traces code. And we also have support for baggage.
E
And propagation, the ones that are in the spec, that is, the standard.
I
Yeah, I'm a little confused on what baggage is, sorry.
E
Yeah, this is about sharing context between systems, not between...
E
I
E
You can set context that you pass with the headers in your network calls or your RPC calls, so you can share that state with all the elements in an integrated network call. Do you know what I mean? Yes.
E
You are passing values in the headers, so you can know things on the other ends of the call, and you can use custom data there. So you can enrich your spans in the server with data that you are collecting in the client, for example.
I
E
And yeah, I think that's more or less how it's done.
E
Great. Sorry, let me share... you know, there is a document for...
E
Yeah, we have here the meeting notes; let me put... Justin, do you want me to share your company, or...?
E
Yeah, it's just to note that down. I don't know if you have any other question about the project or the status or something like that.
I
Yeah, I mean, I was just trying to think about, if you were to implement logging, what the ideal solution would be. I posted in the Slack channel that I was inspired by how the .NET implementation tries not to create a new definition for a logging interface; they just kind of ingest Microsoft.Extensions.Logging or whatever.
I
There's an existing logging library that comes with .NET that users use to log messages, and then there's some ingestion pipeline that they hook up to that. For Apple, the system is a little bit different because, yes, there is this new os_log capability that, you know...
I
The unified logging subsystem that Apple has been promoting for several WWDCs. And in iOS 15 plus you can create this thing called an OSLogStore to actually gain access to those logged messages from the current process, which is very inspiring: to be able to collect those messages from OSLogStore and, you know, transform them into...
I
...OpenTelemetry. It would have multiple ingestion technologies, and I was just trying to get my mind around whether you have had any thoughts about that, and where that type of code would live. Personally, I'm inspired by the os_log stuff, and that might be a good first start, because it's already built, and you would try to build a mapper to transform that data to the common data structure.
E
Yeah, you're right: it's tough having a logging system on Apple technologies.
E
Until now it was impossible by default, so the only way you could do that was capturing standard output or standard error directly.
E
In fact, I have to do that for my logging needs. I am capturing that, and I am currently sending the logs as span events, because there were no logging APIs in OpenTelemetry; I was sending the logs as events in the span, and with a tag you have to say that it was a log. And the only way you could do it was capturing standard error and standard output; when you are using os_log with the output in the default mode, it will be captured via standard error.
E
There was another library that was really similar to os_log, and there you could define a store, so you could capture what was being written if you used that library. But that means you are not using the standard os_log library, but a different library that Apple provided where you could capture that. With this new one, I don't know what the status of that previous library is; maybe that's what has evolved into the new one. I don't know if the new one will have support for the previous one.
E
So if you want logs to be captured before iOS 15, you will need to use that other library that was very similar to os_log. That was my first option when I was thinking about implementing the logging APIs: using that library that Apple had, and also giving options, for example, to capture standard output or standard error as part of the API, as a different...
E
So that was the initial idea. I should check now, with the new version, how things have changed in order to import the logs. But anyway, it's a different library depending on the version of iOS, so maybe it could work with two different frameworks depending on the version you're running.
E
That can more or less work but, yeah, capturing logs has not been an easy thing on Apple technologies; they were impossible to capture, and os_log is only captured internally. So if the user doesn't configure the project to show that on standard output, you won't get anything; it's not possible without using this new library.
I
Yeah, okay. At Square we've created this logging library called Aardvark; it's on GitHub, and we have some other collaborators that take care of it too. It has this idea of a logging interface, the interface that you use to log messages, and then there are log stores. You could probably have a separate store that you write to for this, that we could potentially plug into the open systems too.
E
Yeah, it shouldn't be really difficult to add an interface for that. Once we add logging to OpenTelemetry, we will probably provide interfaces for, for example, standard output, so users can configure that. So making the same transition for other logging mechanisms shouldn't be very difficult.
E
I think that could be doable with any library or logging system that is out there that has some standardized concepts. But, as I said, there are still no plans for adding logging to the project, except if someone really needs or wants it and is also going to spend some time working on it.
I
E
I
Thank you. Yeah, I'll have a look at the project to see if it's obvious where you would start to define some of these bits. I just had an idea: you know, the ability to read os_log is iOS 15 plus, but the ability to write os_log is lower, so theoretically you could opt users in to collecting those logs once you have a large majority of folks on iOS 15 plus. We don't have that luxury, but...
E
That's happening really fast, you know. Probably by January, half of the iOS users will already be on iOS 15, and by March, probably 80 or 90 percent of the users will be there. So yeah, it could be that fast; I think it has been that fast with other versions. But yeah, I don't know if you also need to support previous users; you must keep compatibility with previous users, so yeah.
I
Okay, well, thank you very much. I think it was very helpful.
E
Okay. You know, you can contact us through the Slack channel or come to other meetings whenever you want, and I am happy to help anyone that has questions or that wants to add something, to have the project grow, or at least advance from time to time. Cool, yeah.
I
I guess the last thing is: do you have a guide anywhere to point me to, like an end-to-end demo, or a way to get an app running locally and start seeing it send messages? You know.
E
One of the Swift guys that came to the meetings was preparing a demo, or an article, about easily using OpenTelemetry in your project, but he's still working on it. So, okay, I don't think it will come soon.
E
Okay, can you see this? Can you see the project here?
I
I can only see the OTel Swift SIG notes Google Doc. Okay.
E
Then, I think... how can I...?
E
So basically, this is the structure that will show in Xcode. We have some examples here.
E
They are quite simple, most of them, but you can get an idea of how things work. We have here the Simple Exporter: it basically shows the Jaeger and standard output exporters in action. It creates some simple spans and you can see that it's working. Also, here is the way to configure a local Jaeger in Docker.
E
Basically, this is the initialization; this is a simple span being created; this is sample code for a child and parent span; configuration for Jaeger; configuration for the standard output exporter. Here you can also play with other exporters: create the multi-span exporter, so it exports to both Jaeger and standard output, create the processor, and just run the test. It was a simple example; just run it. It's quite basic, but you can make it work.
E
So this is one of the examples. It's very simplified, but you can see how things work, and you can see the output: here's the span ID of one of them, with the name, the values, and the parent; this one has no parent, for example. This is the parent and child that are created later: their trace IDs are the same, this ID; this one doesn't have a parent, but this one has the ID of the parent here. So this is the basic example; the Simple Exporter is the simplest one.
E
There are also examples for Prometheus, also very basic, but you can configure your Prometheus and see that metrics are arriving there. No, this one is not there; I started with this one but didn't finish it. The network sample is similar to the exporter one, but it shows how to create the URLSession and how it gets captured.
E
H
E
...one with a delegate, and the other with a simple network call. This is the code that you need just for instrumenting; this is the example code. There is no OpenTelemetry code here, just here: just initialize the simple span processor, which is the simplest one that just outputs directly to standard output, without batching.
E
You add it to the tracer provider, initialize the URLSession instrumentation with the default configuration, and call both network calls, and it will create the spans; you can see here what happens. You could add the Jaeger exporter here and you could see that in Jaeger; that's the multi-exporter thing, for example. And this one is also quite simple.
I
So basically, what you do is you run these, and you run Jaeger or something in Docker, and it'll send the messages over there, and then you can visualize them. Yeah.
E
The network things happening here are the requests that we are sending that get captured. Yeah.
I
Okay, yeah, this helps a lot. I think it shows me what the flow is. I was wondering what this logger tracer is, because that one seems like the most advanced. Yes.
E
No, no, it's not the most advanced, I would say; it just shows what's happening. I mean, it logs how things are happening, so it just says when spans are starting, and so on.
E
It has more files, but it just shows what it's doing; it's just for outputting what the tracer is doing internally.
E
Okay, it's more of a debug thing, so you see all the things that you can create with propagation and that stuff, but it's not really standard usage of the library. I mean, if you want to integrate your app with just capturing, for example, network calls...
E
...just an initialization like this in your application code will make all your network spans appear, for example. And if you create a parent span here and you call the network, the network call will appear as a child of the span created before in the same thread, if it has not ended. So just with that, you don't need any configuration for the basic stuff; it will work with any app or library.
I
E
Yeah, and they are not very complex; in fact, if you look, they share lots of code. For example, the Datadog sample uses the same parent and child spans that are used in the Simple Exporter.
E
It uses the metrics that are used in the Prometheus sample, but it just configures them to use the Datadog exporter instead of other exporters. For Prometheus, it's the same: you must configure your local address for Prometheus, because it's a bit more cumbersome than Jaeger to get working. But if you change that, and you create this Prometheus exporter with your...
E
...address here, your real computer address, you could see your Prometheus metrics appear in there.
I
Okay, well, thank you for all this, all your time and walking me through this. I'll have a think about it and bring it back to my team.
I
E
For a mobile app, yeah, as I told you: either you have a native exporter for your backend, if you have one, and it could work; you can ingest Zipkin, or you could also ingest OTLP.
E
The OpenTelemetry protocol, yeah. You have here the OTLP exporter that connects to the OpenTelemetry collector running on your system, but that's not going to happen on an iPhone, because you cannot run other processes. But there is also a JSON exporter of the protobuf, of the...
E
...OTLP format, but exported as JSON. So instead of having a protobuf call to the collector, you have the JSON, and you send that to some endpoint that should be able to ingest it, as a way of having that in a bucket, if you don't want to use any of the vendors and you want to have your own intake, for example.
I
E
Yeah, I think maybe you could do that. It should... it might work if you configure the channel for another endpoint; you should be able to connect that, and you have configuration for that too. So probably you can configure it for another endpoint, and that could also be an option. Sorry.
I
I'm asking right now; it's okay. All right, well, I think this is awesome. I'll have to think more through it. I'll be in the Slack channel and I'll raise any questions I have there, and I might continue to come to some of these meetings. I saw that there was also a Log SIG, so I might go check things out there and see how things are going over there too.
I
E
Okay, and the same for you: any question you have, we usually answer quite fast in the Slack channel. I saw that you had one question and it took some time to answer, but yeah, I got back to you and I didn't want to...