From YouTube: 2022-03-16 meeting
A
We have... I think some people are out this week, but this meeting's recorded, and I will take notes, so even if we don't get a big crowd I will definitely pass it on to everybody. But yeah, I'm super interested just in general.
A
It's interesting to think about OpenTelemetry design, because a lot of what we've been doing as far as design is concerned is really staying in the realm of pre-existing designs. The basic tracing system is the same as what's been running in Google and Zipkin and all these other things.
A
So that's really well thought out. Metrics systems have been beaten to death. But now we're getting into things like links, for example, or exemplars (attaching trace IDs to metrics, basically) and things like that, and I feel like if we don't pay attention to the back-end implications, the implementation implications, of these data structures, we might be doing the world a disservice by foisting some model on everybody that's actually really expensive.
B
Yeah, it's an interesting topic. I mean, with links in particular, the back-end cost isn't so bad, although it certainly has some, I guess, hidden implications, in that you have to enforce limits, sometimes in the background. I think most of the challenges we've run into are in how you expose links to users in a way that makes any sense. Even if things make sense in the telemetry model, they might not make sense for actually describing what picture you want to see.
A
And Alex Gu. Alex, I don't think I know you, but nice to meet you.
A
But yeah, why don't we just go ahead and dive into it? We've got Oswaldo here from Microsoft to talk about link implementation. So, to kick it off, you were saying that exposing links is something you've found tricky. Do you mind maybe digging into what you found tricky, and what kind of solution you're going with for the time being?
B
Yeah, I mean, I have a few short...
A
Awesome. Okay, well, you want to take it away?
B
Although I've heard of discussions of different things people might want to use links for (one I've recently heard about was using them for HTTP retries and other things like that), primarily today we have them for batches and queues. And one thing we do presentationally is that we try to actually follow links when we're rendering them to users. So it's not just treating them as if they're hyperlinks and you click through them; we actually attempt to stitch the trace back together.
B
So that's part of the background on what we try to do with links, and it ends in a picture like this, where you started from a producer, but we can load the consumer all the way down, and we'll follow the links for the end user.
B
This brings a where-to-look problem. I tried to explain this as generically as I could so it's applicable to other telemetry stores, but in general the services in the system aren't necessarily talking to the same place. So even when we look at just a single trace ID, to stitch that picture there's some mapping that says where all the data for this trace ID lives, so that you can get your distributed trace together. Links throw a wrench in here by adding a multiplicative factor.
B
They say this item is essentially the same as five other items. If you still want to stitch that picture together, you have to know where the other end of the link was stored on the back-end side. This means that you'll probably have to require limits on your links, simply because otherwise, for every one item, people can 20x the impact, at least on whatever system you have that maintains this association, even if it's not the entire pipeline of saving the telemetry.
B
So this is a downside of links, but it's not really that avoidable, as far as I can tell, unless you propagated information about the resource that is storing the telemetry along with the trace context, but that's not something we have today. So you have to build these associations, and then you have this multiplicative factor with links.
B
Not just ahead of time. So let's say, putting links aside for a second, in a normal trace, service A and service B are not necessarily talking to the same place, but you still want to stitch that picture. So there's some mapping that says, for a given trace ID, there's some, I don't know, SQL database A and SQL database B, and they both contain spans for it.
B
So if you want to build that whole trace, you're going to need to query these two SQL databases. Links mean that if you're building this mapping, every span that has a link is actually n spans: it's as many spans as there are links.
A
And so I'm curious: do you just end up building essentially a pivot table or a join table of some kind in order to handle this?
B
I like to think of it more as a directory, let's say, that says: here's a trace ID, and here's where everything is looked up. Then, when you need to render, you get that directory, it says here's where everything is, and then you just do a fan-out query: I'm going to query every one of them and then build the picture.
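The directory idea described here can be sketched in a few lines. This is an illustrative model only (the class and store names are made up, not any vendor's actual schema); it shows how the lookup fans out over stores, and how links add the multiplicative factor discussed above by pulling additional trace IDs into the query set.

```python
from collections import defaultdict

class TraceDirectory:
    """Illustrative model: maps a trace ID to the set of stores holding its spans."""

    def __init__(self):
        self._stores = defaultdict(set)   # trace_id -> {store names}
        self._links = defaultdict(set)    # trace_id -> {linked trace ids}

    def record_span(self, trace_id, store, linked_trace_ids=()):
        self._stores[trace_id].add(store)
        self._links[trace_id].update(linked_trace_ids)

    def stores_for_render(self, trace_id, follow_links=True):
        """Fan-out target list: every store that must be queried to stitch the view."""
        trace_ids = {trace_id}
        if follow_links:
            # This is the multiplicative factor links add: more trace IDs to resolve.
            trace_ids |= self._links[trace_id]
        stores = set()
        for tid in trace_ids:
            stores |= self._stores[tid]
        return stores

directory = TraceDirectory()
directory.record_span("trace-1", "sql-a")
directory.record_span("trace-1", "sql-b")                                # normal distributed trace
directory.record_span("trace-1", "sql-a", linked_trace_ids=["trace-2"])  # span with a link
directory.record_span("trace-2", "sql-c")                                # producer stored elsewhere

print(sorted(directory.stores_for_render("trace-1")))  # ['sql-a', 'sql-b', 'sql-c']
```

Following the link to `trace-2` forces a query against a third store that the plain trace never touched, which is the extra processing cost being described.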
A
Right,
but
for
each
span
you
pull
in
that
because
that
span
has
links
it
may
be
pulling
in.
It
accounts
for
as
multiple
spans,
because
the
things
it's
linked
to
maybe
in
different
data
stores.
B
And before I leave this: when we look in terms of cost on back ends, that's where, at least from what I can tell, the primary costs come from. It doesn't really change data storage too much, unless you're storing links in some special way. It's really in how you keep the whole traces together; you're going to have to do extra processing every time you see a link. In terms of pain points, one big one is: what do links represent right now?
B
Is
this
like
without
heuristics
you
might
be
able
to
say,
oh
a
link
between
a
producer
and
consumer
is
probably
you
know
a
cue,
but
otherwise
you
know
you're
sort
of
blind,
and
then
you
don't
know
if
a
link
should
be
a
parent,
a
sibling
or
a
child,
or
if
you
shouldn't,
try
rendering
it
in
any
special
way
at
all
without
any
standardized
fields.
Here,
to
tell
you
what
the
relationship
is.
Implementers
are
kind
of
in
the
dark
so
that
that
leads
to
situations
where
you
might
say
you
have
a
send.
B
You
know
client
spam,
that's
related
to
a
message,
so
we
built
a
message
and
then
we
sent
it
out
to
our
queue
now.
This
is
linked
to
the
message.
So
you
know
this
send
sent
this
message,
but
should
it
be
a
child?
Are
they
siblings,
they're,
technically
both
because
they
lived
in
the
same
trace
id?
So
how
in
the
world
do
you
want
to
render
it
it's
a
without
more
information?
It's
just
you
know
everyone
can
sort
of.
Do
it
any
other
way,
just
purely
on
heuristics,
and
it's
not
really
great.
A
Yeah, that makes a lot of sense. We've discussed adding an attribute. Links can have attributes, so there's room to add information there, and obviously the default solution that shows up any time we have a problem like this is to add a type field and give every link a type.
A
There's
we've
successfully
avoided
adding
these
kinds
of
type
fields
so
far
in
open
telemetry,
because
it's
it's
often
limiting
right.
Like
you
know,
as
soon
as
you
say,
a
span
has
a
type
well,
a
span
might
have
two
types
or
three
types.
So
you
know
is
this
an
array
of
types?
A
So
we've
resorted
to
kind
of
sniffing.
You
know
like.
If
there
are
http
attributes
on
the
span,
then
it's
an
http
span,
but
in
the
case
of
links
it
does
seem,
like
maybe
a
type
and
or
a
relationship
yeah.
One
way
I've
thought
of
it.
A
Yeah
and
maybe
that's
that's
a
better
way,
rather
than
a
link
type
thinking
about
it
as
a
link
relationship
might
be
a
better
better
term
for
it.
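A renderer's use of such a hint could look something like the sketch below. To be clear, the `link.relationship` attribute name and its values are hypothetical here; nothing in the OpenTelemetry specification defines them today, and the fallback heuristics mirror the ones discussed in this conversation.

```python
# Hypothetical attribute key; the OpenTelemetry spec defines no such field today.
RELATIONSHIP_KEY = "link.relationship"

def render_role(link_attributes, same_trace):
    """Decide how a trace view might treat one link (illustrative heuristic only)."""
    relationship = link_attributes.get(RELATIONSHIP_KEY)
    if relationship == "informational":
        return "annotation-only"             # show as metadata; change no parenting
    if relationship == "retry":
        return "sibling-under-virtual-parent"  # group retries under a synthetic parent
    if relationship == "queue":
        return "follows-from"                # stitch producer -> consumer
    # No hint: fall back to guessing, as renderers are forced to do today.
    return "annotation-only" if same_trace else "unknown"

print(render_role({"link.relationship": "retry"}, same_trace=True))
# -> sibling-under-virtual-parent
```

The point of the sketch is the last branch: without a standardized relationship field, every renderer ends up in the `"unknown"` case and invents its own heuristics.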
B
Yes. I mean, if we look at relationship here: we can go by heuristics and say a link between two items in the same trace ID is probably a purely informational thing, and it should change no parenting. But if there was something that said the link relationship was informational, we would know not to render it as a link; it's just...
A
Just
as
an
example,
http
would
be
a
good
example
there
we
have
retries
and
redirects
and
there's
not
necessarily
that
that
relationship
can't
be
modeled
like
parent
child
very
well,
and
you
could
presume
they're
all
siblings,
and
you
know,
according
to
start
time,
figure
it
out.
But
we
wanted
to
use
links
there
to
make
it
clear
that
if
you
are
making
an
http
request
and
then
like
retrying,
it
twice
and
then
doing
a
redirect
after
that
using
links
so
that
you
can
actually
understand
the
relationship
in
that
chain
of
events.
A
Since
we
it's
kind
of
like
what
you're
just
saying,
you'd
have
to
kind
of
infer
it.
Otherwise,.
B
Yes, I mean, I think it's a good thing to bring up. It's something I looked at; I think there's a PR looking at links for redirects that I was taking a peek at. And certainly, if there was a relationship attribute, or a kind, or a type, or whichever, that told you this was a retry scenario or some other kind of special scenario...
B
Then
implementers
at
the
rendering
site
would
be
able
to
say
I'm
going
to
treat
all
these
retries
as
siblings,
but
make
a
virtual
parent.
That
says
this
is
a
retry
group
or
something
but
but
without
knowing
that
this
link's
any
different
than
a
q
message.
Producer
consumer
link
it's
hard
to
make
those
calls
other
than
saying
well,
I
know
what
the
implementations
are
like.
If
I
see
this
span
name
versus
this
span
kind
or
whatever
I'm
going
to
infer
what's
going
on.
A
Yeah, makes sense.
B
Cool, cool. And just two other minor ones; links actually are fairly straightforward, so not too many pain points. One is sampling, for us at least.
B
Samplers typically key on trace ID, and we usually expect them to. This means that when you're doing trace links you're going to get new trace IDs, and so the experience with links is basically: if you're using sampling, use links as few times as possible, because you're really going to mess up your odds of having a complete picture at the end of the day.
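The point about trace-ID-keyed samplers can be made concrete with a toy ratio sampler. The hashing scheme below is illustrative, not the spec's actual `TraceIdRatioBased` algorithm: each trace ID gets an independent deterministic decision, so a graph of several linked traces only survives intact when every member trace happened to be sampled, and the odds shrink multiplicatively.

```python
import hashlib

def sampled(trace_id: str, ratio: float) -> bool:
    """Deterministic per-trace-ID decision, like a trace-ID-keyed sampler makes."""
    digest = hashlib.sha256(trace_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return bucket < ratio

def whole_picture_kept(trace_ids, ratio):
    """A linked graph is complete only if every member trace was sampled."""
    return all(sampled(tid, ratio) for tid in trace_ids)

# With a 50% ratio, one trace survives about half the time, but a chain of
# four linked traces survives only about 0.5**4, i.e. roughly 6% of the time.
kept = sum(
    whole_picture_kept([f"trace-{i}-{hop}" for hop in range(4)], 0.5)
    for i in range(10_000)
)
print(kept / 10_000)  # roughly 0.06
```

Each asynchronous hop that starts a fresh trace is another independent coin flip against the complete picture, which is exactly why sampling plus heavy linking is a bad combination.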
B
When an item has multiple links and you're forced to have a user pick which one they want to see: how are they relevant? I understand there are attributes, but if we assume consumers are not manually doing everything, having some consistency in terms of what attributes are expected on links would help. In terms of automatic instrumentation, what do we want to be available there, so users have something to pick, rather than their favorite GUID?
A
Yeah, that makes a lot of sense. I'm curious to know what attributes, other than, you know, relationship, would be helpful here.
B
I think it changes with the scenario. Like, queues versus, I don't know, retries have different needs.
B
If
you're
looking
at
things
that
you
could
automatically
collect,
it's,
probably
not
too
many
options
like
you
might
have
things
like
actual
like
message,
size
or
something
like
that,
but
it's
something
that
would
have
to
be
explored
a
bit.
I'm
I'm
not
sure
exactly
what
values,
but
I
know
that
guides
are
probably
not
the
answer.
A
So
if
you
had
like
message
size
and
this
or
that,
but
it
is
curious
to
to
think
about
what
I'm
glad
we
have
link
attributes
but
yeah
I'm
curious
to
discover
what
we
should
be
putting
in
there.
So,
for
example,
with
like
retries,
you
know
you
could
put
the
retry
count
as
a
link
attribute,
but
you
could
also
put
it
as
a
spam
attribute.
So
why
would
you
put
it
on
the
link?
B
I
mean
so
retros
are
an
interesting
case
because,
as
a
as
I
understand
that
they
won't
cross
the
trace
boundary,
so
all
information
is
probably
already
available
for
you.
If
we
look
at
queues,
I
don't
know,
maybe
things
like
time
stamps
like
maybe
I
want
to
look
at
the
oldest
message
versus
the
newest
message
and
right
now,
you'll
have
to
query
all
of
them
to
know
anything
about
them.
A
Yeah, that makes sense, but it's also true that it's not clear how much information... I guess it's pretty situation-dependent. But the flip side, I would say, getting back to representation and all of that: it seems pretty unlikely that spans would have a mix of link types or link relationships on them at the same time, and each one of those scenarios has a pretty clear way to model it. You know, is this fan-in? Is this fan-out?
A
You
know,
is
this
a
chain
of
re-tries
or
whatnot?
So
it
does
seem
like
if
we
can
just
get
that
relationship
field
on
there.
That
would
probably
solve
most
of
the
display
problems.
B
Yeah, so relationship at least fixes how to fit it in a chart. Relationship won't necessarily help with "which link do I want to see", though. It's a tricky problem; I don't have answers for it, unfortunately, but it's certainly, as an observation, a challenge we ran into. When you put someone in that state, we might as well not even give people an option and just pick one for them, because they're just going to randomly pick. There's no value there.
A
Yeah, yeah. And so, getting back to "links should be used sparingly": with links within the same trace, like those HTTP ones, yes...
A
They don't hurt too much, exactly. But for links between traces, the main place we're looking at using them is these asynchronous systems, like Kafka and AMQP and things like that, and what we're trying to say, as the rule of thumb, is: for each asynchronous hop...
A
You
use
a
link
so
so
traces,
each
trace,
basically
trying
to
keep
a
trace
representing
a
synchronous
transaction
right
where
one
resource
is
actively
is
either
waiting
for
another
resource
to
complete
or
is
is
actively
using
resources
and
then,
if
you've
got
basically
a
gap
there
like
it,
went
into
you
know
some
some
q
system
and
then
popped
out
the
other
end
at
some
other
point.
A
That
would
then
you
would
link
those
together
rather
than
continuing
the
the
trace,
and
that's
that's
partially,
to
kind
of
put
a
an
upper
bound
on
how
long
a
trace
might
stick
around
like
there's
no
good
way
to
know
when
a
trace
is
ended,
right,
yeah
and.
A
If
in
open
telemetry,
we
can
keep
a
kind
of
a
concept
of
a
traced
window
in
there,
it's
not
like
an
official
concept,
but
just
from
a
modeling
perspective,
if
you
can
say
somewhat
reliably
that
you
know
if
you
haven't,
if
you
haven't
seen,
if
it's
been
15
minutes,
you
know
or
something
like
that,
you
could
reliably
say:
there's
not
going
to
be
any
more
spans
coming
in
here,
at
least
as
far
as
like
the
instrumentation
we're
providing
people
which
is
going
to
be
like
the
bulk
of
the
instrumentation
they
use
and
things
like
message,
cues
and
stuff
like
that.
A
As
soon
as
you
start
talking
about
these
asynchronous
systems,
now,
there's
really
no
time
limit.
You
know
it
could
be
tomorrow,
potentially
like
like
a
background
job,
for
example.
A
Yes,
so
on
the
one
hand,
it's
helpful
to
to
say,
like
links
are,
are
good
here,
because
it
helps
preserve
that
concept
of
a
you
know
of
like
an
open
trace
window
or
something
like
that.
But
the
flip
side,
as
you
said,
is
this:
this
sampling
problem
and-
and
I
don't.
B
Yeah,
I
mean
it's
not
just
sampling,
I
mean
part
of
it's
also
right
in
some
ways.
Vendor
specific,
it
depends
how
you
expose
the
data,
but
in
general
I
expect
it's
easier
to
craft
queries
and
data
analysis
and
whatnot
on
individual
traces
than
when
you
have
to
do
oh
well.
Now
I
need
to
figure
out
what
this
link
thing
is
and
join
them
together
and
then
union
or
whatever.
A
That might not matter; that's back-end specific. But when you start talking about traces that are linked together into a graph, especially these scenarios where you're talking about, you know, batch message processing, these graphs could get really big.
B
Oh, it absolutely does make sense. The graphs can definitely grow huge, and that has implications on several layers. It has implications on the visualizations. It has implications on back-end processing: if you have a concept of a trace at a back-end level, you'll have to keep that around for as long as you might need, and you'll probably end up putting some synthetic limits in place to say, well, we'll stop caring about a trace after so long, just to be reasonable about it.
A
Yeah,
but
I
guess
yeah
to
that
point,
maybe
when
it
comes
to
limitations,
I'm
curious.
So
it's
true
that
you
know
you
could
stick
to
having
your
analysis.
Just
work
on
individual
traces,
but
you're
gonna
want
correlations
across
link
traces.
You
know
and.
B
In
general,
I
mean
what
I've
seen
as
I
guess,
sort
of
ignoring
the
time
issue
or
just
assuming
that
it
won't
get
so
bad,
even
though
there's
certainly
nothing
that
technically
will
prevent
it
from
happening
and
just
using
links
when
you're
forced
to
so
basically
whenever
you
might
have
a
divergent
history,
so
these
bad,
these
batching
scenarios
or
whatnot
and
otherwise,
async
or
otherwise
keeping
the
trace
idea
around.
I
don't
know
if
it's
a
great
model,
but
it
is
one
I've
seen.
A
Yeah,
I'm
I'm
curious
when
it
comes
to
limitations.
Have
you
put
well,
I
guess
I'm
curious.
Are
you
doing
any
analysis
that
use
links
yet,
or
are
you
just
just
displaying
them
in
like
a
trace
view
for
the
time
being.
B
So
we
recent
we
displayed
them,
I
mean
there's
some
processing
that
has
to
happen
to
understand
where
they
live
right
and
that
does
have
some
limits.
So
for
us
we
we
assume
that
traces
live
on
the
order
of
hours
if
traces
are
living.
On
the
order
of
days.
A
The time you're interested in measuring is actually the gap, if that makes sense. Like, if something's really slow in your background processing, it might very well be that these things are going into a message broker and then disappearing for 30 minutes before they pop out the other end, and you want to know that that's the problem, and potentially even alert on that sort of thing. In a world where we got data out of all these message brokers and whatever, it wouldn't be a special problem, but we don't live in that world, generally speaking.
B
Measuring those gaps, like time spent in queue: I like to think of that as one of these problems where you'll be forced to do this kind of processing today, and then you'll have time limits and whatnot, because you'll want to make sure you're not processing for too long. And if you did propagate things like a timestamp along with your enqueue, then when it gets dequeued on the other side you could emit that as an attribute, potentially even on the link.
A
That's true. That's information that would have to get propagated as trace state, essentially.
B
Yeah, I mean, in terms of limits, if we're talking about links here: there is a limit on the number of links we will process on a given span. It's on the order of, I don't know, a few dozen or something; it's less than 100, but I don't know the exact number. Though presumably no one's individually looking at that many links, to be honest. But it is a limitation that's there. Besides that, there's not too much in the way of link-specific limitations.
A
Yeah, it's kind of interesting. It all gets back to really the one problem child we have, just in general, with modeling and instrumentation, which is these batch-message-processing cases. If you're just talking about pub/sub stuff with no batching going on, then it's all well and good, but then there are the systems where you're getting, like, a batch of 50 messages off of the queue at once and then processing through those 50...
A
Yeah, it makes the instrumentation very difficult to deal with, because you don't necessarily have a bounding closure where you can easily stick a bunch of instrumentation. Usually these APIs are: here is a batch of messages, user. And then the user does something with it, and the contexts are on the messages.
A
So
there's
some
trickiness
there
is
how
we
even
write
instrumentation
that
doesn't
involve
the
end
user
having
to
do
something
and
then
the
other
deal
is.
How
precisely
do
we
want
to
to
model
those
relationships?
And
I
don't
know
if
you've
had
a
look
at
any
of
the
messaging
proposals
yet,
but
I'm
curious,
if
you
have,
if
you
have
any
any
thoughts
on
them,.
A
Okay, maybe as a follow-up I can send that your way, because I'd be curious to know what you think. Sure.
D
I'm [inaudible] Kumar, from the AWS X-Ray team, and, to be honest, we are also facing exactly the same problem that Teddy just described. So I will also be interested in how we are thinking about it, and I think Teddy actually described it very well: we want to do it without customers doing anything. Instrumentation should be able to handle it automatically, as far as possible, without customers taking any actions. So yeah, also interested to hear the opinions.
D
Yeah, if you want, I can quickly summarize in one minute what our thinking is. So, based on what we have seen, it's very hard for our instrumentation to know how customers are processing the messages. Are they just processing it as a batch and calling the downstream service as a batch, or maybe storing it, like putting it in a big file and storing it into some storage? Or they could process it one by one.
D
They
could
actually
split
those
into
the
multiple
partitions
and
then
they
actually
process
each
partition
separately.
So
it's
very
challenging.
So
one
way
we
were
thinking
is
like
maybe
default
experiences
like
for
the
default
experience
for
all
all
the
processing.
A
You essentially have to presume that the end user has created a bounding span. In other words, if there is a current span available when they call the API for requesting the batch of messages, then that API call can be instrumented to automatically link all of those messages to the current span that's available. So that's one way to do it, but, like you said, it presumes that you want all of those messages linked to that same span.
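The shape of that instrumentation might look like the sketch below. Everything here is illustrative: the message/header layout, the `receive_batch` helper, and the tuple-based link representation are all made up for the example. Real instrumentation would use the SDK's propagators to extract a `traceparent` from each message and build proper `Link` objects, but the control flow is the same: extract a context per message and hang every link off the caller's one bounding span.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    body: bytes
    headers: dict = field(default_factory=dict)   # carries the trace context

@dataclass
class Span:
    name: str
    links: list = field(default_factory=list)     # (trace_id, span_id) pairs here

def receive_batch(queue_client, current_span):
    """Instrumented 'get batch' call: link every message to the caller's span."""
    batch = queue_client.receive()
    if current_span is not None:                  # only works if a bounding span exists
        for msg in batch:
            ctx = (msg.headers.get("trace_id"), msg.headers.get("span_id"))
            if ctx[0] is not None:
                current_span.links.append(ctx)    # all messages hang off one span
    return batch

class FakeQueue:
    """Stand-in for a real queue client, for the example only."""
    def receive(self):
        return [
            Message(b"a", {"trace_id": "t1", "span_id": "s1"}),
            Message(b"b", {"trace_id": "t2", "span_id": "s2"}),
        ]

span = Span("process-batch")
receive_batch(FakeQueue(), span)
print(span.links)  # [('t1', 's1'), ('t2', 's2')]
```

The `if current_span is not None` guard is the limitation being described: without a user-created bounding span, the instrumentation has nowhere to attach the links automatically.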
A
But that's basically the only thing we can see that we could really give people automatically. Any other scenario, as far as I can tell, would require us giving the user, like, a utility function, maybe, to make it easier, but they would still have to do something themselves regarding extracting the trace context from the message and then doing something with it.
D
Yeah, I think, at a high level, our thinking kind of aligns with what you mentioned. So, by default, our SDK will assume that the customer wants to sample all the messages, even if only one of the messages is sampled. So, within the batch, if any one of the messages is sampled, we assume that the whole invocation on the consumer side is...
D
It's
sampled
and
that's
how
we
treat
it
by
default,
but
we'll
allow
customers
to
like
they
can
write
one
or
two
lines
to
say
that
I
am
processing
this
message
and
if
that
message
is
not
sampled,
we
will
not
include
that
into
the
span
or
will
not
link
it,
and
while
calling
the
downstream
services
will
not
pass
the
sample.
True
decision
for
unsampled
messages.
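The default described here, sampling the consumer invocation if any incoming message was sampled, reduces to an OR over the messages' sampled flags. This is a sketch of that logic, not AWS X-Ray's actual SDK code; the only non-invented detail is the sampled bit, which is the low bit of the W3C `trace-flags` field in a `traceparent` header.

```python
SAMPLED_FLAG = 0x01  # W3C trace-flags sampled bit, as carried in a traceparent header

def consumer_sampled(message_trace_flags):
    """Batch-level default: sampled if any one incoming message was sampled."""
    return any(flags & SAMPLED_FLAG for flags in message_trace_flags)

def links_to_record(messages, include_unsampled=False):
    """Optionally drop links to unsampled messages, as the opt-in described above."""
    return [
        m for m in messages
        if include_unsampled or (m["trace_flags"] & SAMPLED_FLAG)
    ]

batch = [
    {"trace_id": "t1", "trace_flags": 0x01},  # sampled
    {"trace_id": "t2", "trace_flags": 0x00},  # not sampled
]
print(consumer_sampled(m["trace_flags"] for m in batch))  # True
print([m["trace_id"] for m in links_to_record(batch)])    # ['t1']
```

Setting `include_unsampled=True` corresponds to the alternative suggested a moment later in the conversation: always record the links and let the flags carry the sampling information.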
D
However,
if
they
are
processing
the
sample
messages
they
can
like,
we
will
attach
that
to
the
span,
we'll
sorry
link
it
to
the
step
span
and
then,
as
well
as
while
calling
the
downstream
service
will
pop
past
the
trace
contact
with
sampling
through
yeah.
A
Yeah, no, I mean, I would honestly recommend that you always attach the messages to the span, whether they're sampled or not, because that's information the user would want. I could be wrong about that, but that's my instinct. But yeah, you do have the question of when you choose, or how you choose, whether or not to have any further sampling.
A
Essentially... and I don't know. We have some stuff in OpenTelemetry now around span-specific sampling, like weighted sampling, but if the user has already opened the span and you're attaching these links to an existing span, then you've already kind of made a sampling decision before you even unpack these messages.
A
So I struggle a little bit to see how the messages could influence whether there's further sampling or not. And the end result is, it seems like, if you're doing much sampling at all: the more sampling you're doing and the more linking you're doing, the more incomplete your graph is going to be. And, I don't know, with weighted sampling we have a way of using the trace ID to weight whether or not we sample, which helps make it more likely, when you're sampling spans, that you get complete traces, but I don't personally see how we could apply something like that to this situation.
A
So we're trying to get that changed, to say links can be added to a span at any time. But, you know, as soon as you do that, then you're saying these links won't even be available to the sampler. So it's kind of a conundrum.
A
I'm
personally,
I'm
not
I'm
not
totally
sure
what
the
solution
is,
because
with
these
links
and
every
like,
maybe
you
have
a
better
chance
at
it
if
you're
doing
sampling
later
on
in
the
chain,
but
I
don't
know
even
there,
it
seems
a
little
tricky.
So
I
think
that's
like
a
big
open
question
with
with
links
as
to
whether
or
not
there's
like
an
additional
sampling
algorithm.
We
want
to
add
to
open
telemetry
that
that
makes
life
better
for
our
users.
A
Yeah, and I mean, I think that's straightforward enough to do, except... well, I guess what I'm saying is you can do that, and you can still do that with links. It's just that the end result is you can't use the information in the links to adjust that sampling decision, which just means you're increasing the chances that you won't get a complete graph.
A
So
if
they
want
to
have
like
you
know,
head-based
sampling
that
that's
no
problem,
it
just
means
that,
like
these
graphs
are
probably
going
to
be
incomplete
if
they're
doing
a
lot
of
sampling,
so
I
don't
know
personally,
I
think
I
think
the
way
we're
approaching
sampling
in
general
needs
to
to
change.
A
Like
I
think
end
users,
configuring
sampling
and
having
control
over
sampling,
doesn't
work
very
well
and
as
part
of
adding
like
a
control
plane
to
open
telemetry,
we
can
potentially
move
all
of
this
sampling
stuff
to
more
of
like
a
feedback
loop
with
the
back
end
system.
A
That's
doing
the
data
processing
because
you
know
it's
just
what
kind
of
sampling
you're
doing
really
depends
on
what
you're
doing
with
data
right
and
like
in
general.
Optimal
sampling
is
not
not
something
an
end
user
can
can
figure
out
very
well
on
their
own,
in
my
opinion,
unless
they're
just
doing
very
basic
things
with
their
tracing
data
right
like
looking
at
averages
or
something
like
that,.
A
Yeah
yeah
yeah,
so
I
think,
there's
well
there's
two
layers:
one
is
to
not
sample
up
front
at
all
right
and
having
some
intermediary
stage
where
you're
collecting
the
data
and
like
basically
defer
sampling
to
some
intermediary
stage
like
the
collector
or
something
like
that.
So
that's
that's
one
way
to
to
give
yourself
access
potentially
to
like
more
information.
A
It
can
issue
out
a
sampling
rule
to
the
jaeger
clients
that
says:
hey
crank,
crank
the
sampling
up
on
these
spans
because
we're
getting
a
ton
of
them,
and
so
it's
not
it's
really
not
important
to
to
have
them
all.
Well.
So
so
you
can
get
a
kind
of
sampling,
that's
tuned,
towards
weeding
out
really
common
data,
while
still
preserving
uncommon
data,
and
that's
just
not
the
kind
of
thing
an
end
user
can
do
right,
like
only
the
back
end,
can
do
that
kind
of
sampling,
stuff.
D
Yeah,
I
think
that's
a
great
idea,
and
I
agree,
like
customers
generally
struggles
like
if
what
sampling
rate
is
I
actually
put
like
we
don't
have
like,
like
like
question
mark
like
a
customer
shouldn't,
actually
worry
about
the
sampling
rate
at
all.
Our
system
behind
the
scenes
will
be
able
to
handle
it
automatically.
A
I'm just going to put this in the chat, but here's a link to the agent management protocol that's being worked on right now, if you're interested in that. There is an agent management spec group that meets... I forget when they meet, but you can find them on the calendar.
A
But
that's
that's
the
spec
as
it
currently
stands.
So
it's
about,
like
general
in
general
purpose
like
agent
management,
but
I
think
sampling's,
probably
one
more
interesting
things
you
could
do
with
remote
agent
management.
C
Yeah, we're mostly out of time, but I do have a couple of questions related to the other topics that we have on the agenda. Oh, sure. So, do you have any updates related to committee sponsorship?
A
Is
interested
in
in
helping
this
the
sick?
He
can't
he
won't
be
able
to
come
to
meetings
until
april
because
he
has
to
rearrange
his
schedule,
but
I
think
he
can
help
ensure
that
future
prs
don't
don't
get
jammed
up.
Basically,
so
so
he's
a
person
to
reach
out
to
going
forwards.
A
Okay,
as
far
as
like
the
sponsorship
concept
got
a
little
bit
of
like
kind
of
like
push
back
about
what
what
does
like
a
sponsor
really
mean,
and
so
I'm
kind
of
like
re-drafting
after
talking
to
a
lot
of
people
kind
of
redrafting.
My
proposal
still
including
like
getting
rid
of
the
sponsor
language,
but
just
saying
for
for
specification
things.
If
we
spin
one
up,
there
has
to
be
tc
member
involved
right
like
we
can't
have.
A
We
can't
have
spec
working
groups
working
without
involvement
from
like
the
tc,
because
it
just
doesn't
just
doesn't
work.
So
I
don't
think
that's
super
controversial,
so
we
might
not
be
calling
sponsors
but
anyways
things.
Don't
think
that
wouldn't
change
too
much
for
for
this
group.
As
far
as
you
know,
having
a
tc
member
here,
bogdan
is
gonna.
He's
gonna
be
at
the
thursday
meeting
to
talk
about
links
in
the
morning
by
the
way.
Okay,
I
don't
so
I
don't
know
if
you
can
make
that
meeting
dennis
but
he'll
be
there.
A
Great
and
the
alternate
idea
floating
right
now
that
seems
interesting
again
hasn't
been
approved,
but
rather
than
sponsors.
A
Looking
at
making
more
like
a
sprint
process
for
the
specification
like
creating
a
project
board
for
this
specification
and
like
every
month,
having
like
a
basically
like
a
triage
session
and
having
the
tc
and
like
the
spec
community,
decide,
you
know
what
what
issues
npr's
are
like
on
the
agenda
for
that
month
and
then
trying
to
to
get
through
that
work,
and
hopefully
that'll
one,
add
clarity
as
to
what
people
should
be
paying
attention
to.
A
So
we
can
be
maybe
a
little
more
focused
rather
than
spread
out
across,
like
all
the
open
issues
and
then
yeah
the
flip
side,
also
giving
a
better
handle
on
when
to
to
say
no
to
things
right.
If
you
can
figure
out
how
much
we
can
get
through
in
say
a
month
put
it
into
a
project
board,
and
then
people
know
that
you
know
if
their
their
issue
or
thing
is
not.
A
You
know
is
not
on
board
for
that
month.
Then
we
we
don't
have
time
to
to
get
to
it,
so
maybe
provides
clarity
like
in
both
directions
like
helping
us
focus
more
on
the
stuff
we're
trying
to
get
through,
but
then
also
makes
it
a
little
more
straightforward
to
to
go
to
other
issues
and
say
like
we're
sorry,
this
is
an
interesting
idea,
but
we
don't.
We
don't
have
the
bandwidth
to
to
process
it
right
now.
So
don't
don't
sit
here.
C
We can probably continue talking about this next week, but we do have this timeline and scope for v1, yes. And basically we wanted to do this by the end of this month, so we have two weeks left. So I was thinking that we probably want to bring, like, a reduced scope, since we didn't even start any kind of discussion on this other topic, called context propagation.
C
Maybe
we
would
like
just
to
move
it
out
of
scope
for
v1
and
address
it
further,
just
to
make
sure
that
we
can,
you
know
complete
all
the
work
that
we
already
started
and
eventually
like
remove
the
semantic
dimensions
to
we
want
the
stable
version.
So
that's
something
that
probably
I
will
create
a
pull
request
to
this
old
tab
for
the
for
next
week,
and
we
probably
can
have
another
discussion
doesn't
seems,
seems
okay
for
you.
A
That was hard to process, right, because they tend to be spec proposals, not timelines and things. But I agree with you: we can just move that out of our v1. Maybe what we need really is more just, like, a project board. We could create a project board in GitHub, if that would be helpful for this stuff.
A
So maybe next meeting... let's reserve next meeting for that. We'll have a project board, and we'll just talk about required attribute sets, just getting that over the finish line. Okay, I'll point Riley at that as well, and let him know that's kind of our last piece, and then we can also maybe just take the rest of the stuff we've been talking about and just fill out the project board, so it's written down somewhere.
C
Sounds great. So, basically, we have two more items. There's this links stuff, and probably we can have another discussion on the 30th about it, and then, like, the 4xx status codes; that's something that we probably... oh.
A
Yeah,
oh,
that
got
decided
today
we're
not
we're
not
we're
not
changing
things
that
we
had
a
discussion
in
the
specific
this
morning
about
it.
Okay,.
A
Yep
yeah
yeah,
some
some
coherent
reasoning
got
laid
out
for
why
to
keep
it,
which
is
basically
the
principle
of
least
surprise
right.
I
see
I
see.
C
All
right
yeah,
it
will
be
great
to
like
just
to
like
put
all
these
details
or
decisions
to
that
to
the
pull
request
that
james
was
created,
yeah
and
just
we
can.
We
can
consider
it
resolve.
A
All right, see y'all Thursday, whoever can make it; otherwise, next Tuesday. Osvaldo, thank you so much for coming and presenting. That was super helpful.