From YouTube: 2022-06-23 meeting
Instrumentation: Messaging
C
And I see, I mean, on the agenda there are, I guess, two main topics; there is the latter one, general attributes and span names. I think we don't have many of the people today who usually participate in that discussion.
C
So I'm not sure if we will get to that today. We have the x-ray demo, that's a follow-up from last time, so I think we can do that. And I also remember, Leila, you put some questions into Slack and I asked you to bring them here, so maybe we can start out with that.
B
Okay. So Tim and I work at Particular Software, and we basically offer a messaging middleware technology. We were looking at introducing OpenTelemetry in our product to help our customers move forward, and one of the things that we were also considering, or thinking about, is:
B
Should we implement the messaging specification? Because one of the things that we are wondering is: it makes a lot of sense for, for example, a client SDK like RabbitMQ to implement the messaging specification, but because we are a layer on top of that, we were asking ourselves: does it make sense for us to adhere to that same specification, or could that become more confusing to the users?
C
That is a great question; that is basically about multiple layers and the danger of duplicate instrumentation.
C
This is a question we didn't really discuss in the group yet. There are people who were thinking about that in other contexts, not in messaging, and I'm afraid OpenTelemetry does not have a really stable solution for that yet. I think the safest solution would probably be that you provide consistent instrumentation from your side and:
C
And maybe then, in the end, suppress lower-level instrumentation that might or might not be there. I think, in the end, it's probably your call where to go. I mean, the two possibilities are: you don't provide instrumentation and rely on other libraries providing instrumentation, and then you might end up with different user experiences, and maybe some libraries don't have instrumentation at all.
C
So you don't really know what you get. And the other option is that you just take it into your own hands, provide this instrumentation, and then more or less suppress any lower-level instrumentation that might be there. And I think, in the end, it's:
C
It's too bad that Duane is not here (he's from Solace); I think they have a similar use case, and it would be interesting to get his opinion on that too.
B
One approach that we considered (Tim, correct me if I say anything wrong) is to basically try to create a solution in which we add instrumentation where it makes sense, to create visibility into what we are doing inside the framework, and then still be able to connect that with whatever a downstream component may be emitting as well. So let's say that a user is using some transport; we have the messaging middleware on top of that, and we're adding instrumentation that creates visibility into what we are doing in the framework.
C
What is your middleware exactly doing? Is your middleware, in a sense, a broker middleware?
A
Yeah, no, it's more basically on the client side: providing APIs and abstractions on top of transports, for example RabbitMQ, what we call transports or brokers, ultimately. So it's more the client side where, as you already said, the difficulty is that we can't really know what kind of underlying OpenTelemetry support there is, ultimately, at the client SDK level, on the broker side, and so on.
A
Even on our side we have multiple levels: we have basically the core abstraction API, which is stable across the different transports like Azure Service Bus, RabbitMQ, and these types of things, and then we're also looking, probably more as a next step, at adding additional instrumentation for our transport-specific aspects, like the way we integrate with RabbitMQ or the way:
A
The way we integrate with the transports, to provide additional information. And one of the difficulties we're experiencing at the moment is, as Leila described, that we don't really know what is there and what is not there. How far do our implementations go, ultimately, on the transports which are supported? Where do the client SDKs go? And it looks like, at the moment, at least in the messaging space and the .NET space, there aren't many client SDKs that are doing anything, or much, in regards to OpenTelemetry.
A
So we also don't have a lot of things to test this against. What we want to avoid is that, as soon as RabbitMQ deploys something in the client, it screws up something on our side, or that we screw up something on their side.
A
That's the thing we kind of want to avoid. And we saw from community contributions that people typically look at the messaging spec the way it's described and try to follow that messaging spec, with the tags and these types of things, and there then seems to be a high chance of getting into conflicts in that regard. So I think one of the discussions we're having is whether we should maybe really try to stick to our own namespaces, for example.
C
I think we have time today. In terms of the agenda, I think we will skip the discussion about the messaging attributes, because many of the participants are not here, and then we can split the time in half between the two topics we have. Until 8:30 we can stay on this topic, and then at 8:30 we can switch to the x-ray. Okay?
E
Yeah, fine. So yeah, just throw at me everything you expect.
A
Okay, should I start, Leila, and you just fill in where I miss some details, I guess? So, the main reason we're doing this at the moment is that most of the client SDKs are not really doing OpenTelemetry stuff yet. One of the benefits we see is that our clients, regardless of the transport they are using, already get a very high-level sort of tracing capability, where they can see tracing at least on, how shall we say, more:
A
The business message level. But that hides a lot of the underlying things that are actually going to happen, and we don't know whether users then want to really see the whole flow from where they send a message via our APIs:
A
To what's going to happen on the actual transport layer: how, for example, the topology is set up for RabbitMQ, let's say for pub/sub or for direct sends, so that the user would see the whole flow from our API, towards our transport layer, towards using the RabbitMQ client APIs, and how it then unfolds again on the other side, on the receiver's or subscriber's side. The thing we're kind of worried about, and it's hard because, again, there's not much we can test this against at the moment, is: what if the user adds sources for both our diagnostics and, let's say, RabbitMQ? How do we make sure that these things connect? Because otherwise, in the tooling, it just seems to be completely arbitrary traces, where some stuff might be connected and:
A
Some stuff is not connected. There's some trickiness in how exactly we do the dispatching of messages, because we have some sort of, I think it's described in the specs as batching, we call it batch dispatch, where we group up a bunch of sends and then only hand them over to the transport layer.
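The batch-dispatch pattern described here is one place where span links, rather than a single parent, keep the trace connected. Below is a minimal stand-in sketch, not the actual NServiceBus or OpenTelemetry API: each buffered send keeps its own span context on the message, and the dispatch span links to all of them instead of choosing one as its parent.

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class SpanContext:
    trace_id: str
    span_id: str

@dataclass
class Span:
    name: str
    context: SpanContext
    links: list = field(default_factory=list)

def new_context() -> SpanContext:
    # Random ids stand in for real W3C trace/span ids.
    return SpanContext(uuid.uuid4().hex, uuid.uuid4().hex[:16])

# Each buffered send records its own span context on the message.
messages = [{"body": f"msg-{i}", "span_context": new_context()} for i in range(3)]

# The batch-dispatch span links back to every buffered send instead of
# picking a single one of them as its parent.
dispatch = Span(
    name="batch dispatch",
    context=new_context(),
    links=[m["span_context"] for m in messages],
)
```

A backend can then walk from the dispatch span back to each individual send, which is exactly the connection the speakers are worried about losing.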
A
So there are a lot of things where we need to make sure that the spans are correctly connected, in a way that makes sense for the user, and to make sure that some spans aren't just popping up somewhere without any connections, not making sense, or even, in the worst case, if we use the wrong parents, splitting spans apart, because at some point the connection is not made well enough, or it's just, yeah.
A
So at the moment we use message headers, which, again, on the enterprise level, is for us:
A
Ultimately, every transport in the end supports some sort of message headers. Some do it natively; for some, we wrap it in something additional to provide the message headers concept. But we use the headers there, and that meant also that if we, for example, use a traceparent header or something like that, we immediately get into potential conflicts if somebody else wants to use exactly that same header. So currently the idea is to use that, but maybe with different, like our own custom, headers, yeah.
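The conflict being discussed is over the W3C trace-context `traceparent` header, which OpenTelemetry propagators write into carriers such as message headers. A minimal hand-rolled sketch of that header format follows; in practice you would use a propagator API rather than code like this, so treat it as an illustration of what ends up in the header, not as production propagation code.

```python
import re

def inject_traceparent(headers: dict, trace_id: str, span_id: str,
                       sampled: bool = True) -> None:
    """Write a W3C trace-context header: version-trace_id-span_id-flags."""
    flags = "01" if sampled else "00"
    headers["traceparent"] = f"00-{trace_id}-{span_id}-{flags}"

_TRACEPARENT = re.compile(r"^00-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$")

def extract_traceparent(headers: dict):
    """Return (trace_id, span_id, sampled) or None if absent or malformed."""
    m = _TRACEPARENT.match(headers.get("traceparent", ""))
    if not m:
        return None
    trace_id, span_id, flags = m.groups()
    return trace_id, span_id, int(flags, 16) & 0x01 == 0x01

headers = {}
inject_traceparent(headers, "4bf92f3577b34da6a3ce929d0e0e4736", "00f067aa0ba902b7")
```

Because the header name is standardized, two layers writing it into the same message will overwrite each other, which is why the speakers consider custom header names for their own layer's context.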
E
You can use it, because, and this is what I was going to present, you're not the only one who uses these headers, okay? So feel free to keep using it. Oh yeah.
E
Ours, though, is based on OpenCensus, but Knative uses traceparent and trace context together too, so I guess it's okay. So let me then rephrase: there is, say, half of a system on the left side that is traced; then data enters a RabbitMQ queue.
A
This has never been possible before. We, for example, provided some kind of tracing functionality using different techniques in our products, where we were able to create tracing in different ways with our own custom tooling and everything like that, and there it was clear that we don't have any access to what's going on under the hood, you know, in all the libraries and the transports, ultimately, that have been there.
A
I'm not sure what the users then really expect that they want to see, you know, to what detail they really want to see their traces. I think we're not really sure.
A
They don't need it from our perspective, if that helps, because, again, we can connect all the message flows basically on our level.
A
I think not from my side. I'm not sure whether Leila has some questions. As I already said, at the moment our current efforts are mostly at the, you know, highest level of abstraction.
A
And the transport side, like adding more transport-specific details, let's say adding transport-specific routing information to tags and all these types of things, is probably more of a next step, if users show interest in getting these additional details. So we kind of don't have enough experience yet with how exactly we're going to interact with the client SDKs on the tracing side.
E
Yeah, this makes sense. But okay, so the final question, then, is about messaging topologies. So all of this that lets you use retries, stuff like that: do you use them? And when you create traces and spans, you know, the relationships between traces and spans and:
B
That we have as well. Regarding retries: our framework does add a retry, or recoverability, mechanism. So when we fail to handle a message, what we do now is we catch that exception and we make sure that the active trace is marked as failed, so we can propagate that information.
B
But then what happens is that when that message is picked back up from the broker, you'll just see a sort of duplicate trace, and then hopefully, if receiving and handling that message then succeeds, that one will not appear as failed. But one of the things that we were also wondering is whether it wouldn't make sense to link those traces together, so that you can more easily understand that those are the same message. You would be able to deduce that information by looking at the trace tags, for example, but this would make it more apparent. It's also kind of tricky, because when we look at certain tracing tools, sometimes having a lot of links can also appear a bit chaotic.
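What is described here, marking the failed attempt's span as an error and optionally linking the redelivery back to it, can be sketched as follows. The `Span` class is a simplified stand-in for illustration only, not a real tracing API; the shape of the link (a trace-id/span-id pair) is the assumption being made.

```python
import uuid

class Span:
    """Simplified stand-in for a tracing span; not the OpenTelemetry API."""
    def __init__(self, name, links=()):
        self.name = name
        self.trace_id = uuid.uuid4().hex
        self.span_id = uuid.uuid4().hex[:16]
        self.status = "UNSET"
        self.links = list(links)

def handle_delivery(message, process, previous_attempt=None):
    """Process one delivery. On failure, mark the span as an error; a
    later retry can pass this span's ids to link the attempts together."""
    links = [previous_attempt] if previous_attempt else []
    span = Span(f"process {message['id']}", links=links)
    try:
        process(message)
        span.status = "OK"
    except Exception:
        span.status = "ERROR"  # the failed attempt stays visible in the trace
    return span

def failing_handler(message):
    raise RuntimeError("handler failed")

msg = {"id": "m1"}
first = handle_delivery(msg, failing_handler)
retry = handle_delivery(msg, lambda m: None,
                        previous_attempt=(first.trace_id, first.span_id))
```

The link makes the "same message, second attempt" relationship explicit instead of leaving the backend to infer it from matching message-id tags.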
E
Yeah, yeah, exactly, this was basically what my question was about: relationships between traces and spans. Like, say, I don't know if you support this topology, like fan-out. You know, RabbitMQ recently has streams, for example. So different usages, different exchanges, different combinations of queues, and for me it would be interesting:
E
How folks model, you know, the computations, using what RabbitMQ gives them, and how they model or represent these computations with traces and spans.
E
This is an immediate goal for some folks, and, well, the range of our experience, like the users: it starts basically from "we don't have logs at all," if we talk about observability in general, to, I don't know, Knative users that have the whole package.
E
I will put a link in the chat; feel free to give me your feedback if you want. Because, well, basically, let's continue: we need a user base for this particular topic, filtration and observability.
B
Yes, yeah, and we also support Azure Service Bus, Azure Storage Queues, Amazon SQS, MSMQ, although I'm not sure.
C
I put a link to an OpenTelemetry issue in this channel. Let me quickly share my screen so I can walk you through it. It's one of the things that actually corresponds to your situation. It was brought up by Liudmila, who works on the Azure SDKs, including the Azure Event Hubs and Service Bus SDKs, and she brought this proposal up here, about having instrumentation layers and having rules for how they interact. I think that is basically the problem that you are facing: you have the different instrumentation layers.
C
You have your layer, you have maybe RabbitMQ or Service Bus or SQS or whatever, and basically those layers can conflict. Her proposal here was related to HTTP client libraries.
C
But I think it applies to your use case as well, and she gives a proposal here of how to basically make sure that only one of those layers adds information, and then the other layers, basically the lower layer, know: okay, this is already taken care of, so I just do nothing, because there's already, I don't know, a messaging span or an HTTP span there.
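The suppression rule in that proposal, where the lower layer notices a messaging span is already in progress and creates nothing, can be sketched with a context-local flag. This is an illustration of the idea only, under the assumption that suppression is signalled through ambient context; it is not the mechanism the OpenTelemetry specification settled on.

```python
import contextvars
from contextlib import contextmanager

# Flag the higher layer sets while its messaging span is active; the
# lower-level client checks it and skips creating a duplicate span.
_messaging_span_active = contextvars.ContextVar("messaging_span_active",
                                                default=False)

created = []  # records which layers actually produced a span

@contextmanager
def messaging_span(name, layer):
    if _messaging_span_active.get():
        yield None  # already instrumented above us: do nothing
        return
    token = _messaging_span_active.set(True)
    created.append((layer, name))
    try:
        yield name
    finally:
        _messaging_span_active.reset(token)

# The middleware wraps the low-level client call, so only the outer
# layer's span survives.
with messaging_span("MyQueue send", layer="middleware"):
    with messaging_span("rabbitmq publish", layer="client"):
        pass
```

Using context rather than a global flag keeps the suppression correct across concurrent sends, since each logical flow carries its own copy of the flag.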
C
Unfortunately, there's been some discussion there, but this proposal has not been merged or implemented, or even agreed on.
C
I think some concrete steps that you could take to move forward with your problem now: maybe you could just add a comment on this OTEP and say, okay, that's of interest for us, we need that. Another thing I think that you could definitely do, and I just know this from the Azure SDKs, is to own the instrumentation yourself and make the instrumentation there.
C
I think in that way you are 100% in control and you can provide a consistent experience to users. Because also, as you said before, Tim: you don't even know if your customers are interested in digging down into the RabbitMQ details when they're actually using your API. It might actually be more confusing to them to see that, and for your users it should be a kind of consistent experience.
C
But that's just from my point of view, the kind of guidance I would give now: take the responsibility on yourself, and probably, I think, for now it's just good enough to not enable any instrumentation on the lower-level library you're using, because, at least for the Azure SDKs, it's not enabled by default, and for others:
C
I don't think it's enabled either. And go with that until the solution to this problem from the OpenTelemetry side is figured out, because currently, I'm afraid, there's no kind of automatic solution to this conflict. But I see Ilia has his hand up again.
E
Yeah, a question: so who's serializing these messages? Like, with RabbitMQ, will you use a third-party provider, or your own?
A
Who calls the APIs, you mean? No.
C
Yes, I think the challenge here is that this functionality that is described in this OTEP is not really there yet, so that suppression would be something that would need to be provided from the OpenTelemetry side. That's why I said that what you can currently do, I think, is just not enable OpenTelemetry instrumentation for these lower-level:
C
Libraries that you are using. Because usually, I think, what we would expect from libraries that are instrumented is that they don't just automatically add spans, but that you have some means to switch that on and off. We have that for the Azure SDKs.
C
I know that it's there, and I guess it's there for most other libraries: you have to explicitly activate this instrumentation, and then they create spans. And I think the short-term solution for you would probably just be to not turn it on, or to turn it off, whatever the default is, and in that way make sure that you are the only layer adding this messaging instrumentation.
A
No problem. So, in that regard: when you're saying, okay, we should basically be making sure that we're not having anything underlying, to avoid conflicts, would you say that in that case we should still basically follow the recommendations that are currently defined in the messaging-specific specs, like the specific tag names, for example?
A
Would it then still make sense for us to follow these defined tags, to set, let's say, the message id, the messaging.message_id, for example, on the spans? Or, even in that case, for us, since we're more in middleware, let's say a little bit higher up, a little bit further away from what's actually happening on the wire:
A
Would it also be viable to say, okay, we're using our own tag names, for example? Or is that something where you'd say that following these specific proposed tag names would be highly recommended?
C
So, this is not very mature yet, but I think the vision is that, and HTTP is a very good example here, because there the HTTP spans already have certain attributes, so http.url, http.status_code, and when the backends see these attributes they know: okay, that's an HTTP span, and that's what those attributes mean. And, I don't know, if the status is five-zero-something, then okay, this is an error, so maybe make this span red.
C
This gives backends a means to interpret spans and provide tailored experiences on top of that. And in that regard, if you want to benefit from these capabilities that might be there in the future, then it might make sense for you to follow the OpenTelemetry-recommended attributes, because it's just the idea, or the vision, that the backends at some point use this semantic information to interpret the span information they get, and then provide a tailored experience on top of that.
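As a concrete example of "recommended attributes a backend can interpret": the messaging semantic conventions around the time of this meeting defined keys such as `messaging.system`, `messaging.destination`, and `messaging.message_id`. Several of these names have been revised in later spec versions, so treat the exact keys below as a snapshot rather than the current convention.

```python
# Attribute keys follow the messaging semantic conventions as written
# around 2022; a backend that recognizes them can render the span as a
# messaging operation rather than a generic internal span.
def producer_span_attributes(destination: str, message_id: str) -> dict:
    return {
        "messaging.system": "rabbitmq",
        "messaging.destination": destination,
        "messaging.destination_kind": "queue",
        "messaging.message_id": message_id,
    }

attrs = producer_span_attributes("orders", "af1c3e")
```

The trade-off discussed above is exactly about these keys: use them and gain backend support, or use a private namespace and avoid clashing with a lower layer that sets the same keys with different values.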
A
Okay, yeah, I think that's a good point for us. I mean, maybe it's a little bit too much detail, but, for example, on the NServiceBus level we typically use a different type of message id than what RabbitMQ would then probably produce on the wire, especially if we take recoverability into account.
A
We felt like it would be wrong for us to set the message id tag if we know that the transport, or something further down, would actually then set a different value; that could be very confusing, potentially, to the user. But if the approach is rather to say, okay, if we are enabled, we explicitly make sure that nothing further down the call hierarchy is enabled, essentially to say, okay, no RabbitMQ client telemetry, for example, then that would probably solve the problem as well.
C
Yeah. I mean, one more question related to your use case: basically, you have your library that the customer uses for producing messages. On the consumer side, is the customer also always using your library, or might it even be mixed? Yeah: your library is used for, I don't know, producing, but on the consumer side there might just be a native RabbitMQ client actually reading and processing the messages.
A
I think the standard case is that they're using our library both on the producer and the consumer side. Since there's a transport, there's always the possibility; there are some customers that do use what we call external integration, which means that they send directly, using the RabbitMQ client SDK, or potentially also consume. I'd argue the consume side is probably less likely; what we typically see is more that they send messages directly and use our library more on the consumer side.
A
That's definitely where we provide more functionality as well. But most of the cases, like the standard use cases that we have, they're using our libraries on both sides, so, at least in the happy cases, making the connections is not a big problem.
C
That makes sense. I was just asking because of the message id, because otherwise you would, on the producer side, have your specific message id, and on the consumer side you would see the RabbitMQ message id, and if you then see that in a trace, that might be confusing. But then, basically, you say: okay, it's anyway the standard that you just put your layer on top of it, across the producer and the consumer side, and then you have the consistent experience there.
D
Regarding the attributes, what I wanted to say is: you could also add your own, right? Because I think, Johannes, we even discussed at some point that we'll have attribute recommendations for specific messaging systems, or for specific abstraction libraries, and then, for example, if yours is a popular library, then you will have:
D
You could add a PR to add your library-specific attributes there, and then users using your library, or your engineers making an instrumentation, just follow those, and then, again, backends will know, because this is a popular library, for example. It is NServiceBus, right? That is your library? Yeah, yeah. So this is a highly popular library, so surely backends will benefit from having your specific thing there and highlighting, like Johannes said, the NServiceBus logo and things like that, and so on.
C
I think with requiring those, it's more that they are "conditionally required," and if you want to be compliant with OpenTelemetry, basically those required ones you have to put on. And message id, for example, is not a required attribute, so if you think that's confusing in your case, you can basically skip it. What's required:
C
Is that you have a messaging system and a destination. And what was just referred to is that here we have these general attributes in the spec, and there are then also messaging-system-specific attributes, like for RabbitMQ, for Kafka, and I think RocketMQ was added at some point. RocketMQ, yes, here. And what you:
C
Could have is an additional section, basically for your library, with the specific attributes that your library adds in addition, and then this information can basically also be used, in this form, by backends to provide a tailored experience for this instrumentation that you provide. It's just that, once your attributes are here, there are pretty strong backwards-compatibility guarantees that are imposed by the conventions.
C
So once your kind of stuff is put here and you change it, basically, this would break your compliance with OpenTelemetry. So it's kind of a fine line to walk, whether you want to have it here or not, because:
A
So there would be an option for us to propose some attributes that are NServiceBus-specific.
C
Yes. So if you have any more questions, reach out in Slack; probably some questions we can answer in Slack, and otherwise we are meeting here every Thursday at 8 a.m., so happy about any participation.
C
All the recordings will be on YouTube. I will put a link in the chat to where you can find the recordings. Okay.
D
Perfect, great, thank you. Okay, it's a bit tricky to find, because the videos don't have a title, so you have to kind of know by the day. I think it takes a day or two for the upload to happen; it all happens automatically.
D
But if you know the day, then you can look at the faces in the thumbnail of the video. It's not ideal, but that's how I do it anyway when I have to look it up. And in Slack, I don't know if you saw, but there is a channel just for messaging; I'll put it as well, so you can direct the questions directly there.
C
Okay, thanks. So I think we can continue with the AWS X-Ray demo and any follow-ups that we had from last time. I'm sorry, I had to drop off last time at nine, and I guess I missed some.
F
No, I think after that we didn't discuss anything; it was immediately a deep dive into what we have. Is it possible for me to share the screen so I can present? Thanks.
F
Yes, okay. So, just to give a quick overview of what we discussed last time: I'm Neras, from the AWS X-Ray team, and we have Alex, also from the AWS X-Ray team. Last time we were discussing how SQS-to-Lambda works internally and how we are planning to generate the links. The questions were mainly around where to generate the links.
F
So this was the architecture, where we have the SQS service, and we have an internal service, the SQS poller, which receives the message from SQS and then invokes the Lambda on the customer's behalf. And Lambda is split into two parts, the Lambda front-end service and the Lambda worker service. So we had two options for generating the links. One was at the SQS poller: when it receives the message and then invokes the Lambda, at that time we can generate the links. The other one was in the customer's code.
F
Customers can actually generate the links by themselves, so we had those two options; but another option that we were discussing was: why not allow both? Just a quick overview of the benefits: the SQS poller is more of a managed approach, where customers will get links by default, and it has better handling of failure scenarios, because if links are generated in the customer's code, then, because Lambda has constraints on resources, in case it fails the links will also not be generated; but the SQS poller service, like many services, can handle those scenarios in a better way. So those are the benefits, and it's easy to find the siblings in the SQS poller approach. But in the Lambda worker approach, one big benefit is that if customers are processing the messages one by one, they can directly link a message-processing span to the message span.
F
So, in that case, if they want to find the end-to-end latency of a particular message, it's very easy to follow the path. In the case being highlighted: one SQS message was put into S3, another message was put into DynamoDB, but they were processed by the same invocation, the same Lambda function invocation. So if we go with this SQS poller approach, we don't know, because the links are at the front-end side:
F
We don't know which links to follow, which path to follow, to find out how long the first message took, because the first message is going into S3, so we need to go and traverse the S3 path. So we actually lost that information. But if we have the links at the Lambda worker side, at the processing span directly, then we already know that this SQS message is linking to a particular "process message one" span, and that's the path we need to follow.
F
Linking where our SQS service generates the links in a managed way, but allowing customers to generate them their own way as well. So one confusion that we were discussing was: if we have, suppose, vendors such as the X-Ray service, and they need to find the end-to-end latency, which links should they follow? Because message one is linked to the Lambda front end, and it's linked to the "process message one" span as well.
F
So that's where we left off, and, if I can quickly go to the agreement: after that we just dived deep, and there was no kind of concrete decision on that one. But, since you were there, correct me if I'm wrong, please. So the idea was to, by default, generate the links from the SQS poller, so that customers get that experience without making any changes, and that actually kind of aligns with the OTel:
F
Spec's kind of guidance as well: give some default experience to the customers, but allow the customization the customer wants. So that's why I think the recommendation was to still generate the links from the SQS poller side, and, if you are allowing customers to generate the links at the Lambda worker, then this is something of a requirement for the OTel specs: how to distinguish between those two links, which are actually doing the same thing.
C
I mean, some remarks to that. First, just to get this out of the way: can you go two slides back, to the one comparing the first two approaches? Yeah. Basically, in the first case, the information about which message is put into S3 and which into DynamoDB is lost; that is true.
C
Basically, with the combination you have all the information there, and I think the challenge is then just to know which link to follow to get what information. And I think currently there are two answers to this. One possibility is looking at the span kinds of the spans that are linked together, because in our... maybe you can go to the GitHub document for this.
C
I think I saw you had it open before, with this one, yes. Because here, when we look at it, in this case we see that there's a publish span and a receive span, and here the publish span will always be of kind producer and the receive span will always be of kind consumer. So these span kinds:
C
Give you kind of a hint about the semantic meaning of the link. And now, in your example, basically the poller span, the span that is the poller, would be of span kind consumer, and I'm actually not sure what kind of spans you would have in your Lambda functions.
C
But, for example, if in your Lambda functions the span kind of the spans there is not consumer but something different, that could already give you a hint of what the link means. So when the link goes from producer to consumer, it is basically this kind of link that shows you the batch, and if the producer link goes to something that's not a consumer, then basically that can help you to get the end-to-end latency.
C
However, that's a first answer, and I'm not sure how stably that can work in real-world use cases. And the second possibility here is that OpenTelemetry actually allows you to put attributes on links.
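A sketch of this second possibility, attributes on links, follows. The `aws.xray.link_type` key is hypothetical, chosen only to illustrate a vendor-scoped name; it is not an attribute the X-Ray team actually registered, and the `Link` class is a simplified stand-in rather than the OpenTelemetry API.

```python
from dataclasses import dataclass, field

@dataclass
class Link:
    """Simplified stand-in for a span link carrying attributes."""
    trace_id: str
    span_id: str
    attributes: dict = field(default_factory=dict)

# Hypothetical vendor-scoped attribute marking the worker-side link.
worker_link = Link(
    trace_id="4bf92f3577b34da6a3ce929d0e0e4736",
    span_id="00f067aa0ba902b7",
    attributes={"aws.xray.link_type": "lambda-worker"},
)
# The poller-side link to the same trace carries no such marker.
poller_link = Link(trace_id=worker_link.trace_id, span_id="b7ad6b7169203331")

def is_worker_link(link: Link) -> bool:
    # A backend that understands the attribute can pick the worker-side
    # link for end-to-end latency; other backends can simply ignore it.
    return link.attributes.get("aws.xray.link_type") == "lambda-worker"
```

Scoping the key under a vendor prefix is what makes it safe for other backends to treat it as opaque metadata, which is the concern raised in the next exchange.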
F
So it's like adding some specific attributes to the Lambda worker links, to say that this is a kind of special Lambda worker link, and to treat it in a different way. But, I agree: do you feel we can go with that approach, or do you see, in the future, in terms of extensibility, any concerns with that? Because one thing I can think of is: this is specific to X-Ray, and suppose customers are sending the same data to other vendors.
F
The attributes will be there, but will they create confusion for them, or can they ignore them, saying "I don't know the meaning of these attributes, I'll just keep them as metadata"?
C
I think if AWS X-Ray then uses these attributes to do something specific, for example to calculate this end-to-end duration, then as long as the attribute name makes it clear that the attribute is specific to AWS, that should not be a problem. We actually already have that: there is already a set of semantic conventions specific to AWS.
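A backend-side check for such a namespaced attribute could look like the sketch below. The key `aws.xray.link.type` and its value are hypothetical examples, not official AWS semantic conventions; the point is only that a clearly namespaced key lets other backends ignore it safely.

```python
# Sketch: a backend that understands the (hypothetical) "aws.xray.*"
# namespace uses the attribute to pick the latency link; any other
# backend just sees an opaque key-value pair and ignores it.

def is_xray_latency_link(link_attributes):
    # Hypothetical attribute name, not an official AWS convention.
    return link_attributes.get("aws.xray.link.type") == "lambda-poller"

print(is_xray_latency_link({"aws.xray.link.type": "lambda-poller"}))  # True
# A backend that does not know the namespace treats it as plain metadata:
print(is_xray_latency_link({"com.example.other": "x"}))               # False
```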
F
Got it. So if I just recap what we discussed: we generate links from the SQS poller as well, and if customers want to generate the links from their own code, we should allow that. And because in the OTel libraries there is a specific instrumentation library for SQS-to-Lambda, in that library we can put these special attributes, scoped to aws.xray, that we can use specifically to give our backend the hint that this is a special link and to treat it as such. Got it, thanks. So I'm just moving to the next topic.
D
Well, I just wanted to ask, and I don't want to take the time, because I missed last week's meeting. But the problem as I understand it is that you want to add a link in this poller, and then there could also be a possibility to add another link, and you don't want to have both of them, right? That is the problem, right?
C
Yeah, I think they want to have both links, but then the question is which link. I mean, from the producer side you see two links, and you then have to know which link actually goes to the processing of the message; that is the relevant one.
D
Right, but if I got it right, you want to know this because you need to calculate some latency, and this is specific to X-Ray.
D
Okay, and do you want to measure the latency from when it reaches this poller service, or do you want to calculate the latency from when it reaches the Lambda function? Because if you always want to calculate the latency from when it reaches the poller, then the attributes, or even the span name, for example, you could use in your logic for how to calculate this.
D
You could use this to find out, for example: if you know that you always want to calculate the latency based on the link that happens in the poller, then you can do pretty much anything you want in order to find which one to take, and that could be the span name,
D
for example, or it could be the attribute, like Johannes said. But I think if you can answer which one of these two links you can use to calculate the latency, then you should be good to go without any OTel changes. The problem happens if you say no, I don't know which one to use, because sometimes I want to calculate from the Lambda; then it's a bigger problem.
D
But I still feel that even if you have two of them, you might know in your logic: if I have two of them, I want to calculate with the one from the Lambda, because it's the more accurate one, so I take that one; otherwise, if I don't have that one, I take the other one. You could use some sort of fallback logic like that. That is what I would imagine doing, for example.
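The fallback D describes, prefer the more accurate link when it exists, otherwise fall back to the poller link, could be sketched like this. Plain Python; the `source` labels and dictionary shape are hypothetical stand-ins for real link metadata.

```python
# Sketch of backend fallback logic: prefer the per-message link created in
# the Lambda processing span, fall back to the poller link, else give up.
# The "source" labels are hypothetical, not spec-defined.

def pick_latency_link(links):
    by_source = {link.get("source"): link for link in links}
    return by_source.get("lambda-processing") or by_source.get("poller")

links = [
    {"source": "poller", "span_id": "aaa"},
    {"source": "lambda-processing", "span_id": "bbb"},
]
print(pick_latency_link(links)["span_id"])      # bbb (more accurate link wins)
print(pick_latency_link(links[:1])["span_id"])  # aaa (fallback to the poller)
```

As C points out next, this only works if the fallback link actually carries enough information to compute the latency, which is not the case in the batch scenario.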
C
I think the problem is, if you don't have this link, for example from SQS directly to this processing span, and you only have SQS to the Lambda frontend, then I think you actually cannot calculate the latency with that link.
C
I
think
that's
not
a
fallback
solution,
because
you
have
squeeze
to
land
the
front
end
and
then
this
lambda
front
then
we'd
have
like
two
children,
spans
processing,
two
processing
spans,
and
I
think
then
you
cannot
really
know
which
kind
of
processing
is
related
to
this
message
that
I'm
that
I'm,
that
that
I
want
to
see
the
latency
for
because
I
think
they
only
want
to
see
the
latency
for
one
message,
not
for
all
the
messages
in
the
badge.
D
If it's a single message, not a batch, then the link will work, right? Then you can do pretty much what you want. But the problem is the middle case, the one that I didn't get. Okay, the one that goes from SQS to this Lambda frontend and then goes to the Lambda function, and then the Lambda function sends it to some processing.
D
Okay,
then
you
don't
know,
then
you
don't
know
anymore,
which
which
but
isn't
the
processing
still
also
getting
the
message.
And
then
the
message
has
the
context.
So
you
know
which
which
link
it
is
like
in
the
end,
you're
processing,
a
message
right.
It
goes
through
this
hops,
lambda
front,
end
and
so
on.
But
in
the
end
the
mat,
the
processing
circle
there
is
processing
one
message
right.
F
Yeah,
so
it's
like
we
are
assuming
this
lambda
function
actually
is
processing
the
messages,
but
it's
processing
in
a
in
a
one
by
one.
So
each
message:
that's
process
we
like
is
creating
a
new
spam,
so
processing
message.
One
and
processing
measures
too
like
we
can
just
it
already
knows,
which
message
is
processing,
so
it
can
add
instead
of
the
link
attributes.
Maybe
it
can
add
some
other
attribution
to
the
span
directly
that
we
can
also
utilize
to
say
that
okay,
this
is
yeah.
That's
like
another
way
to
look
into
like.
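The per-message approach F suggests could be sketched as follows. The attribute key `messaging.message.id` is taken from the OpenTelemetry messaging semantic conventions; the span-recording machinery here is plain Python standing in for the SDK, and the message shape is hypothetical.

```python
# Sketch: a Lambda processing a batch one message at a time creates one
# processing span per message, carrying the message id as a span attribute,
# so the backend can correlate latency per message without link attributes.

def process_batch(messages, record_span):
    for msg in messages:
        # One span per message, attributed with the id being processed.
        record_span({
            "name": "process " + msg["queue"],
            "attributes": {"messaging.message.id": msg["id"]},
        })

recorded = []
process_batch(
    [{"queue": "orders", "id": "m-1"}, {"queue": "orders", "id": "m-2"}],
    recorded.append,
)
print([s["attributes"]["messaging.message.id"] for s in recorded])  # ['m-1', 'm-2']
```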
C
I'm sorry, we are out of time. Unfortunately, I have to leave again today; I have a hard stop at nine. If there are remaining open questions, I will put this on top of the agenda for next time.