From YouTube: 2022-02-10 meeting
C
Give it another look to make sure that everything is properly covered and that we have a shared understanding of what's there. So, going through the open discussion items: what I updated there is the part on context propagation. Basically, there are the basics that we agreed on: we have this one context attached to the message, which is immutable, and then we use it for linking — this is the one we use to link producer and consumer, post version one.
C
We will work out details for the other protocols post version one, so that we have stable specs for how to attach this context, and also for how we deal with the transport context that can be used, and changed, by intermediaries. So that's supposed to be post version one.
C
We decided we can postpone that if we want, because we had some prototypes that showed that what we have defined in the basics is readily extensible with whatever transport-context recommendations or specifications we come up with.
B
So attaching that to a message is up to the application to determine how?
C
I think with version one this will still be up to the application, okay? Because — we could give links to the draft specs that we have, but I think for version one we will not be able to have stable AMQP and MQTT specifications there. So I think, yeah, for version one it will be mostly up to the application to decide how to do that.
C
But that will be one of the first items we'll focus on post version one, I think. And also, yeah, intermediary instrumentation is post version one — we already talked about that before. And for producer and consumer instrumentations: I updated the PR with the OTEP, with this distinction we make for consumers between push and pull. We will talk about that today — you prepared something — and we will also kick off talking about producer instrumentation today. We didn't talk about that yet at all; it seems pretty simple.
C
There's just, like, a create span — some context created there that we link to. But let's see — Amir did some prototyping there — and yeah, let's kick that off, and then hopefully we can add a section on producer instrumentation to the OTEP. Then we basically have all of that going, and maybe in two or three weeks we can actually switch to attribute discussions.
C
So with that, I will hand over to you, Amir.
D
So this week I started working on the RabbitMQ — the AMQP — instrumentation in OpenTelemetry JS.
D
So what I tried to do is take everything that we've been discussing in the last few months, implement it, and see if it makes sense, if it's feasible, and how the traces that are being generated look. So I attached the link to a PR with all the changes, and—
C
So I think we should — we could — we should call it RabbitMQ and not AMQP.
E
Right, but that's not — since we're trying to create a standard here, we need to go and rely on what the standards are. And the situation is that there is an AMQP draft, 0.9, that RabbitMQ relies on, which is no longer current, and then there's a standard called AMQP 1.0, which most other brokers rely on, and that is AMQP.
C
I think we should, just in this group — when we talk about the AMQP that's used by RabbitMQ, we should refer to it as RabbitMQ. I mean, it's different maybe when you talk to customers, but in this group, just to make clear that the AMQP that RabbitMQ is using is something very different from what is currently in the standard.
D
Okay, so anyway, this is the PR, and you can see the changes if you want to, and you can also run it and look at the traces that are exported from the examples in the repo. And I also summarized — there's a list of changes which I made to the instrumentation. The current instrumentation is based on the current specification, and what I did is document every change I made according to what we've been discussing.
D
So I'm not sure it's a good idea to enumerate all the changes, but I have the changes here, and I have many open questions — implementation details and specification issues that I came across as I was implementing it — and these are things I would really be happy to discuss. But I think maybe we should start with the producing examples.
D
Okay, cool. So, regarding the producer: it's much simpler than the consumer, as you probably know. What I tried to do is take a few libraries and have an example of how they're used to produce messages, and I wrote some notes about important details in these examples, so I'll just browse over them really quickly.
D
Just so we have an idea: this is an example for SQS. In SQS you call sendMessage and give it a queue URL and the message body. This is an example for a single message, and then you can specify a scheduled message or a delayed message. This is the promise example, and there's also a callback alternative to this syntax. And — by the way, can you see it? Is it large enough?
D
Is it better now? Okay. And in SQS you can also send a batch of messages. For example, here, in a single RPC operation we send multiple messages, and then the response we get back from this RPC contains a status code for each message. So if we send two messages, one of them can fail and one of them can succeed, which is important — I'll get to it later. So this is the summary; SNS is also very similar.
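The batch behavior just described can be sketched without the AWS SDK. The field names (`Id`, `MessageBody`, `Successful`, `Failed`) mirror the shapes used by SQS `SendMessageBatch`, but the helper functions are hypothetical:

```javascript
// Sketch of SQS-style batch semantics (shapes mirror SendMessageBatch; the
// helpers are illustrative, not AWS SDK calls): one RPC carries many
// entries, and the response reports success or failure per entry Id.

function buildBatchEntries(bodies) {
  return bodies.map((body, i) => ({ Id: String(i + 1), MessageBody: body }));
}

// Partition a batch response into per-message outcomes keyed by entry Id,
// since one entry can fail while another in the same RPC succeeds.
function outcomesById(response) {
  const out = {};
  for (const ok of response.Successful || []) out[ok.Id] = { ok: true };
  for (const err of response.Failed || []) out[err.Id] = { ok: false, code: err.Code };
  return out;
}
```

An instrumentation would use such a mapping to decide, per message, which status to record rather than marking the whole batch call failed.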
D
So in SNS we can have both low-cardinality and high-cardinality destinations — this is something important to note — and it also supports batch publish, very similar to SQS: you can just give it a batch of messages in a single call. So this is SNS. And then RabbitMQ — it has different models.
D
You can either send to a queue, which looks like this — you just call sendToQueue and give it a queue name — and the interesting thing about RabbitMQ—
D
These are important details, in my opinion. So you can either send to a queue, or you can send to something called a fanout exchange. The fanout exchange just sends the message to the exchange, and everyone subscribed to the exchange receives the message. So you would call channel.publish with an exchange name and no routing key, because everyone gets it — you just send it.
D
This is the second option, and the third option is something they call the direct exchange. In a direct exchange you give it a routing key. For example, say the exchange name is "logs" — this is all taken from their examples, by the way — so the exchange name is "logs", and then you say: within this exchange, I want to send a message to "info". Consumers can then register to get messages from "info", and possibly other routing keys. So the publisher just publishes to this exchange with the routing key, and the consumers consume from it. And the last example is something you're familiar with — a topic exchange.
D
So you can send to something like this — this is a topic name — and consumers can specify which topics they want to receive with, for example, an asterisk: they have a pattern for which messages they want to receive. And there's also an extension—
D
It's not the default behavior, but I think it's an extension that's supported in all clients; it's called a confirm channel. If you're using a confirm channel, then when you publish a message you get an ack back from the broker saying that it received the message and persisted it, so you know that the message was sent correctly. And something interesting about RabbitMQ is that the API is always for a single message, so you can't send a batch.
D
You have to send one by one, and for me the question is: do we need a create span in these cases? Because it's redundant — we can do it in a single span, I believe, which will make the traces smaller. But yeah, I'll get to it soon. So when you publish a message to RabbitMQ, you always publish it to an exchange, but you don't always know what the exchange type is.
D
So, for example, you can publish a message to an exchange which is a fanout, which will send it to all consumers, or to a topic exchange or a direct exchange, but the instrumentation has no way of always knowing which type the exchange is and how the message is going to be routed. And the exchange name is, I believe — if I'm not mistaken — expected to have low cardinality, but the routing key, which is the name of the queue or the name of the topic, can have low cardinality—
D
—but if it's a topic, it can contain IDs and any string, so it can have high cardinality. And by default messages are not acked; you have to specifically opt in to get the acks from the broker. Yeah, so this is RabbitMQ. The last examples I want to show you are from Kafka. Kafka is much more complicated, in my opinion. So in Kafka you can do something like this — this is the most simple example.
D
You just create a producer, connect to it, and send a message. In Kafka you can't send a single message — the API is always for a batch of messages. You can send a batch of a single message, but it's always a batch. So here we have a send for a topic and an array of messages, and then we get back a response saying to which partition each message was sent—
D
—if there is an error code, you get it, and you get the offsets for the messages. So this is an example for Kafka, and actually it gets very complicated. For example, here is an example where you send to a single topic, but multiple messages, and you can also, in a single call, send to multiple partitions. Where is it — there's also, in the API, something they call sendBatch, I think, where in a single call you can send to multiple topics, each topic can have multiple messages, and each of those messages can go to a different partition. Yeah, Clemens, I see you. Yes, so there is—
E
This is something that the client resolves. You're handing this batch to the client, but the client will resolve it, and what happens on the wire is actually that the client maintains direct relationships with the brokers that own those partitions and then sends to them separately. So if you look at this through the lens of the API, as you're doing with Rabbit, then you're looking at the convenience of the APIs as they stand right now. But if you are building a standard that tries to be a bit more stable, then of course it's a bit risky to build on the current state of what the client does, rather than creating rules around how this materializes on the wire. And so — so that's, that's—
E
So one of the observations is basically: you're looking at this producing to multiple topics, and that's a convenience abstraction that's being provided by the client for you, but the reality is that you have two connections. The way you communicate with Kafka is: you walk up to the root broker, basically, you ask which partitions exist and where they're located, and then you create connections to those brokers which own the respective partitions. Other brokers do this completely differently. So there is, for instance — to name one:
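What Clemens describes — the client resolving the batch into per-partition groups before talking to the owning brokers — can be sketched roughly as follows. Real Kafka clients use murmur2 hashing and live broker metadata; the hash and grouping below are simplified stand-ins for illustration:

```javascript
// Simplified sketch of what a Kafka client does with a batch before it hits
// the wire: pick a partition per message, then group messages by partition
// so each group can go to the broker that owns that partition.
// NOTE: real clients use murmur2 on the key; this hash is a stand-in.

function pickPartition(key, numPartitions) {
  if (key == null) return 0; // real clients round-robin or use sticky partitioning here
  let hash = 0;
  for (const ch of String(key)) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple deterministic hash
  }
  return hash % numPartitions;
}

function groupByPartition(messages, numPartitions) {
  const groups = new Map();
  for (const msg of messages) {
    const partition = pickPartition(msg.key, numPartitions);
    if (!groups.has(partition)) groups.set(partition, []);
    groups.get(partition).push(msg);
  }
  return groups; // Map: partition -> messages bound for that partition's broker
}
```

The point of the sketch is that one API-level `send` can fan out into several wire-level requests, which is exactly the API-versus-wire tension discussed here.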
E
If you don't provide any partitioning hint, then the broker will just go and choose a partition for you. Apache Pulsar does this in a different way, and then CNCF Pravega will do this in yet another way. For RabbitMQ there's the topology you just called out.
E
If you look at IBM MQ, if you look at TIBCO EMS, if you look at Azure Service Bus, if you look at SQS, you will find that all of those brokers have a slightly different topology model. So one of the challenges is that trying to adapt to those various topology models is problematic for standardization, because, first, you have to chase them, and then, b, there's so much variability that it's hard to accommodate all of them.
E
So it's safer, and less work, to really look at the edges and say: okay, there's an interface where I can go and submit messages, and there's an interface where I can go and retrieve messages, and then there's a problem in the middle, which is the broker, and it has a thousand different ways of dispatching and duplicating and creating copies of all those things. But it's going to look different in each broker.
E
So the question is whether this group wants to take a white-box view and look at all the brokers in detail and how the topologies work, or whether you want to take a black-box view and simply say: we want to have end-to-end tracing through those messaging infrastructures in a reasonable way, but we don't necessarily want to look inside of them, and we want to hand those broker vendors the contexts that they can then go and use, if they want to, to do tracing inside of their stuff.
C
Yeah, I also want to echo what was just said. Definitely, we here want to go for the approach where we say: okay, the customer — basically, users — can trace their producing and consuming operations, and we kind of help them to connect those, and then we hand over the necessary information to the broker, so that the broker can also hook into this tracing operation. I mean, this hooking into brokers will be post—
C
We will work on this post version 1.0, but it is definitely this second approach we will go with — with some caveats, of course. Because, I think, for me — also when looking at the examples that Amir gives — the main question here, on the producer side, is: what's the operation that we are going to trace?
E
Let me add one more thing before I proceed. What Amir did here is super valuable work, because it surfaces the complexity of that space. Right, it's like—
E
And it's going to be very costly. So stopping at the interface and putting context into the message — it goes in, it comes out — and just leaving it there is certainly a v1, you know, minimal viable thing. That seems to be the best approach, and then, with that baseline, you can go into RabbitMQ, and you can go into Kafka, and you can go into the AMQP brokers and say: okay, so now we're getting this information delivered.
C
That is very generic. So this basically means we need to define this operation that we trace — define it for all of them, to cover all those scenarios. We will then probably define some generic attributes that we put on this span operation, and of course then there will be — that's also already in the existing draft — product-specific attributes that apply only to one product. And for those product-specific attributes—
C
I think some of the scenarios that Amir worked out here can then provide the basis for that. But I think what we first need to figure out is: what general operation can we trace in all those cases that we've seen, and, yeah, what context do we propagate?
D
Yeah, so I'll answer the things as we discuss them. First of all, of course, these are a few examples, but at least for Node.js and a few other programming languages they're very, very popular — Kafka and Rabbit and SQS — at least in my experience, which could be wrong. But this is the way I see it.
D
Most of the backend systems in Node.js are using those three messaging systems. I'm sure there are more, and I'll be very, very happy if people want to add more examples and share other patterns in other messaging systems.
D
But what I'm showing here is just an example, which I think can show a lot of the common patterns for using the messaging systems. So this is one thing. Another thing is that when we discuss the specification, we should ask ourselves: who are the users who are going to consume these traces, this telemetry data? I think there can be many consumers, but, at least in my view, the consumer is the programmer — the people that wrote the software.
D
They want to understand the software, and these people usually don't really care about the low-level networking stuff — they might, but most of the time they care about the API that they see in their services. So if they're calling a sendBatch on a client or a package, this is what they see; this is what they want to trace.
D
This is what they called, and I think this is where the span should be created, even though under the hood it opens a few connections and communicates with different brokers. That is valuable information as well, but I think the most valuable information is the API that the package exposes to the developer.
E
So there needs to be a rule for how those tracing mechanisms materialize on the wire interface, so that all APIs that are being built follow those rules. There's also a problem with language diversity, right? You can home in on Node.js and you can hone in on Java, but in reality there's Python and Go and C# and all kinds of different languages.
E
Many of them are used, and there are niche languages for all kinds of different things — ML and whatever — all of which are using messaging to some degree, and which create traceable stuff that someone will want to go and diagnose at the end.
E
So all of those need to follow the same rules, and attaching those rules to an API surface area means that you now need to create rules for all of those SDKs, including the ones that haven't been written yet, and you also need to make sure that someone who writes the next cool programming model for a particular broker also follows those rules. And the only way to force that is to force it into the wire protocol.
E
And that's why — for Kafka, for instance, or for RabbitMQ — it will be required for the RabbitMQ team or the Apache Kafka project to go in and agree to a binding for how those trace elements are carried. What we've done with W3C, for instance, is to have a binding for AMQP 1.0 and for MQTT for how that actually materializes on the wire, because I think that's the only sustainable way of making this work: to force a convention on everybody who's—
D
Yeah, so I agree with you, but I think at least my reality shows — I work for Aspecto, which is a tracing vendor, and we have clients, and those clients have needs. They ask for things, and I hear what they ask for, and I try to do my best to give them what they want, yeah — and what they want—
D
So this is the need, from what I have been observing, and I agree with you that it's much more complex and diverse, and that tracing the low-level networking APIs is a lot more general, easy to do, and stable.
D
So I agree with you about that, but this is what I see from real usage, in reality. And — I can see someone wants to say something.
E
And so I realize that, and that is very important. Still, one of the things that this effort — OpenTelemetry overall — should have is to force infrastructure vendors into following conventions, creating the customer pressure for them to follow conventions.
E
Having said that, what seems premature is to just stay at the API level instead of going to the RabbitMQ project and the Kafka project and saying: okay, we need this from you as a requirement.
F
I appreciate it. So I don't think you two disagree, honestly — I think both are important. Let's say we have multiple network calls, as in this RabbitMQ example, and they are sent to multiple topics, right? Having them appear out of nowhere in the application would not be beneficial, so something to group them for users and to show: okay, I called this, and internally it did all this stuff I had no idea about — that's useful. And for us, at least in Azure, it is the case that the convention—
F
—that says "instrument public APIs" is descriptive enough. So, yeah, I see how it can go wrong with the whole variety of different cases that exist, but I think we can at least start.
A
Yeah — having been the person in charge of doing a road show for Solace's forthcoming OpenTelemetry implementation, I can tell you that there's been a lot of whiplash on phone calls, with people saying: yes, yes, yes, bring it, give it to us sooner.
A
For the simple reason that, you know, brokers to this point are black boxes — and I mean people from IKEA to major financial institutions have all said: we need this in particular, because not everything can be instrumented at the application level. At this point a lot of our clients use iPaaS products like Mule and Boomi, and especially Boomi — you know, they don't have OpenTelemetry, so the only option is a lower level, for the infrastructure.
A
So our perspective is that instrumenting the infrastructure is a huge deal for a lot of people, for a lot of different reasons.
B
One thing I was going to add to what Jessie said there, too, is that I think maybe the demand from tracing vendors isn't quite there at the lower level because there isn't much done yet at the lower level — it's chicken-and-egg, sort of. And so I agree with Clemens that the internal brokers are all different too; trying to say how the broker should do it is too much, and we should stop at the edge there and let vendors do it. Makes sense.
D
Yeah, so I agree with everything that was said, and I think there is room for applicative API instrumentation as well as networking and broker instrumentation. They all have a lot of value.
C
One moment, Amir — I will just jump in here and quickly share something, maybe trying to answer all the previous points that we had. In the beginning we came up with this model here — if you remember, we had the stages — and I think it might be a—
C
When we provide tracing for messaging systems, we also want to model those stages and how they hook together. We did this, for example, when we worked on the consumer instrumentation: we basically have the receive operation, more or less, or the deliver operation, which is basically passing messages from receive to process, for which we create spans. And maybe it's more helpful, when we talk, to try to focus on this initial model that we worked out.
C
Also when we talk about which spans we create, and what stages or operations we trace — because I think this should be a generic model that we have here, and each API is more or less an instantiation of this generic messaging model. And if what we come up with here covers this generic model, then we should also be able to cover the instances — the APIs that are out there.
C
I mean, there will probably be some gray areas, but I think it holds for most cases.
D
So my question is: this model has a publish. So if the user is calling a publish — one API call, publish — and then, under the hood, the client communicates with two brokers and has two different messages being sent to two different brokers, do we consider it a single publish or two publishes? My question is: does this "publish" refer to the API, or does it refer to the networking? Does it refer to the application API or to the broker API?
F
Just a quick remark: if we consider the publish to be the process of publishing a batch to the intermediary, then potentially there would be one HTTP call — no, one transport call, right — to publish them. So publish is, anyway, some sort of logical construct on top of transport.
D
Regarding those issues, I want to share my experience with the current specification. I wrote messaging instrumentations for SQS, Rabbit, and Kafka in Node. When I tried to write those instrumentations against the current spec, it was a very hard thing to do, because the spec didn't match the real problems that I encountered.
D
So what I had to do — and I've seen it in many other instrumentations — is improvise: say, okay, I can't do it according to the spec, I have to do it some other way. I think many people are having this experience, and the end result is that we have so much diversity — in each language and each client, people make up their own interpretation of the very high-level spec.
D
And that brings me to my other role, of writing the backend that receives those spans and traces and tries to ingest and process them in a general way. Currently it's a very frustrating experience, because I write the logic, and then I receive something from Ruby and it behaves completely differently, and then something from Java — and suddenly some of the attributes are interpreted in other ways, or the trace structure is different, or the duration measures something other than what I'm expecting.
D
So it's a really, really difficult job currently, because the specification is so general and doesn't tell you what to do for the real-world problems — it's nearly impossible. In my backend I have a list: if it's Java and SQS, do this; if it's Rabbit and Ruby, do this; if it's Kafka and Python, then this is the way you need to handle it. And I really hate this logic.
D
My dream is that the specification will be written in a way that I can get a Kafka span, no matter from where, and use the same logic to process it and show it in the UI, and be certain that the attributes and the structure and everything mean the same in all languages. The current specification is really bad at this and makes my life really hard. So this is my motivation. We have a lot of open questions here.
D
Other people may see it in other ways, but this is my take on it. And regarding this — we're running out of time, but I just want to finish the document I started. I wrote down a few questions. We talked about this create span, and we talked about it covering only the creation of the message. But, as we can see in these examples, most of the examples ack the sending of the message: the broker returns a success indication, and it can also return an error indication — for example, if the message is too long or something is invalid — and we need to understand where we report this ack indication on the trace. I think the most suitable place to do it is per message, saying: did we succeed in sending it or not — like, we sent two messages, one of them was successful and one of them failed. So we want to have this data somewhere, and I think the most natural place to put it is on this single-message span. And then the name "create" might be a bit confusing, because we have to wait until we get the ack to close this span with the right status, and then I'm not sure it's a create span — it's something else, I'm not sure what. But this is one issue that I think we should discuss.
D
Yes, so I can only use my examples to show it. So here I send a batch to SQS and I get back a response, and for each message I write an id — for example, one and two — and then this response contains an array that says: for message one, it was successful, and message two failed. This is for SQS, but it also applies to Rabbit: in Rabbit, if you're using a confirm channel—
D
So we have to decide where we store this data, and I think the proper place to store it is on the per-message span, yeah.
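One possible shape for this, sketched with a toy span object standing in for a real tracing SDK (the field names are ours, not from the spec): keep a per-message span open until the batch response arrives, then close each span with the status reported for its entry:

```javascript
// Hypothetical sketch of the point under discussion: record the broker's
// per-message ack/nack on a per-message span, kept open until the ack
// arrives. The tiny span object stands in for a real tracing SDK span.

function startMessageSpan(messageId) {
  return { messageId, status: 'UNSET', ended: false };
}

// Close each per-message span with the status the batch response reported
// for its Id; one entry in the same RPC may fail while another succeeds.
function endSpansWithBatchResult(spans, result) {
  const failed = new Set((result.Failed || []).map(f => f.Id));
  for (const span of spans) {
    span.status = failed.has(span.messageId) ? 'ERROR' : 'OK';
    span.ended = true;
  }
  return spans;
}
```

This also illustrates why the name "create" fits awkwardly: the span's end time is the ack, not the moment the message object was built.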
D
My other question: we talked about this create span, but I think we need to decide whether this create span belongs to the instrumentation library, or whether it's something the user should decorate their code to create — I'm not sure about it. And another thing I think is important to talk about is the duration. My experience from OpenTelemetry is that the instrumentation libraries create spans, and those spans have a duration, but there are no semantics: you get back a duration and you don't know what it means. Does it mean, like, when I sent a message—
D
So it's currently very confusing, at least to me, and we're very low on time, so we can talk about it more some other time. The last thing that I think we should discuss in this scope is — I talked about it before — the cardinality, which is very important for backends, because attributes are used for aggregations and for display, and apparently the specification says nothing about it. It talks about the cardinality of span names, but, for example, the destination has no cardinality specification, so instrumentations can put high-cardinality or low-cardinality values there, and then it's not very useful if it's not specified. Also the destination type, which currently can only be set to topic or queue — but I think in the real world we have more options than this; it's not only—
E
So, on that one — I don't think the client gets to choose, because it doesn't know. In many brokers, whether you send to a queue or to a topic, and whether there's a single receiver or multiple receivers, is something the client doesn't know — and, actually, by design should not know. If you have a pub/sub system, whether there's even a single party interested in those messages is very often not a concern of the client.
D
So the current specification requires this attribute, and it creates a situation where you have to write something there, but it's not correct, and I don't like it at—
E
That's how those things bleed in, but it's not useful to have that information, as you point out, because there's a topology model bleeding into the client — you are now creating coupling in the client that is actually in the way of you reshaping the topology on the broker side of it.
D
Yeah, so that's a small detail about this attribute where we need to decide what we do with it. I agree with you that most of the time it will not be accurate, and I'm not sure we should report it.
D
We should at least give it a thought, decide what we want to do with it, and make sure it makes sense in real-world scenarios. Then there are extra routing attributes — if you look at the current specification, there are extensions for Rabbit and Kafka and things like this — so we need to review those and decide what we want to do with those extra attributes. Scheduled and delayed messages are very common, and the current specification doesn't have attributes for them at all; I believe there should be. And then there are also the delivery modes, or quality of service, which are very common, and I believe they deserve to be described in the specification in some way. And we only have a few minutes.
D
So I'm going to stop and hear your last words on this, for this week.
C
My last words: thanks a lot, Amir — that was great; it sparked a lively discussion and showed us the scope of the problem we have to tackle. I think one of the first things we have to figure out — again, as we also talked about at length on the consumer side — is context propagation: which context do we attach to each message, and what meaning does this context have? Does each message have a unique context?
C
I think that is the first thing for us to figure out, because it also feeds a bit into what Clemens was talking about with the wire protocol: that context basically goes on the wire and sticks on the message, and it has to be very clear what this context on the message means. That's the first fundamental thing we have to figure out — basically, what does this context mean?
C
What operation is this context related to, and what does that operation cover? And I think, with this discussion, we will cover many of the points that you brought up there regarding the create-versus-publish distinction: what is covered by the create operation, and what is covered by the publish operation. And the details of the attributes—
C
I would like to defer those to the later attributes discussion. So I think we should really focus on the semantics of the context that we attach to the message, and on the operation that this context refers to, and I think that will help us straighten out many of the big open questions here. And then, as I said, there are lots of detailed discussions on the attributes, where we need to separate out what's broker-specific — but I would like to delay that until we talk about attributes later. But, all in all, thanks a lot, Amir.