From YouTube: 2021-12-02 meeting
C
Yes, the big topic is telemetry in general, but, as you saw, my main concern is AQ in the database, the messaging engine that we have in the database. I'll make sure that's up to snuff.
A
For starting out, I just put the links here to two PRs. The first one is a PR that Tigran put up, and that kind of concerns us. Basically, this PR defines stability requirements and guarantees for semantic conventions.

A
So it also affects us in a way, because it tells us what we can change after the first stable version of the semantic conventions is released. Basically, it's not much: OpenTelemetry aims at giving pretty strong stability guarantees once something is called stable.

A
So if we have time, maybe just have a look at this PR to get an idea of the restrictions that we are going to face once we declare something stable. I mean restrictions for us, but stability guarantees for the customer.
A
So, to put it in a more positive way: the second link here is just a very first draft OTEP that I put up. All our discussions here should then result in another OTEP, in addition to the first one, which then, as the next step, gets merged into the spec. It also serves as a working document that can give other people insight into what we are discussing here.

A
I put up this draft PR here, which basically starts this second OTEP, and I will put it in a format that's easier to view.

A
It basically takes over parts from the other OTEP: we have the terminology that we had already defined in the earlier OTEP, it has the model of the five stages that we also had defined in the other OTEP, and it has one first section that I added here that is basically informed by the discussions we had.

A
That is the section about context propagation, because we were talking a lot about these two different layers of context, and I tried to summarize here what came out of those discussions. I would just like to shortly go over that.
A
So basically, we said that we are going to have two more or less distinct layers of context that we need to consider. The first layer is this, I called it application context initially, but then Ken said he prefers the name creation context. I don't mind, so whatever name is most clear to folks is fine here. Basically, the idea of this context layer is to allow correlating producers and consumers regardless of intermediaries and intermediary instrumentation.

A
So basically, this context is created on the producer side and just needs to be transported, or propagated, to the consumer without any changes. This context just needs to traverse the intermediary, and the intermediary must not touch this context; then, with this context, you can basically correlate producers and consumers.

A
The transport context, on the other hand, would basically be forwarded to the intermediary. The intermediary can change it, use it, and then forward a different transport context to the consumer.

A
This layer would basically help to gain insights into details of the message transport, and also enable intermediary instrumentations to be linked together with the overall messaging traces.
A
Those were the two main requirements that we had: we want to have this overview directly correlating producer and consumer, which is basically what the current semantic conventions do, but we also want a way to correlate producers and consumers with intermediaries, and also to see what happens inside intermediaries.

A
Based on that, I put some normative language underneath here. That is: a producer MUST attach a creation context to each message. So that's a MUST; that must be there. Also, the creation context MUST be attached in a way so that it is not changed by intermediaries, because it needs to reach the consumer unchanged. And then a few MAYs here: a producer MAY propagate a transport context to an intermediary, and an intermediary MAY propagate a transport context to a consumer.

A
So basically, the creation context is a MUST, and the transport context is not a MUST, because here we will have to deal with uninstrumented intermediaries, or any kind of intermediaries that probably don't even have semantic conventions defined.
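The two layers and the MUST/MAY rules above can be sketched with plain objects. This is only an illustration of the proposed semantics, not real OpenTelemetry or broker API usage, and the property names `creationContext` and `transportContext` are assumptions for this sketch:

```javascript
// Producer: MUST attach an immutable creation context to each message,
// and MAY attach a mutable transport context for the next hop.
function produce(payload, creationContext) {
  return {
    payload,
    creationContext: Object.freeze({ ...creationContext }), // intermediaries must not change this
    transportContext: { traceId: 'producer-to-broker' },    // optional, valid for one hop only
  };
}

// Intermediary: forwards the creation context untouched, but may replace
// the transport context with its own for the broker-to-consumer hop.
function forwardThroughIntermediary(message) {
  return {
    ...message, // creation context passes through unchanged
    transportContext: { traceId: 'broker-to-consumer' },
  };
}

const sent = produce({ body: 'hi' }, { traceId: 'abc', spanId: '123' });
const received = forwardThroughIntermediary(sent);
```

The consumer can correlate back to the producer via `received.creationContext`, while each hop's transport context only correlates adjacent pairs (producer/intermediary, intermediary/consumer).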
A
So this is basically an attempt to capture faithfully what we discussed. It is in this draft PR now, so if you have any concerns or ideas, feel free to comment in the PR, or raise your hand now, as you prefer.
D
No, I think that's great, Johannes. We need to be better about, or not better, just in general, we should avoid situations where the way decisions get made is by attending meetings. The meetings are super helpful, and I am a big proponent of OpenTelemetry having lots of meetings, but we need to make sure that we actually are writing down what we're trying to do, and letting that be the place where the decision process happens.
B
Yes, I just have one question. I'm not sure I understand the part where you mentioned that the transport context might be changed by intermediaries.

A
The intermediary can then pass the context from the span that is active here at this point. So basically, the transport context that is sent in here by the producer will most likely differ from the context that is sent out here by the intermediary.
A
That is kind of the usual way it works for instrumented services, whereas for the creation context, basically the very same context that goes in here must come out unaltered on this side. So those are the two differences.
B
All right, so that's the part where I think maybe we have to word it a little differently, because I think it's a bit confusing that these can change. The last statement says that you should attach the creation context to each message. So this...
B
Exactly, but the other bullet, I think, is a bit conflicting, so maybe we might have to rewrite it a little bit, because it says the transport context might be changed by intermediaries, and if you just read through, then you think, okay, I should include it; but then this other one says it can be changed, or something.

A
And then we can, we can... I'll...

A
But also, if you have any kind of concerns about these context propagation ideas, feel free to add any kind of comments as early as possible, because I think the later work we're going to do will build on this.
E
Yeah, hi, I'm interested in context propagation protocols. Maybe it's too early to discuss, but we don't have anything defined for anything except HTTP, and there were a bunch of conversations, binary versus strings. Do we have any considerations there? I think we should give some guidance, right?
A
Yes, I think we should give some guidance. I mean, there is a W3C draft for AMQP and MQTT. For AMQP, it basically just means to pass these two different contexts: there are some read-only message attributes, and some other attributes that can be changed, which is where the transport context would fit in.

A
But I think we should try to give some guidance here while avoiding going too deep into other protocols. I was wondering, maybe in the end it might just be some examples where we show how this can be achieved with different protocols.
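As a rough illustration of "read-only attributes for the creation context, mutable attributes for the transport context", here is a sketch that serializes a span context into a W3C Trace Context `traceparent`-style string and places the two copies in different parts of a message. The attribute placement is an assumption for illustration; the AMQP and MQTT bindings themselves are still W3C drafts:

```javascript
// W3C Trace Context `traceparent` string: version "00", 32-hex trace id,
// 16-hex parent (span) id, 2-hex trace flags.
function toTraceparent(ctx) {
  return `00-${ctx.traceId}-${ctx.spanId}-${ctx.flags}`;
}

// Hypothetical message layout: creation context in application properties
// treated as read-only end to end, transport context in per-hop headers
// that each intermediary may rewrite.
function buildMessage(payload, creationCtx, transportCtx) {
  return {
    payload,
    applicationProperties: { traceparent: toTraceparent(creationCtx) }, // read-only
    hopHeaders: { traceparent: toTraceparent(transportCtx) },           // rewritable per hop
  };
}

const msg = buildMessage(
  'hello',
  { traceId: '4bf92f3577b34da6a3ce929d0e0e4736', spanId: '00f067aa0ba902b7', flags: '01' },
  { traceId: 'd4cda95b652f4a1592b449d5929fda1b', spanId: '6e0c63257de34c92', flags: '01' },
);
```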
E
On that point: can we stabilize this while, let's say, the AMQP binding is in draft?

D
It's unfortunate. Possibly we could join that group and try to help shepherd that work over the finish line before it becomes an issue for us.
A
I mean, I already checked in with the W3C working group, because a colleague of mine is contributing there. Basically, what they said is that the AMQP and MQTT specs are in a draft state, and if we want them to be stable, they are free to work on that and push it over the finish line. But they are currently not working on that, and nobody is pushing it. So...
D
Yeah, yeah, I mean, okay: we should be wary about implementing a draft of the spec. I think that's a thing we should be wary of, unless that spec already has versioning characteristics in it as a draft; we should make sure that's at least in place. Otherwise, in other words, if there's no way to differentiate an OpenTelemetry implementation that's divergent from the final implementation, that was the thing that gave people the most concern with baggage.
A
But I think one of the main things we define here are basically also, in a certain way, requirements for those AMQP and MQTT W3C specifications, and I think in that way it can go hand in hand pretty nicely. As I said, though, I wouldn't want to bake AMQP and MQTT details into this spec, but I agree with Ludmila's concern.

A
If this is stable and the AMQP and MQTT extensions are not stable, then implementers are confronted with another unstable, experimental part that prevents them from releasing a stable version. So I think, in the end, it might be an end-user requirement to also get the W3C drafts to a stable state.
D
I have one other requirement, which is: I really, really would love to see working prototypes of this stuff before we mark it stable. I have concerns in general around having these two context layers. This is not a thing...

D
...we do anywhere else in OpenTelemetry right now, and I just feel like it would be helpful to see working versions of this in a couple of messaging systems. Even if it's difficult to implement the transport stuff, because you would have to, say, make a fork of some messaging system in order to get it in there, even if it was just a fake or a mock of that system, it would be helpful to actually see this, because I'm a little concerned that there's going to be some real confusion between these two layers, potentially.
E
Yeah, actually, we have them implemented in our Azure SDK case; it's been running for quite a while. I shared some, well, it's not prototypes, it's production code; it implements something very similar. As the spec evolves, I'm happy to prototype changes; in the Azure case we can even ship preliminary versions with the preview packages.

E
Also, we had some learnings from this. I can share them again; maybe I will just document them. But basically, from what we saw, there is no problem coming from the two different layers, and they are inevitable; we cannot avoid them. Yeah.
D
That's awesome. Audrey, you're talking Azure messaging, which is similar to AMQP; is that the system it's most similar to?
D
Cool, yeah. I'd love to see Kafka as well, personally, just because that's a very popular messaging system that is more complex than AMQP around the transport stuff, right? There are just more bells and whistles in that system.

D
That's just the one system I have vague concerns about, and on the flip side, if we can ensure that Kafka works with this, I think it is a sufficiently complicated messaging system that I would feel confident that other messaging systems that showed up later would work well, and that we weren't making something that just kind of accidentally overfitted for the AMQP pub-sub kind of systems.
A
So I think I can do some work on that myself, but any help is appreciated. So also, Ludmila, if you can prototype some of that stuff; oh, it's already there, as you said, and if we just get some confirmation of that, that is great.

A
I will also promote this PR to people outside the group, so that other people can chime in and see what's going on. And I will also attempt to capture any agreements that we come to in this PR, so that it's kind of a work in progress as we are going forward.
A
Okay, then there is Amir, who worked out some batching scenarios. We talked a lot about context propagation and about correlating producers and consumers, and I think batching is still something we didn't talk about at all yet, and it is a kind of topic that comes up in many different varieties and contexts.
F
Original document, okay. So last time we talked about scenarios for messaging systems, and I promised that I'd make a list of things that I encountered while working on implementing this stuff for Node.js.

F
A bit of background: I've been working on OpenTelemetry for almost two years now, and I implemented the Node.js instrumentations for Kafka, for RabbitMQ, for the AWS SDK, which has SQS and SNS in it, and for Socket.IO. We implemented them, we also shipped them to clients, and we got a lot of feedback on real-world examples of traces that are generated with the current specifications, and on a lot of gaps that we found that need to be, I believe, addressed somehow in the new specification.
F
So this is a scenario list, but I also have a bullet with suggestions on how to change the specifications, so we don't end up in a situation where we have broken context or missing data. It might be a bit long, so if you feel that it's not helpful, just let me know and stop me, because there's a lot going on here; just shout out and tell me if you have any input. So, the first one: I think this is the most ideal trace that we think about.
F
It's the easiest to understand and the easiest to work with. What we have here is the receive span. I'm talking only about batch operations and only on the receive side, so I'm not touching single-message consuming or the sending part at all. What we have here is a receive span for a batch, so it gets 0, 1, or n messages. For the example, so it's easy to see...
F
...I added two messages which I receive. So I have this receive, we get two messages back, and then we do some for-loop iterations on those messages. This is the processing part of the consuming of messages, and this iteration can have side effects which create additional spans. So here we see we have the receive and two process spans, and underneath them all the things that happened downstream as part of processing a single message.
F
This is already today in the spec: we have this blue arrow that points, it's a link, an OpenTelemetry span link to some other span, which is the history of where we are. So this is already in the spec. My suggestion, and I will back it up later, is to also add this blue link from the receive to the create. Okay, so this is the ideal trace.
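The batch trace shape described above can be sketched structurally with plain objects: one receive span linking to every message's creation context, and per-message process spans each carrying the existing process-to-create link. This models spans and links as data only; it does not use the actual OpenTelemetry API:

```javascript
// Build the span structure for a batch receive of n messages.
function receiveBatch(messages) {
  const receiveSpan = {
    name: 'receive',
    // The suggested addition: receive -> create links, one per message.
    links: messages.map((m) => m.creationContext),
  };
  const processSpans = messages.map((m) => ({
    name: 'process',
    parent: receiveSpan,            // process spans nest under the receive span
    links: [m.creationContext],     // existing process -> create link in the spec
  }));
  return { receiveSpan, processSpans };
}

const batch = [
  { body: 'a', creationContext: { traceId: 't1', spanId: 's1' } },
  { body: 'b', creationContext: { traceId: 't2', spanId: 's2' } },
];
const { receiveSpan, processSpans } = receiveBatch(batch);
```

Even when no process spans are created, the receive span's links alone still let a backend correlate the batch back to every producer.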
F
It has a single process span per message, and the next example is very similar, but instead of having one iteration over the messages, we may have multiple. In this case we may have multiple processing spans: we have a receive that got two messages, we have process one and process two, and then again process one and process two for the second iteration.

F
These are things that are very hard to avoid in production code, and it would be super amazing to have them auto-instrumented. Sometimes the user may need to decorate a bit, but this is, I think, what we want to achieve in this case.
F
Okay, so the next one is very similar to the previous one. It's just that, instead of doing the iteration on the same array, we have some pipeline. We get the messages, we do some transformation, which may or may not create downstream spans, and then we get back an array. Let's say we called map, then we get a transformed array; or maybe we call filter, and then we can do additional steps of processing, which might create additional spans that should be correlated to the message.
F
And the last one in this category is when we have no processing spans at all, and I believe this is the most common one, because, as I will show later, it's very, very hard in reality, when you get back an array of messages, to set the context automatically and correctly in the code, so that the additional spans will have the right link. As an example, you can see the lower graph: here we have a receive, and we have three spans underneath it.
F
We have two DB writes and one outgoing HTTP, and if we get this span, we're unable to tell what happened here. Maybe the first message's processing created the two DB writes and then the outgoing HTTP; maybe the first one caused one DB write and the second one caused the other DB write, and the outgoing HTTP is something that was caused by the...
F
So in the absence of processing spans, the current specification is worthless, because we don't have any link to the history; we can't tell what happened before that caused this trace. So this is the first category: it's about the number of processing spans, and I believe the spec should consider those situations. When we argue about things or want to link to a reference, we should take these scenarios into consideration. So, the next one. Do you want me to continue, or is it too much?
F
What we can also see in production code is something like this, where we don't have any process spans; we just have a receive, and we have something that is detached from it, so it's very similar to the one we saw before. Now, this is kind of off topic, but I just want to mention that, at least in Node, I am not...
F
So, for example, here we have the receive and it's doing some JavaScript stuff, but when this function returns, it just returns an array of messages, and the context in the code after that is not the receive context anymore. This is because of real-world technical issues with the language that make it hard for us to capture the right context all the time.

F
So these are things that also happen, but I don't want to talk about them as part of this presentation; maybe in the future, if it feels interesting. So...
A
The one here, yeah, the upper one with the receive, and maybe the side effects and the code above, because I wonder here about the duration of those spans. When I look at the code, it seems that this receive operation basically ends before those process operations start, but still they are kind of parent and children of each other.
F
So it's called by the callback itself, but unfortunately the async/await version makes it extremely hard to know. You're doing this receive, you get back the answer and the receive has ended, and for the code after it, how can I tell which context it should run in? It's very complex. I spent a lot of time trying to understand how to do it, and I was able to do some magic for very common cases, but not all of them.

F
I think it's very interesting to discuss as a separate topic. It's not directly related to messaging systems, and I don't want to distract the focus from the scenarios. But if someone...
F
This is the ideal, but it can be very, very hard to technically implement, in Node at least. Yeah, Ludmila, do you want to say something?
E
Yeah, thank you. So when we approached it, we basically decided that we cannot know how users use our APIs, right? They can use them in one way or another, in one async framework or another, and we cannot rely on assuming anything. If something is impossible, right, if we don't know how messages are processed, we can't tell, and we should just do a best effort.

E
Our choice was to focus on consistency, so we don't create special treatment for certain scenarios. And the receive operation is actually one of the hardest, because we don't know in which context it runs; we don't know what it will receive when it ends, right? So I wonder if we can break it down into smaller operations.
F
Yes, so I agree with what you said here; I have scrolled down to exactly this use case. I started with the easy use cases, where everything is ideal and we can have these process spans created easily. But I agree with you that in reality, I believe most of the time, we will not be able to automatically create those process spans. So what we end up with is a trace where we have a receive span and then there are no processing spans.
F
And here, if you can see my screen, what I propose is to have some SDK function, a consistent function that the user can decorate his code with. We talked about it, we mentioned it in previous talks: we would have a function the user can import from OpenTelemetry, it would be consistent across all the messaging systems, and the user can use it to add...
F
So we had similar thoughts, and we added some mechanisms to support these missing processing spans. But I believe it's something so general, and needed by all the instrumentations and all the users, that we might consider adding it to the SDK as a function that can help the user build back the trace according to the recommended specification.
F
Yeah, interesting, yeah. Okay, so we did talk about this. It makes these traces that have no context for the messages become the trace on the right. And here what I want to show is that when the user gets back the message array in JavaScript, he has multiple ways of iterating, which is what we call processing.
F
So, for example, you can call array functions on the array, a map, forEach, or reduce, and it would look like this. Or we can use a regular for loop with an index that references the array, takes the reference, and does something with the message. Or we can use the iterator pattern, which simplifies the for loop.
E
It depends on the specific SDK that's being instrumented. With most of our SDKs, you can use them in a way where you have a handler, and then the SDK can create the processing span regardless; in other cases, users don't have to use it, or the SDK does not provide it. So the choice we made is: if the SDK has this pattern of handling, we instrument it; otherwise, we rely on the user for manual instrumentation.
A
To your point: basically, in the worst case there are no process spans, but there can always be this receive span that is created by the SDK itself. This receive span can then link to the producer spans, and that backward link is something we always have. That's the minimal solution: even if there are no process spans, it will always allow the user to trace back the...
F
Yes, so this is an example: this is the create span, and we have a link both from the process and from the receive, so we're always able to tell which messages are involved, which I believe is a must. Because if it's absent, the user wants to observe his system and suddenly there's a break: the message has been sent and consumed, but you can't connect them, so it loses a lot of value. So yeah, I really support this adding of links on the receive as well.
A
And I also liked it for another reason: I think the receive span will always, or almost always, be a span created by the respective SDKs, whereas the process spans, I think in at least some cases, would need to be created by the user who writes the application.

A
So I think that's another nice thing. I mean, let me mention those SDKs that let you register a handler.

A
But I think, as you also said, not every SDK allows that; however, I think every SDK should have the ability to add this receive span.
B
It's also good because, in the example where you have the DB operation going off of the send: even if the user does the processing correctly, that other thing is not captured, because it's not part of it, yeah. So if you have the link to the create on the send, then by going to the DB operation you will see that it was triggered because this message was sent.
D
This is the kind of prototyping I want to see, because it's these patterns that show up in messaging, around bulk processing in particular, where it has been very confounding to figure out how we can automatically give the users some kind of consistent instrumentation for this stuff. So this is great.
F
Another issue is, just a second, never mind, I'll just show you. My point is that we may have it or we may not have it. So this is an example like the previous one, but we're missing those blue lines that link to the create span.
A
Yes, so that is a good point. I just want to add something here: I think Christian Miller brought up an issue for exactly that case, when you receive a message without context, and what he proposed in the issue was actually to add a link to an invalid context.
A
Basically, a link where trace ID and span ID are just all zeros. Personally, that felt a bit awkward to me, and I actually prefer what you propose: just having the number of messages on the receive operation.
F
Okay, and yeah, so this is another example where we don't have the process spans; we just have this one and then we can have the link. Actually, I really like this empty-links idea; I think it's more consistent with the back-links to process and easier to mentally understand. Thanks for bringing it up. And yes, what we have here is another example: we have received two messages and we just have one link. We have no processing and we don't have all the links; that can happen as well.
F
Okay, so these are all the scenarios that I have; I have more scenarios that I can add for next week, if you think it's helpful. I only had a little amount of time to sketch these examples to show you. So this is all about the scenarios. I do have my suggestions written down, but I'm not sure if it's a good time; we have only a few minutes to the end of the meeting.
A
Next week I'm here, okay, because I think we will not finish with those suggestions, but that's great work, thanks for getting it up, and let's continue there next week. Now I think we have two other things on the agenda. I think one is goals and roadmap; Ted, I think you put it up. Maybe we can use some of the remaining minutes to talk about that.
D
Yeah, absolutely. Here, let me just try to pop the notes up; for some reason, the notes, there we go. We have a weird extra notes doc. Oh, and people should know we have finally fixed, I believe, the Zoom link issue, so all of these meetings now have their own Zoom links. So that's nice; you no longer have to live in fear. It'll be interesting to see if that actually works, but yeah. So this is, and just so people know, I'll be out for the rest of the year.
D
This is my guess of what we need to do in Q1. As we mentioned earlier: showing some prototypes, much like what Amir's been doing, of different messaging systems, trying to perform these different patterns using the experimental specification. It'd be great to see that. Does that seem like a reasonable thing for us to be doing in Q1?
D
So we'll need to find people to volunteer to do this stuff.
D
Mostly I'm trying to figure out what kind of staffing this working group is going to need in Q1, how many people we need to ask if they have some cycles to volunteer. And likewise, anyone on this call: think about your Q1 next year, and what kind of cycles you might have available to not just review the spec but actually be able to write some code, getting implementations and prototypes out there. As we mentioned earlier in this call, Ludmila brought up...
D
...you know, the W3C working group is working on messaging-related stuff. I don't have their meeting times in front of me; if someone else does, maybe they can post them, but at least some of us should probably start attending that meeting.
D
Get that thing over the finish line. Then there's actually marking this stuff, the spec, as stable, right, and I think what we've identified is getting the trace structure...
D
...the work Johannes has been doing, into the spec and approved. Then there are the actual individual semantic conventions for different protocols, right? Like, what are the actual keys and values of the attributes you're putting on the various spans you're creating when you're actually implementing this. I imagine there's some commonality there, but there's also some system-specific information that needs to go on there. That hasn't been included in the spec we've been talking about, but we do need to get those things reviewed, kind of as part of prototyping.
D
Hopefully. And then one thing that's come up in the HTTP working group that I think we should keep in mind as well is identifying which of those attributes may contain sensitive data that needs to get scrubbed, at minimum.
D
For starters, in the HTTP group we decided that what we want to do is simply add a column to the semantic conventions tables, like "contains sensitive data: yes or no", and just mark this, so that we can figure out a system for scrubbing.
D
And then all of this work is around getting the spec done, but once we're kind of happy with the spec, there's also the need to implement all of this stuff in the real world, and I think that has two parts. One is just an audit.
D
We need to figure out what all of the messaging instrumentation libraries are that are currently available in OpenTelemetry, plus maybe the ones that are missing that we know we want, basically creating a matrix, and then trying to identify contributors, and ideally contrib maintainers, to help actually write and update these instrumentation libraries once we've got the spec in order. We want to sweep through everything we've got and get it up to date, and ideally identify maintainers who will be available to help.
D
You know, process PRs for those libraries. In a lot of languages, the instrumentation we currently have doesn't have maintainers; it was kind of ported over from somewhere else, and the core maintainers, like the SDK maintainers for that language, tend to have to respond to those PRs, and a lot of them feel like they're a little underwater. They're usually not experts in these different systems, so they don't feel super comfortable managing them. So it would be great to get more contrib maintainers who can help the core maintainers.
D
Just around these specific libraries. And as part of updating that stuff: we don't currently have much in the way of support, so you have to write all this instrumentation by hand, but it would be interesting to look at creating a test harness; again, this is a thing we're looking at in HTTP.
D
Ideally, you know, some kind of black-box, data-driven test harness, so we don't have to recreate too much work in every language. Something here would probably be helpful, especially if we ever need to change any of these semantic conventions; it will help identify what needs to change in the future. And then last, but certainly not least: to what degree can we create classes and utility functions, added to the API packages, that just make it easier to write this instrumentation?
D
So there's this Java Instrumenter class that's worth looking at. Part of why this is helpful is that we've just been talking about tracing, but there are also going to be metrics that show up as well, so it ends up being quite a bit of code. For HTTP, at least, there's room to have an object where you're basically changing the interface you're writing around into something that is more like an attribute provider, and then the object the API provides actually takes that information and creates all the appropriate metrics and things off of it for you. Messaging is a little bit more spread out across different types of implementations, so I'm not sure how many generic utility functions we can create. But again, these are things that potentially lower the effort of maintaining all the individual instrumentation libraries, because you're centralizing some of that work into a utility package.
D
Yeah, I'm not sure how much we can say we'll finish all the implementation in Q1. To me, what would be necessary in Q1 is to have people actively working on all of this stuff. If Q1 comes and goes and we don't have people actively doing it, we're in real trouble.
D
I don't know that I want to be bold and say we're going to finish all of this in Q1, because that's quite a lot of work, and the implementation is somewhat chained on the prototyping and the spec work getting done.
D
Yes, which, yeah, we should have people committed to doing this, and we should be trying to roll on it before we finish all of the spec work; otherwise we're going to have this kind of awkward gap, right? Yes, definitely, yeah. So I don't know, that's my proposal for the roadmap. I'm curious: do people have any other comments on this, anything missing, anything they think is unlikely or unreasonable?
D
All right, yeah, I'll be bringing this back up when I come back in Q1, and I'll try to help organize people into these work paths. But thank you all for all the effort you've been putting into this group; I just want to say that as well. It's been a great group effort.