From YouTube: 2021-09-02 meeting
B
Good afternoon here, hello everyone. I'm Ivan, I'm one of the maintainers of Kamon. I don't know if you've heard about us before.
E
If people wouldn't mind filling your name in in the attendee part of the agenda? That's just a thing we like to do with every meeting here.
D
Okay, I think let's get started. Maybe, if you don't mind, so that you also get to know each other a bit, because I see there are a few new people here, maybe everybody can say just one or two sentences about their background and why they are here.
D
I'm trying to drive this instrumentation effort for messaging, basically the semantic conventions for messaging. I had some previous involvement in OpenTelemetry, mostly on OpenTelemetry C++, and at Microsoft I'm working on the distributed tracing backend.
G
My name is Clemens, I'm the lead architect for all the messaging services of Microsoft, which includes all the messaging services that we have in Azure. I'm also representing Microsoft in OASIS: I'm the co-chair of the AMQP technical committee in OASIS, and I'm a member of the MQTT technical committee at OASIS. In the CNCF, I'm kind of the architect of CloudEvents and I've been building CloudEvents from the start, and otherwise I'm kind of driving.
G
In my role as an architect I'm driving product strategy, as well as our open source and open standards engagements.
F
W3C and OpenTelemetry libraries to reach standardization and achieve the same goals that are set out by the project. My interest here is that I'm trying to help, in general, with the instrumentation of the trace semantic conventions, to get the specs from experimental to stabilization, for the betterment of the community, as well as for some of our platform initiatives that we're trying to drive for tracing.
B
I'll jump in if nobody else goes before. I'm Ivan, I'm one of the authors of Kamon. It's an instrumentation library for applications running on the JVM, and we have been around for years.
B
Like, I don't know, seven or eight years. I was slightly involved in the early days of OpenTracing, but it got disconnected from that, and nowadays I'm thinking, okay, OpenTelemetry went way further than what OpenTracing was and what OpenCensus was, and I think it's time to make these things compatible. People using OpenTelemetry or Kamon should be able to mix and match whatever way they want. So my goal is to figure out: where are we similar? Where are we different? And see how we can make this interoperate seamlessly.
B
Because, you know, we have this special focus on all the Scala kind of libraries, so Akka, Play Framework, all that stuff, and I see how we could complement each other there. So the idea is to make it all interoperable. I don't know what it will take to do that, but that's what I'm trying to figure out now.
A
The relevant ones are instrumentation for Kafka and for RabbitMQ, and I implemented the AWS SDK instrumentation, with SQS as a messaging system inside it. I read the specification many times, tried to implement it, and found a lot of issues which I would like to discuss, or we can discuss together. I'm sure other people noticed them as well.
I
Awesome, hi.
H
My name's Ken, I work for Red Hat. I'm focused on microservice frameworks with Java, so the last few years that has been Quarkus, the new one we have. I am involved in implementing OpenTelemetry for that, as well as Micrometer on the metrics side, and I'm kind of pushing the OpenTelemetry semantic conventions across the entirety of the org.
H
For any runtime, we have to produce the metrics in that format so they can be consumed and processed through monitoring and the like, and we don't have mismatches of data and names and things like that. So that's kind of where I'm coming from.
J
Yeah, maybe I can go next. Christian here, I'm working for Dynatrace, and I'm here today because I think I wrote most of the original semantic conventions for messaging which we have now. To be honest, I'm not really an expert on messaging systems; this was part of an effort to bootstrap the semantic conventions altogether.
J
So this was based on the data models we have at Dynatrace for tracing messaging systems, which historically come mostly from JMS. This might explain why some of the semantic conventions look like they do today. I will probably mostly listen in, and maybe answer questions if you have questions about why something is the way it is in the semantic conventions today.
K
I'm Matt, I work at Lightstep. I'm here mostly out of curiosity. I've written some instrumentation for message queues in the past, I've reviewed quite a few PRs in OpenTelemetry for them, and in general I'm just curious whether the approach that we're taking is the best approach. I sense there are better approaches that could be taken, so I'm kind of here to just participate in the discussions and see what comes of them.
E
And I can go: I'm Ted. I also work at Lightstep, and I'm one of the OpenTelemetry co-founders, so I've been around here for a while, and I'm mostly here to help facilitate these conversations. I'm very interested in getting our instrumentation into good shape now that the SDKs are getting finalized, so I'm excited. Besides just the issue of defining the conventions themselves, I'm also interested in putting a backwards-compatibility plan through its paces.
E
We need to have some way of applying translation rules, say in the collector, otherwise we'll be kind of stuck in stone, which is maybe not a terrible place to be, but we're about to have a bunch of discussions about potentially changing some of these things. So I'm interested in some of the things around instrumentation and around the semantic conventions, as well as getting them defined.
D
Awesome, I think we have everybody. So, to frame this meeting a bit: we already saw, in the introductions and also in the meeting notes, some directions where people want to go. I think in this group we now have some messaging specialists, so I want to focus this discussion particularly on messaging and not digress too much.
D
I think that is maybe best discussed in depth in one of our more general instrumentation meetings that are not focused on messaging, because I think that is a topic that goes far beyond messaging. I would prefer if we talk about this in maybe a separate forum, and keep to messaging here and also in the agenda.
D
I saw lots of great remarks from Amir that go already into pretty deep technical details, so I would definitely love to discuss those further, but I'm not sure if we will get to that in today's meeting, because I think the first item on the agenda for today's meeting is that we can first agree on a goal and on where we want to go with those semantic conventions. To give a bit of background here: for OpenTelemetry tracing, we have parts of it stable.
D
Basically, the API, how you create spans and how you set attributes on spans, is all stable. But then there is another big part, the semantic conventions, which define how you give spans a certain meaning: how do you say, okay, this is a span that publishes a message, or this is a span that consumes a message?
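As a minimal illustration of the point being made here, the same span structure distinguishes a "publish" span from a "process" span purely through its kind and attributes. This sketch uses plain dicts instead of an OpenTelemetry SDK, and the attribute names loosely follow the experimental messaging conventions under discussion in this meeting; they are illustrative, not normative.

```python
def make_span(name, kind, attributes):
    """Build a span-like record; a real SDK would also assign trace/span IDs."""
    return {"name": name, "kind": kind, "attributes": dict(attributes)}

# Producer side: a span that publishes a message to a destination.
publish_span = make_span(
    "shop.orders send",
    kind="PRODUCER",
    attributes={
        "messaging.system": "rabbitmq",
        "messaging.destination": "shop.orders",
        "messaging.operation": "send",
    },
)

# Consumer side: same shape, but the attributes mark it as processing.
process_span = make_span(
    "shop.orders process",
    kind="CONSUMER",
    attributes={
        "messaging.system": "rabbitmq",
        "messaging.destination": "shop.orders",
        "messaging.operation": "process",
    },
)
```

The semantic conventions are exactly this layer: agreed-upon attribute names and values that let a backend recognize what a span means.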
D
Those are the semantic conventions, and none of those conventions is stable yet. So this group is one of the first groups that tries to push a particular part of the semantic conventions to a stable state. We are a bit on new ground here, and I think one of the main things we need to do first is to define: what should stable semantic conventions for messaging cover?
D
So that we can call it stable, what criteria do we impose there? Because, I mean, there are lots and lots of messaging scenarios out there, and I think we cannot possibly cover them all in version 1.0. We have to decide what we are going to cover, then we work towards that, and then we have a version 1.0 which is clearly defined, and from there on we can iterate and maybe consider additional scenarios.
D
I already had some very good discussions with Clemens, mostly, who is here, and we tried to identify very general scenarios that we want to cover.
D
We think we want to cover the scenarios that are maybe the most important for messaging systems. I put those scenarios here in this OTEP, and I want to start out in this group by maybe going through this OTEP and seeing if what's in here makes sense to all of us, and if it seems like a sensible goal to work towards for version 1.0. And also, any kind of input from anybody:
D
If you think something is missing or something is not correct here, input would be highly appreciated and very welcome.
G
Yes, I switched it, okay. Since we talked, if I may say a few words here about this: this is also coming from a bit of background from CloudEvents and some discussions we've been having there, because we've had a tracing extension that has caused all kinds of interesting discussions, where we have traceparent and tracestate as an extension in CloudEvents.
G
But then people were mightily confused about the role of it, and they wanted to go and throw it out, so there was a bit of confusion around this. So I'd like to, along the lines of that picture, if you put that up again, say a few words about the discussions that we've been having.
G
Thank you. The discussions that I've been having with Johannes about this, and how, from our product perspective on the one hand, but also from the standardization perspective in CloudEvents, we've been seeing tracing play in. So, for messaging in general:
G
One of the things to observe is that when you have an intermediary, in most cases, not all cases but many, there is a push-pull translation, and that matters. That's important, and it's easily overlooked when you are doing tracing. When you take a look at operations through an RPC lens, which typically happens, then everything is nicely lined up on a single logical thread, because you're effectively just following the call flow.
G
You end up at some point, and then the call flow goes back, and everything is nicely correlated; it's one logical thread. With messaging, that's a bit different, because there are really three distinct operations, or actually four distinct operations, that play a role when you're dealing with a single flow. So there's an application-to-application layer.
G
Very often, if you look at this from an application perspective, the application is abstracted from the underlying messaging infrastructure, which means it doesn't care how the message got to it. And very often these days, the push-pull translation that happens, where you send into a queue and then you pull from that queue, is again reversed on the receiver side, or on the consumer side, and turned into kind of a reactive flow. So it looks like:
G
It looks like one logical thread, but there's stuff happening in the middle that's abstracted away. Mostly, though, the application context doesn't care; the application context is mostly concerned about itself. The app over here does something, the app over here receives something and acts on it, and what you want to see in the trace is really the application-to-application concern, with none of the stuff in the middle.
G
That's the top layer, and when you talk about an event-driven application, you will often think about that relationship: just the publisher and the consumer, without really worrying about the messaging infrastructure that sits in the middle and the details of that messaging infrastructure. But sometimes that does matter. The more involved you get into message handling (and there's a bit of a difference here between eventing and messaging, meaning job processing versus reacting to pub/sub events),
G
then your relationship to the messaging infrastructure somewhat changes, and also your interest in the functioning of the application versus the messaging infrastructure changes. So there are several acts here that matter. What I just talked about, that producer-consumer relationship, is number one and number four in this list: the producer creates the message, then it somehow gets to the consumer, and the consumer processes the message. That's the application-level relationship.
G
Then there is an act of publishing: the producer publishes the message to an intermediary. That might work the first time, that might fail the first time, and then you might go and retry that operation.
G
Retrying that operation will cause new information to be created. There will be an error that you have to go and trace; the retry may cause a back-off, and the back-off will be impacting latency. There are all kinds of things that, even if it's magic happening under the covers, will be interesting for diagnostics.
G
So the act of publishing that message is an operation which is already separate from the flow that the end-to-end application is interested in. That is because, if the message can ultimately be published and it can ultimately be received, then for the overall application flow everything is hunky-dory, everything's wonderful. It's just for the particular zoom-in on that application infrastructure piece that you're interested in what went wrong.
G
Why did the publishing take so long? Because there were a ton of retries, and you then find that, because of the time and retries, there might be a networking issue, there might be something else that happened. So there's this first extra, new context. The interesting thing is that that new context derives from the original context.
G
So that's the second extra operation. Once you have that context, you are now working on that; you're not working on the message, you're not processing it. That is number four: the processing of the message is effectively linked up to number one, the producer creating the message, ultimately. And then number five is: you have now processed the message and now you're settling it, and settlement is something that is really important in messaging.
G
Settling means that the outcome of the processing is now reflected in the delivery state of the messaging system. If the processing of the message worked, and you are interacting with a queue, you want to have the message removed from that queue, which means you're going to complete it, or settle it. If the processing did not work, you will go and abandon the message, so that it can be re-presented, redelivered, and reprocessed.
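The three settlement modes described here (complete, abandon, reject) can be sketched as a small decision function. This is an editor's illustration of the concept only; the function and state names are invented, and real brokers expose these operations under varying names.

```python
def settle(outcome, message_is_faulty=False):
    """Map a processing outcome to a queue-broker settlement action."""
    if outcome == "success":
        return "complete"   # processing worked: remove the message from the queue
    if message_is_faulty:
        return "reject"     # known to fail again: send it to the dead-letter queue
    return "abandon"        # transient failure: re-present for redelivery
```

For example, `settle("failure")` abandons the message for redelivery, while `settle("failure", message_is_faulty=True)` rejects it so the broker dead-letters it.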
G
If the message is faulty, which means you know that the message will fail again, you will reject it, which will cause the message to be dead-lettered. So those are the settlement modes, effectively, if you're interacting with a queue, and it's typically true for most queues that you have those mechanisms. If you're interacting with a stream engine, so the Kafkas and Pulsars and Event Hubs and Kinesis:
G
You will not individually settle messages; you will basically just decide whether you want to go and advance the cursor that you're managing on the client. And then, if you are deciding not to advance the cursor, are you going to go and reread, and then effectively reprocess the message based on that?
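The cursor-based consumption model for stream engines can be sketched as follows. This is an illustrative simplification with invented names: the client reads from its cursor and advances it only past successfully processed records, so a failed record is reread on the next attempt rather than individually settled.

```python
def consume(stream, cursor, process):
    """Read records from `cursor`; advance only past successfully processed ones.

    `stream` stands in for a partition of records, `cursor` for the
    client-managed offset, and `process` for the application callback.
    """
    for record in stream[cursor:]:
        if process(record):
            cursor += 1   # advance: this record is considered done
        else:
            break         # stop: the failed record will be reread next time
    return cursor
```

A consumer that fails on record "c" keeps its cursor before "c", so a retry starts there instead of re-presenting a single message.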
G
So those are the various different contexts that we have identified. One thing that's a bit missing from the picture here is that you have these combinations of contexts; that's in the actual, pretty slide, which is kind of missing here.
G
So this is, I think, a summary of where we were in our discussions: there are actually several different tracing contexts, or context scenarios, that arise out of the interactions with a messaging intermediary, which are caused by this push-pull translation.
G
First, because once you're receiving, that's a tracing operation, but you're just picking up a context that you don't know about at first. Then the message settlement is a very unique interaction with a messaging system that is dependent on the outcome of the processing operation, which also needs to be effectively reflected in the traces. And then, of course, there is a difference between the act of publishing and the message
G
creation, where the application per se will only care about the end result of the delivery.
G
While someone who's trying to find out why stuff took long, or why something is stuck, will go a level further down and try to investigate why the publishing didn't work. And then the last thing we see more and more on the messaging side, now that we have replication routes, is that these interactions no longer just go to a single queue and then get the message from a single queue; that's kind of the application view.
G
And then you have an end-to-end relationship between a publisher and a consumer where you have three brokers in the middle which are immaterial and actually invisible to the publisher and the consumer, something the publisher and consumer don't know about. But then you have these three infrastructures, which might be managed by different parties, which are then observed separately as those messages pass through.
G
So that's why the mother context, if you will, the original context that comes out of the original message, kind of always plays a role in all of these daughter contexts, if I may call them like this, that are then related to the publishing, settlement, and republishing of all those other messages. And here I will stop.
D
Thanks Clemens, that was great. So, as you can see, Clemens helped me a lot coming up with this diagram here. Just to maybe summarize what he said, in the very few words of a non-messaging expert like I am:
D
We have here the two very high-level stages of creating a message and processing a message: creating a message on the producer side and processing a message on the consumer side, which you can see from a very, very high level. Then there's all the work, basically, that the message broker does in between, and with that we get those additional stages, like publishing, receiving, and settling, and those all kind of talk with this intermediary in between. So you can see that as maybe a layer below this very top-level layer. And I think, when we now define how the traces for this scenario should look:
D
I think it makes sense for us to keep those two layers in mind, to keep those two layers slightly separate in mind, I mean.
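The two-layer model being summarized here can be written down as a small table. This is an editor's sketch: stage names mirror the list discussed in the meeting, the one-line descriptions are paraphrases, and the layer assignment reflects the application versus intermediary split described above.

```python
# Top layer: the application-to-application relationship.
APPLICATION_LAYER = ("create", "process")
# Lower layer: the conversation with the messaging intermediary.
INTERMEDIARY_LAYER = ("publish", "receive", "settle")

STAGES = [
    ("create",  "producer builds the message"),
    ("publish", "producer hands it to the intermediary (may retry)"),
    ("receive", "consumer pulls it from the intermediary"),
    ("process", "consumer acts on the message"),
    ("settle",  "outcome is reflected in the broker's delivery state"),
]

def layer(stage):
    """Return which of the two layers a stage belongs to."""
    return "application" if stage in APPLICATION_LAYER else "intermediary"
```

Keeping the two layers separate, as suggested above, then amounts to deciding per stage which layer its telemetry belongs to.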
F
Yeah, so, Johannes and Clemens, is the intent that, for the case of time spent in queue, we want the spans to represent each one of these layers? Would we have to be issuing multiple spans, or would it be a single span that represents all of them? That's what I'm just trying to understand a bit here: how, later, it gets stitched together as a trace.
D
I think we're not talking about traces exactly yet; I think that's a later discussion. We're just trying to get to the same conceptual model here. ("Gotcha, okay.") Once we agree on this model and once we agree on these scenarios, we can then get into talking about how we really want to model this with traces. ("Okay, and then one last question.")
F
On the settlement part, Clemens: is that asynchronous in nature? And the part I didn't get is, if it fails multiple times, at what point do you just say, I'm not going to settle, I'm just going to drop it?
G
So, the way message brokers do this is that they will silently drop that message into the dead-letter queue and not tell the application about it. There's a concept called poison messages, where a message gets presented to an application and there's something wrong about the message, but the application doesn't know or doesn't understand it.
G
So in that situation there is a max delivery count, and that's implemented in most queue brokers in some fashion. The deliveries are being tracked by the broker, and if you exceed this, then the broker will go and take the message out and just drop it into a dead-letter queue and say, sorry, that didn't work.
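The broker-side max-delivery-count mechanism described here can be sketched as follows. This is a hypothetical illustration: all names are invented, and real brokers vary in how they count deliveries and annotate dead-lettered messages.

```python
def redeliver(message, delivery_count, max_delivery_count, dead_letter_queue):
    """Decide whether to present a message again or dead-letter it.

    Returns True if the message is re-presented to the application,
    False if the broker silently moves it to the dead-letter queue.
    """
    if delivery_count >= max_delivery_count:
        # Annotate with a reason code, as brokers typically do.
        message["dead_letter_reason"] = "MaxDeliveryCountExceeded"
        dead_letter_queue.append(message)
        return False
    return True
```

The application never sees the dead-lettering happen; it only stops receiving the poison message, which can then be found (with its reason code) by browsing the dead-letter queue.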
G
And that will often be annotated with a reason code, so you will find a message in the dead-letter queue which has a trace identifier, if it's annotated with traceparent and tracestate, where you can then effectively see the message coming out of the dead-letter queue, and that's a path that you then need to have as part of your diagnosis infrastructure.
G
You can also peek it, and you can go and browse it, and that's typically possible with all the brokers. But you can see the delivery count being exceeded, up to 10 or whatever, and then you can also see the context being annotated on the message in metadata.
G
And of course, with every delivery, and this is where we get into the details: what is the stuff that we need to go and catch from the message metadata and have logged in the derivative events, as I would call them, to make sense of those things?
J
I have two points, maybe, about this model when compared to, for example, HTTP or any other semantic convention. I wonder if the create-versus-publish step is something that is so special to messaging. For example, I could say that producing the HTTP request, the logical data I want to send, is a separate step from actually handing it to my HTTP client library.
G
So, for anything that's networked, any networked APIs (and maybe I'm now kind of getting into the broader story here): anything that's a network API can fail, and pretty much all the libraries we have now have some notion of automatic, built-in retries.
G
The fact that this built-in retry policy did some magic to make sure that the message actually got over there doesn't matter, because it all depends on who you are. If you are the application developer who only cares about "I have a request and something arrived", then you don't care about the policy having done stuff. But if you're now further down, and you're trying to figure out particular error scenarios, then you care about that.
G
But ultimately, depending on your role relative to the stack, you might not care about particular things that are happening with that message. So I think the separation we have here is just as applicable to HTTP, because ultimately the difference between HTTP and this is always the consumption path. When you think about messaging: messaging is MQTT, AMQP, Kafka, Pulsar, Google Pub/Sub, SQS, whatever the systems are, and some of them do that over HTTP, and the send side is always the same thing.
G
They all differ in consumption, and so the consumption model is really, ultimately, the distinction, and that's how the settlement piece comes in here. And then, of course, there's the explicit step of pulling, because in HTTP, in reality, you're also pulling, but you're pulling from a socket, and so that pulling is something that kind of goes away in the HTTP listener; that's so far down in the weeds that you don't care.
J
Yeah, that's kind of what I also understood: the creation, the producer side, is not that fundamentally different, on a fundamental span-relationship level, from other semantic conventions, correct? So I was wondering, since the scope is already probably quite big, whether this general retry handling, and I think that's what this is mostly about, the complications on the creation and publish side, should maybe be mostly discussed in the other semantic conventions meetings.
J
Except maybe if it is interleaved with the other side very closely; then of course we need to discuss it also here. But otherwise this would maybe be an opportunity to reduce the scope a bit.
G
Well, there's metadata you care about. So the difference, I think, will then lie in the kind of metadata that's related to that operation, because as you are publishing, there is information that you may want to see in the log messages, and that is interesting for you. I don't know; I'm actually not as far into that, so I'm not the expert there. I'm the messaging expert, I'm not the logging-tracing-everything expert. So, wonderful.
G
That's why we meet here. One of the things that I've seen in the existing draft was that there were quite a few metadata items that were called out: some from RabbitMQ and some from Kafka.
G
So I think it would be good to have a generalized notion of the distinction between making a message, or creating a job, a call, and then trying to get it over there, because I think those are two distinct things, and having a generalized notion of that will probably be useful.
G
It'll be interesting to then go and have particular bindings that enumerate key elements for how you want to represent them, and also to have a discussion about what is really the stuff that we want to go and standardize. Because ultimately there is something that, independent of what the transport is, will be: what are the fields, what are the three or four things, that we want to definitely see in any tool, no matter which proprietary transport it uses?
E
Yeah, this was actually a question that I had, as someone who has, I mean, used messaging systems, but I wouldn't call myself the expert in the room. So for you all, the experts who know how a lot of these different systems work:
E
To what degree is it feasible to have a general model, a generic model of messaging, that we then shoehorn all of these different messaging systems into? Are they all actually similar enough that conforming to a general model is helpful to an operator? Because the end user of this tracing data ultimately is an operator who's trying to debug a Kafka system or an AMQP system. And it's a genuine question.
E
I don't know what the answer is, like to what degree making a generic model helps. What we've seen elsewhere is that generic attributes are helpful in the sense that someone can add a new system, and backend tracing systems
E
and analysis tools can automatically perform something useful on that system without knowing anything about its specifics, because we have this generic model. But then the tension that you hit is: if these systems really do diverge in how they work, and they also diverge in terminology and in other ways, then the more you try to shoehorn them into a generic system,
E
a generic representation of a messaging system, the more that operator is now trying to look through an extra layer of abstraction in order to understand what their system's doing. So I'm curious; I mean, I think we'll get into the weeds of this, but that's an area that I'm just curious about.
G
The weeds of this, so: I believe this model is representing generically most of the common messaging systems that are out there, all of those that do this push-pull translation. There's another class of messaging systems, which I've kind of only scoped out, like AWS EventBridge or Azure Event Grid or Knative Eventing, because what they do is:
G
Basically, you publish an event by push, and then those systems push it back out again, so they are effectively an ultra-powerful webhook caller. What we just discussed as retry policy is something that those kinds of eventing brokers do on the server side.
G
So that's why we can kind of leave them out. Yes, so in terms of how generic and how specialized things can be:
G
We've done work in CloudEvents in particular to try to find, at least for events, a minimal set of metadata, a minimal set of attributes, which all events need to follow. And actually, I think CloudEvents might be interesting, and that's a meta point: CloudEvents might be an interesting mechanism to also standardize how log entries look, because log entries are nothing more than CloudEvents per se, nothing more than events that you capture. But what we did in CloudEvents is to basically say: what is there in an event?
G
What do we need? So there's a type, there's an id, there's some time, and then there's a source and there's a subject. We have like four or five attributes, and then what we did is we took those attributes and mapped them onto constructs that exist in the various transports: AMQP, MQTT, Kafka, HTTP, etc. But from that exercise, what's clear is that there is no clean mapping.
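The minimal attribute set mentioned here (type, id, time, source, subject) and the idea of binding it onto a transport's native constructs can be sketched as follows. This is an editor's illustration: the property-name prefix and the mapping shown are made up for the example and are not the normative CloudEvents protocol binding.

```python
from dataclasses import dataclass

@dataclass
class CloudEvent:
    """Minimal event envelope with the attributes discussed above."""
    type: str          # e.g. "com.example.order.created"
    id: str            # unique per event
    source: str        # URI-reference identifying the producer
    time: str = ""     # optional timestamp
    subject: str = ""  # optional subject within the source

def to_transport_properties(event):
    """Map the event onto transport message properties (illustrative names)."""
    props = {
        "cloudEvents:type": event.type,
        "cloudEvents:id": event.id,
        "cloudEvents:source": event.source,
    }
    if event.time:
        props["cloudEvents:time"] = event.time
    if event.subject:
        props["cloudEvents:subject"] = event.subject
    return props
```

Each real transport (AMQP application properties, Kafka headers, HTTP headers) gets its own such mapping, which is exactly the binding exercise described above.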
G
You can always find a good place for any of these information items, but if you're coming from the reverse direction, if you say, I want to go and inspect an AMQP system, or I want to go and inspect a Kafka system, then you really are coming from the opposite direction. You can't make any rules around:
G
lack of standardization in this area, and you have to deal with attributes and metadata on an individual basis. The good news is that standardization does exist across quite a few brokers.
G
So, as I said, there's AMQP, MQTT; Kafka is kind of a de facto standard, or Pulsar; at least there are large OSS projects. So even though the space still has issues with standardization, there are not so many products that it becomes impossible, and because the models are still very closely aligned, there are not so many patterns in messaging, I think the complexity of making those bindings will be manageable. Yeah.
D
I also want to answer your question, because I think it's a good point. When we look at the existing semantic conventions, we already have some rules buried there that say this span has to be parented and those spans have to be linked, and those can be parented again. So I think already in the existing semantic conventions we have kind of a model like this underlying it; it's just not made explicit, it's simply there somehow. And I think whatever we do now, with a stable version,
D
I think we need some kind of rudimentary model to define how spans should be linked and parented. I think we should make this model explicit. Great, so.
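A rough illustration of the kind of parent/link model being described (a toy data model with made-up span names, not the OpenTelemetry API): a publish span is parented under the producing operation, while the consumer-side processing span, which may live in a different trace, carries a link back to it instead of a parent.

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    """Toy span: 'parent' for same-trace causality, 'links' across traces."""
    name: str
    parent: "Span | None" = None
    links: list = field(default_factory=list)

# Producer side: the publish span is parented under the business operation.
checkout = Span("checkout")
publish = Span("publish orders-queue", parent=checkout)

# Consumer side: processing can start a new trace, so rather than parenting
# it under the publish span, it carries a link back to it.
process = Span("process orders-queue", links=[publish])

print(process.parent, [l.name for l in process.links])
```

Making this kind of model explicit, rather than leaving it implied by individual conventions, is the proposal on the table.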
E
Like we have this in HTTP, for example, and we're trying to flesh it out, and it matches a lot of what you're saying: people want to see a logical HTTP span representing a logical HTTP client, and that's very useful to have, one span with all of the indexing on it, all the attributes on it,
E
recording the overall total request time. But then, under that, you have multiple child spans representing the kind of mechanical HTTP requests, where there might be a bunch of retries and redirects and other stuff going on, and so you need to have that kind of model. I think where we've been kind of on easy street with the other things we've been defining is that they tend to be very well-defined protocols: HTTP is the HTTP protocol.
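The logical-versus-mechanical shape being described can be sketched as follows (a toy model with invented span names, not the OpenTelemetry API): one logical client span wrapping the mechanical attempt spans for retries and redirects.

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    name: str
    children: list = field(default_factory=list)

    def child(self, name):
        s = Span(name)
        self.children.append(s)
        return s

# One logical client span records the overall request time; each physical
# attempt (retries, redirects) is a mechanical child span underneath it.
logical = Span("HTTP GET /orders")
logical.child("attempt 1: 503, will retry")
logical.child("attempt 2: 302, following redirect")
logical.child("attempt 3: 200 OK")

print([c.name for c in logical.children])
```

The logical span is the one users index and query on; the children explain why the overall request took as long as it did.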
E
We've had to wrestle with it more here than in other areas, because there's just more divergence over here. And it's true that there is a generic mental model, and if I am an application developer trying to architect my system, I would want to architect my system against a more generic model and maybe not try to rely too much on the specifics of an implementation.
E
You see what I'm saying: I'm not saying you wouldn't have these logical operations like you're describing them; it's just we should make sure we do.
G
What makes things a little more complicated: there are wire standards and then there are API standards, and how the application interfaces with stuff is always different. And where we slot in tracing is obviously in the code, not on the wire, which means the APIs are kind of an interesting place to go and hook into. So in messaging,
G
Kafka has a common API, but that is very specific to the Kafka broker and doesn't work with many others. So I think we will have to go and find one. And then AMQP is just another thing: you might well have the situation that you have an AMQP application which is generic and works against ActiveMQ, but also works against Azure Service Bus, so you can basically just go and switch them around.
G
You want to go and deal with a queue, or you deal with a stream, and then the tool also needs to have a way to have a generic representation of what that queue thing is, so that we have a set of common attributes that we can always map to and from those products and to and from those APIs, where you then have extensions where you can effectively double-click.
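One way to read this (a sketch with attribute names in the spirit of the OpenTelemetry messaging semantic conventions; the specific extension keys and values are illustrative, not taken from any spec): a small common attribute set every broker maps to, plus namespaced broker-specific extensions for the "double-click" level of detail.

```python
# Common attributes any broker can populate.
common = {
    "messaging.system": "kafka",
    "messaging.destination": "orders",
    "messaging.operation": "process",
}

# Broker-specific extensions in their own namespace, for the deeper
# "double-click" detail (these particular keys are illustrative).
extensions = {
    "messaging.kafka.partition": 3,
    "messaging.kafka.consumer_group": "billing",
}

span_attributes = {**common, **extensions}
print(sorted(span_attributes))
```

A generic tool can work from the common set alone, while a broker-aware tool can surface the extensions.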
E
Yeah, this is exciting stuff. I think I may need to call time. Just so people know, we only have four Zoom meetings for all of OpenTelemetry, and this kind of time slot all gets used right after this. So if we don't get off the call in the next two minutes, people for the next meeting are going to start piling into this one. I'm going to go to a CNCF meeting right now, so that's all good. Yeah.
D
Yeah, Johannes is going to... yes, I'm gonna wrap up then. So firstly, thanks all for participating. We are planning, next Thursday at the same time, to continue this discussion in person. In the meantime, please comment on the PR that is open; it's linked in the document, and also in the CNCF Slack channel otel-instrumentation, where we can continue any kind of discussions.
D
All in all, that was a great first kickoff. I hope I see many comments on the PR, and I hope I see you all again next week. Hopefully next week we can move on from this diagram to talk about particular scenarios that we have here, and maybe this will also help us discover gaps that we might have in this model, if you can come up with scenarios that are not captured by it. But yeah, awesome, that was great, thanks.
I
So I think we should both log out of here, so we can get the recording reset, because it will go out together, and I gotta tell everybody that we need to bail out of here. Okay, I'll leave and then join again. Cool. Trask and Tyler, we need to bail out to get the recording reset, all right.