From YouTube: 2021-09-30 meeting
C
Hi, or good morning, or good evening. Let's give it two more minutes to wait for people to dial in, and then let's get started.
E
I think they just take some time to be uploaded to YouTube. Oh, I think it's like... isn't it?
C
Yeah, so please, if you have anything that you would like to discuss, just put it on the agenda, and we can schedule it. Some general details first: we discussed in the Tuesday meeting, which is actually focused on HTTP semantic conventions, that there are certain areas where messaging and HTTP overlap, or which are very general problems that we need to solve.
C
Problems that are not just a problem for HTTP and messaging, but for many other telemetry areas. One example would be: how are we modeling retries in the span structure? Those problems we want to discuss, and find solutions or conventions that are independent from HTTP. So those will be discussions that are independent of, or only very loosely coupled to, HTTP and messaging; we then come up with some general guidance there, and we will use this in the messaging and HTTP semantic conventions.
C
So there will be a bit of a split here in terms of the topics. Just one proposal here from my side, meeting-wise: I think it definitely makes sense to keep a messaging-related discussion stream going here, and what I would propose myself is that, if we see, okay, we also need...
C
We only have these two meetings, and if we see we need time to discuss these more general topics, then maybe we organize it in a way that we always devote at least the first part of this meeting to messaging-related topics, and then, if needed, switch in the latter half to the other, more cross-cutting topics. That way, people who are just interested in discussing messaging here, and are not really observability nerds who want to go into the other topics, are covered.
F
I actually think it would be good to get input from, to see, prototypes for both some messaging instrumentation and some HTTP instrumentation that tackle some of this stuff, before we go signing off on anything being stable or whatever. I think it would just be good to recognize that we're missing guidelines around some fundamental stuff, and we're missing those guidelines because we actually haven't really prototyped these things very far.
F
So yeah, for clarity: one issue is granularity. When we have additional levels of detail, like physical connections and stuff like that, how do we model that? Another, that I think is very relevant for this group, is linking between session-level spans and transaction-level spans. That seems like something that comes up in messaging a lot.
F
You know, where you might want to model something that's doing batch processing, or a longer-term connection of some kind, and then you've got every message having its own trace. How do you connect the two of them? We know we want to use links to build out these graphs, but...
F
What matters is that, at the end of the day, we should deliver a data model with this as well, some rough approximation of: if someone is going to build a back end that implements OTLP data, including links, what would a reasonable expectation be for modeling links in, say, a graph database, or a tool like Cassandra, or something like that?
C
Yes, definitely. I just wanted to let people know that there will be general discussions here as well, to keep us in sync with others and to cover those general topics. But I definitely agree that we should keep this messaging-related, focused discussion going, because I think we're making pretty good progress, and also building some common understanding and knowledge in our community here, and I think that's extremely important for us to get this to a stable state.
D
Thank you guys. So maybe we can use both suggestions: maybe we can use half of this meeting, and half of the related HTTP meeting, for messaging- and HTTP-related discussions, and the other half, at least for the next several occurrences, to discuss these fundamental principles that it looks like we are missing, just to get clarity on them and apply them to both HTTP and messaging and other semantic conventions. So we can make progress in several directions simultaneously.
C
Okay, so for the agenda: what I put here are some open PR comments that I think can raise those questions, and I just wanted to get some opinions from the group. It's mostly about terminology, and the first one I can jump right into here.
C
It's about how we define, terminology-wise, what a message is, and Ken here is suggesting to use the word envelope instead of wrapper, and payload instead of data, because what we're saying here is: "a message is a transport wrapper for the transmission of information between two or more parties". I think that's actually a definition that I took from the first version of the semantic conventions. It says information is a combination of data and metadata, and yeah, I don't have a strong opinion here. I actually wonder if Clemens has thoughts on this, because I think, or I just assumed, there have been lots of discussions about this in the CloudEvents spec.
A
Yeah, so what is the message? The message is all of it: it's the envelope and everything that's inside of it. Whether it's envelope or wrapper doesn't matter, and whether it's payload or data: in CloudEvents we chose data instead of payload, and we call the metadata "context attributes". Then in AMQP we call them "properties", and I think in MQTT they're called "headers". So yeah, there are varying opinions about how that lands, and I usually stay out of those naming discussions. Whether it's payload or data doesn't make much of a difference, and whether it's envelope or wrapper also doesn't make much of a difference; the most important thing is that people understand the spec.
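The structure described above (an envelope of metadata plus a payload, with each protocol naming them differently) can be sketched as a minimal data model. This is just an illustration of the terminology; the class and field names are not from any spec:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    """A message is all of it: the envelope (metadata) plus the payload.

    "Envelope" vs. "wrapper" and "payload" vs. "data" are naming choices;
    CloudEvents calls the metadata "context attributes", AMQP "properties",
    MQTT "headers" -- the structure is the same either way.
    """
    envelope: dict = field(default_factory=dict)  # metadata / context attributes
    payload: bytes = b""                          # data / body

msg = Message(envelope={"content-type": "application/json"},
              payload=b'{"id": 1}')
```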
C
Okay, agreed. I also don't have a strong opinion here, so if nobody else has a strong opinion, I'm fine going with Ken's suggestions, because it seems he has a preference for envelope and payload. If that's fine with everybody, I will change that.
G
Well, I think, also with that particular statement, I don't know if I had another comment on this, I can't remember off the top of my head, it's the whole concept of a transport wrapper, because a message can be a message that isn't going over a transport. So it's not just a wrapper for a transport; it's a wrapper for the message itself, if that makes sense.
G
Well, for example, with Quarkus we implement the reactive messaging spec from MicroProfile, which defines a message structure that can be used to send messages to Kafka, AMQP or other things, or it can be used to pass messages in memory between different parts of the application within the same process. So you're still creating the message with the metadata and the actual body, but it's actually not being sent over a transport mechanism.
G
I guess in theory, yes. Whenever I see "transport", though, I think of something more tangible, like HTTP or Kafka or AMQP, or anything like that. So I think that's where I was coming from with that, but I mean, I'm open to whatever.
A
The point of the message is typically that it's a structure, we can expand on this, but it's typically a structure that holds data that's serialized and doesn't keep references. So if you have a structure that has references inside of it into the same memory space, then yeah, that's probably not a message.
C
Sounds good. I will make the changes proposed, using envelope and payload here, but I think we will keep the transport wrapper, or then transport envelope, reference here. And then there is the second question, also brought up by Ken, which is actually a good one.
C
I mean, the whole spec is called "messaging semantic conventions", but if we are really exact about the terminology, we will also support, at least in my understanding, and that's what I want to confirm with everybody here, we will also support eventing systems with these semantic conventions.
C
So the name, then, is a bit unfortunate, because there's this distinction between messaging and eventing, and the name suggests that we only support one of those. But yeah, Ken asks, if we support both of those, to point this out, and I will add this and point out that we are also supporting event streaming and event-driven architectures. I'm not sure if it's worth changing the name "messaging semantic conventions"; I mean, I think it's a bit unfortunate because it can give someone the wrong impression.
G
Yeah, I wouldn't go that far. I think if we pick another name, we're just going to have as many problems trying to get something that's clearly distinguishable; I think "messaging" is fine. It was more that, if this is for more than just messaging applications, I think we just need to call that out to make it clear. That's all, yeah.
C
Awesome, then we have those open points solved, and we can actually switch over to starting some discussions about the scenarios.
C
Okay, I mean, we talked about the overall stages that we laid out here, and we pretty much agreed. There were some discussions about the settlement stage; I think Amir brought this up, and it was related to certain fire-and-forget scenarios where there is no settlement and no settlement stage, and I just added a clause here for this: in exceptional cases the settlement stage does not exist.
C
I think there was this example that Amir brought up, with the MQTT quality-of-service zero, where the message is just fired and there is no acknowledgment whatsoever.
C
So yeah, that's added. But apart from that, I think we agreed on those five stages that we worked out here, and what is done in this document is based on those five stages: there are then those two main scenarios that we worked out together with Clemens, and basically the question is whether we can subsume all the scenarios that we want to cover with this spec under those two main scenarios, or if there are certain other things that we want to cover that don't fit in there.
C
Just to give a quick overview: the first scenario we called "individual settlement", and the distinctive feature here is that each single message needs to be settled individually. So the message flow for each single message is independent: each message is delivered, consumed and settled as its own small entity.
C
So you see here, for example, that there is not really a message stream, as you will see in the other scenario; you have individual messages, and you can settle particular messages that are in this queue, without any other constraints. To make this a bit clearer:
C
Let me contrast this with what we here called "checkpoint-based settlement". In this scenario, messages are processed as a stream, and settling does not happen at the individual message level; settling happens at certain checkpoints.
C
A checkpoint, as I also tried to visualize here, points to a certain position in the stream. In the stream, you see, there are messages, the messages are ordered, and the checkpoint points to a certain position: every message that is before this checkpoint is considered settled, and every message that is past this checkpoint is considered not settled. So you see the difference here: you cannot, for example, just settle an individual message randomly in the stream.
C
You just have the checkpoint, and you basically divide the stream and say: okay, everything before the checkpoint is settled, everything after the checkpoint is not settled. You cannot achieve something like here, where you just settle individual messages and have gaps of unsettled messages in between. To give some examples: checkpoint-based settlement is something that, for example, Kafka does, and individual settlement is something that RabbitMQ does.
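The two settlement models contrasted above can be sketched in a few lines of code. This is a conceptual illustration only, not any broker's real API; the names (`ack`, `commit`, `unsettled`) are made up for the sketch:

```python
class IndividualSettlement:
    """RabbitMQ-style: each message is settled (acked) on its own, so
    settled and unsettled messages can be interleaved with gaps."""
    def __init__(self, message_ids):
        self.unsettled = set(message_ids)

    def ack(self, message_id):
        # Settle exactly one message, independent of all the others.
        self.unsettled.discard(message_id)


class CheckpointSettlement:
    """Kafka-style: one checkpoint (offset) divides the ordered stream;
    everything before it is settled, everything at or after it is not."""
    def __init__(self):
        self.checkpoint = 0  # position in the stream

    def commit(self, offset):
        # Move the checkpoint; no per-message bookkeeping exists.
        self.checkpoint = offset

    def is_settled(self, position):
        return position < self.checkpoint


q = IndividualSettlement([1, 2, 3, 4])
q.ack(3)  # settle one message in the middle -- gaps are fine

s = CheckpointSettlement()
s.commit(2)  # messages 0 and 1 settled; 2 and 3 not -- no gaps possible
```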
A
There's one further aspect which might matter: typically, when individual messages are settled, that settlement is done via bookkeeping on the server, while checkpoint-based settlement is typically done via bookkeeping on the client, while the server may help with storing those checkpoints.
A
The decision on which checkpoint is actually being set is usually made by the client. So, for instance, Kafka has a place where you can store the checkpoints, but it doesn't get in the way of the client deciding what gets settled and what the checkpoint is. So you can scroll forward and backward on the stream without the broker being affected, or the broker changing the delivery state of the messages, which is different in the prior model.
A
Because the client is concerned with the offsets, I think it's interesting as diagnostic information.
G
That's true, yes. I know, for the use of Kafka, I think I added to the existing conventions, offset and partition as span attributes on messages that were being processed on the consumer side.
C
I think that would be an important question to answer here, if not now, then later. So we can then basically go ahead with those two scenarios in mind.
C
That sounds good. So then I will just make the amendment here that Clemens proposed, and I think the PR will anyway still be open for some more time, so if anything comes to your mind, just comment on the PR. But then we will leave the scenarios for now as they are.
C
We can actually move on to the open questions in this document. I called them open questions, but basically these are things that we might want to consider out of scope for a first 1.0 version of those messaging semantic conventions.
C
This does not mean that those things will never make it into the messaging semantic conventions; it's just trying to reduce the scope and say: okay, that's not going to be in the first stable version. But the first two for sure, sampling and instrumenting intermediaries, are definitely something that we want to have and support.
C
It makes sense for us to reduce the scope, because we want to ship something that's stable at some point not too far in the future, and it makes sense to push this out a bit, to further, maybe 1.1 or 2.0, versions of those semantic conventions. Nevertheless, we should be aware of those things, because whatever we come up with for 1.0 should be extensible in a way that lets us easily add those things without breaking compatibility, if possible.
C
Let's just try to go through those points in the order I mentioned them here. The first one is sampling.
C
This is somehow related to a comment that Liudmila made. Unfortunately she's not here today, but she made a comment that we should consider the volume of data, or the throughput, of those measured messaging systems for many of the conventions that we come up with, because many systems have lots of throughput and create very big batches.
C
And then we have a trace tree with thousands of links in it, because a very big batch was fetched, and we need to consider this in our visualization solution. That somehow touches on the sampling problem here, mostly in terms of the throughput or the volume that we're going to process. To elaborate a bit on this:
C
OpenTelemetry currently has certain samplers that are supported. The most prominent is the probability sampler, where you say: okay, I just want to sample 10 or 5 percent of all the spans that are created, and only those ten or five percent of spans are sent to the back end. And there is another sampler that's relevant to this discussion: the parent-based sampler. Basically, the parent-based sampler says, okay:
C
For every span, in the sampling decision: if the parent is sampled, then the child is also sampled, and if the parent is not sampled, then another delegate sampler is invoked, and this could, for example, be this probability sampler.
C
The idea behind the parent-based sampler is that we achieve sampling of complete traces, and not just individual spans. So the deal is that there's a root span with no parent; here, for example, we have the probability sampler, and then all children of this root span get the same sampling decision as the parent, because of the parent-based sampler. In this way we have the switch where we say: okay, either the whole trace is sampled or the whole trace is not sampled.
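The parent-based rule described above can be sketched as plain decision functions. This mirrors the spirit of OpenTelemetry's parent-based and probability (trace-id ratio) samplers, but it is a conceptual sketch, not the SDK implementation:

```python
import random

def probability_sampler(ratio):
    """Root-level decision: sample roughly `ratio` of new traces."""
    def decide(parent_sampled=None):
        return random.random() < ratio
    return decide

def parent_based(delegate):
    """If a parent exists, inherit its decision; otherwise (root span),
    delegate to another sampler, e.g. the probability sampler."""
    def decide(parent_sampled=None):
        if parent_sampled is not None:
            return parent_sampled
        return delegate(parent_sampled)
    return decide

sampler = parent_based(probability_sampler(0.10))
root_decision = sampler()                               # ~10% of traces sampled
child_decision = sampler(parent_sampled=root_decision)  # always matches parent
```

The effect is exactly the "whole trace or nothing" switch: the only real decision happens at the root, and every descendant copies it.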
C
For messaging systems, this most likely gets more complex, because we're not just dealing with parent relationships, but also with link relationships.
C
There is nothing like a link-based sampler yet. I mean, I worked on some scenarios where we came up with something like a link-based sampler, but there is nothing in the community that exists yet in that direction. So it will actually be a challenge for messaging systems to have this feature that complete traces are sampled; complete traces also in the sense that, with messaging systems, we will not just deal with one trace: when a message flows through, there will actually be several traces.
C
There will most likely be a trace on the producer side, and there will be a different trace on the consumer side, and it's still open what we actually want to achieve in this scenario with sampling. We might want to say: okay, when the producer is sampled, we also want to sample the consumer trace, so that we keep both together, and sampling helps us to keep all the traces that are related to a single message.
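As noted above, no link-based sampler exists in OpenTelemetry today; the following is a purely hypothetical sketch of the idea being discussed: when the consumer starts its own trace, sample it if any linked (producer) span context carries the sampled flag, so that the producer and consumer traces stay together.

```python
def link_based_sampler(linked_sampled_flags, delegate_decision=False):
    """Hypothetical link-based sampling rule -- not an existing OTel sampler.

    linked_sampled_flags: the 'sampled' flags carried by the span contexts
    referenced in the new root span's links (e.g. the producer span linked
    from the consumer span). Falls back to a delegate decision (e.g. a
    probability sampler's result) when no linked context was sampled.
    """
    if any(linked_sampled_flags):
        return True
    return delegate_decision

# Producer trace was sampled -> the consumer trace is sampled too, keeping
# all traces related to this message together:
consumer_sampled = link_based_sampler(linked_sampled_flags=[True])
```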
C
That might also need some discussions, not only in this group, but also in the sampling SIG that exists. And that is something that I just suggest here to leave out for 1.0, because links are not a very heavily used concept yet; actually, they are used very rarely, and many back ends don't even support them.
G
Yeah, I would tend to agree that it's probably something we need to put off, but I would just caveat that with a bit of: it's somewhat of a chicken-and-egg problem, in that we don't want to wait until links are in the back ends and then find that what they've done doesn't work with how we envision things should be linked together.
F
I would definitely suggest it's a now thing. I think for some stuff, like sampling, it might be totally unrealistic; the answer might just be "too bad", or "come up with an advanced...":
F
"...tail-based sampling algorithm, it's the only realistic way to do it". But the general idea of how we expect people to implement this stuff does matter, right? Like, one question really is: how do we control the size of these graphs, and what's the expectation? How big can a graph get before it's unreasonable for someone to implement it in...
F
...an ordinary graph database, or in some somewhat straightforward way in something like Cassandra? Because there are two ways to do it, right: either it's implemented as a graph in a graph database where everything is properly indexed, so that you can pull the whole graph, or some chunk of it, in one call; or you're left with the implementation where you fetch one trace, then you go find all its links, then you go fetch those, and then you go fetch theirs, and I think everyone can see the serialization issue with that.
F
I mean, that's just a classic performance issue, but if the graphs we're making are unknowable, or potentially limitless, or something like that, then it's a problem. So I think we should sort that part out. And to be clear, this is a place where, I know for a fact, at Google and other places, they did run into problems with this stuff trying to model certain things; it's surprisingly easy to generate a graph that just links everything together.
F
I don't know if we would hit that with messaging, but I know there were certain attempts at using links where people kind of had to back off from it, or get really tricky about it. Maybe we can get someone from Google, or from another place, Google is maybe really the best place, since they've used this stuff in the past, to get some feedback into this group about how they thought about modeling it over there.
C
Well, that definitely makes sense, and it was useful to hear what Ken said. Just to clarify how I laid it out before: I think we definitely need to worry about how we create links; that will be, I think, a central part of the conventions we come up with, and we also need to think about the size of those trace trees that we're creating here.
C
The next point that is listed as an open question here is instrumenting intermediaries.
C
There is definitely value in this. There's also, I mean, one of the comments that Ken made, and I put a link there: Kafka, for example, has a proposal up for instrumenting Kafka, not with traces but with metrics.
C
Just a few words about what is meant by this. The semantic conventions as we laid them out above, and that's the first unstable or experimental version that we have, basically just tell you how to instrument your consumer and your producer; the broker itself, or whatever is in between, we call the intermediary.
C
Here it's just a black box, and it just forwards the context, but there's nothing else. But there are definitely use cases for looking into this black box and getting insight into your, I don't know, event hub, or into your Kafka broker or your RabbitMQ instance. But what kind of conventions or guidance we would give for those cases, and what it would look like:
C
I think that we should also consider that out of scope for 1.0, and just basically focus on the consumer and producer side. Maybe that could then, at some point later, be a different set of semantic conventions for messaging brokers.
F
I might suggest, for intermediaries, that's where we would want to get the companies that actually produce these products involved, right? Because, realistically, it doesn't really matter what we say, right? Because that's not instrumentation we'll be writing; that's instrumentation they have to write.
C
Yes-
and
I
I
mean-
I
didn't
think
too
much
about
that-
but
also
think
it
might
be
harder
there
to
come
up
with
a
general
set
of
semantic
conventions,
because
I'm
not
I'm
actually
doubtful
if
all
those
different
intermediaries
internally
work
in
the
same
way
yeah.
So
they
might
work
like
in
very
different
ways
internally,
and
it's
actually
questionable
in
how
far
you
can
come
up
with
a
common
set
of
semantic
conventions
to
tell
them
how
to
instrument
their
services.
F
I suspect it's just that those services are just services, and what we should want, if people know people who work at these companies, is to maybe start getting them thinking about just instrumenting themselves with OpenTelemetry, right, and producing OTLP data, and, when they're doing normal things like making HTTP requests or whatever, using our conventions there for those things. But yeah, I don't think we expect the internals of RabbitMQ to have anything to do with the internals of Kafka.
C
And
the
next
point
here
is
metrics,
I
mean
that
might
actually
be
interesting
very
soon,
but
also
for
this
document
we're
working
on
here.
I
think
we
just
focus
on
on
tracing.
C
And
the
the
last
point
that
might
be,
I
think
ken
I'm
sure,
has
some
opinions
on
that.
The
last
point
here
is
that
the
I
put
as
out
of
scope
in
here
is
in
memory,
cues
or
channels
also.
C
I mean, you can consider a plain Go channel, which is just a programming-language construct, an in-memory channel, and that is, I think, definitely something that we don't want to cover with these semantic conventions. But then there are also things like Quarkus, which might also technically qualify as an in-memory queue or channel, but which basically exposes the interface of a full-fledged messaging system, and it might make sense to use those semantic conventions on those.
C
So
I
I
think
it
might
still
be
worth
thinking
more
about
this
point
and
maybe
finding
like
a
better
better
distinction
of
what
we
aim
to
support
in
this
context
and
of
what
we
don't
want
to
support
with
those
semantic
conventions.
G
Yeah, I'm just thinking, I guess one thing that could be said is that it's not general queues and channels; it's very specifically in-memory messaging channels that have the intention, either at the front or at the back, of connecting to external messaging systems, but where there are some internal hops within an application that usually use the same constructs and semantics to pass messages around internally.
G
But
yeah,
I
it
honestly
it's
not
critical
that
this
is
directly
included.
It's
just
something.
I'm
going
to
keep
in
the
back
of
my
head
to
make
sure
that
whatever
we
come
up
with
doesn't
cause
a
direct
problem
with
what
we
do
with
quarkx
for
in-memory,
stuff
and
I'll
certainly
try
and
call
that
out
when
I
notice
it.
C
Yes,
I
mean-
I
I
think,
in
my
opinion,
one
I
mean
out
of
the
emphasis
I
put
here
is
on
like
inter
application
systems,
and
I
I
just
know
that
some
back
ends.
C
They
build
some
kind
of
application
map
where
you
have
like
a
overview
of
replication
like
different
applications
or
different
kind
of
nodes
in
this
graph,
and
it
shows
how
kind
of
requests
flow
in
between
those
parts
and,
for
example,
those
that
creating
those
application
maps
like
the
semantic
conventions
here,
might
might
help
to
kind
of
determine
those
notes
there
and
show
that
okay
in
between
those
two
nodes,
there's
kind
of
a
broker
like
a
message
broker.
That
routes
messages
that,
for
example,
that
does
not
really
get
clear
just
via
the
context
propagation.
C
But
the
messaging
semantic
conventions
could
be
used
to
kind
of
make
clear.
Okay
between
those
two
applications
like
there's
a
messaging
communication
and
there's
a
broken
between,
and
I
think
those
are
scenarios
that
we
should
keep
in
mind.
G
It may just be something as simple as saying in the conventions that, if you have a system where, on a boundary, there is an external message broker feeding into your system, and you're doing some kind of in-memory thing, then we recommend the in-memory stuff utilizes the same conventions, so that when you have messages flowing from external to internal, everything has the same meaning and definition.
C
That makes sense, yeah. I will still spend some time and try to find a better way to put this here, so that, ideally, we make clear both that we intend to cover things like Quarkus, and that we, at the same time, don't intend to cover things like queues or channels as programming-language constructs.
C
Because
I
think
that's
definitely
something
we
agree
that
we
don't
want
to
cover
with
those
semantic
conventions
when
just
somebody
has
like
a
queue
in
the
application,
and
one
program
part
put
something
in
the
queue.
The
other
program
bar
to
read
something
out
of
this
simple
queue.
I
think
that's
not
the
use
case
for
those
semantic
conventions.
C
I
interpret
the
awkward
silence
as
no
currently,
nothing
is
coming
to
people's.
E
I was going to say something, yeah. It's not about... what I was going to ask, because I was not in the initial one, but I was trying to read the document: from the last meeting that I was in, we discussed a little bit about the CloudEvents things and how propagation works. Is this also going to be part of this initiative now, like defining how the propagation will work, or...?
C
Yes, I would definitely say so: context propagation in relation to messaging-system workflows is definitely in scope. I think that is one of the main and bigger questions we need to answer with those semantic conventions.
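To make concrete what "context propagation for messaging" means in practice: the producer injects the current trace context into the message's envelope, and the consumer extracts it to continue or link the trace. The sketch below uses the W3C Trace Context `traceparent` format (`version-traceid-spanid-flags`); the envelope key and helper names are illustrative, not from the conventions under discussion:

```python
def inject(envelope, trace_id, span_id, sampled):
    """Producer side: write the trace context into the message envelope."""
    flags = "01" if sampled else "00"
    envelope["traceparent"] = f"00-{trace_id}-{span_id}-{flags}"

def extract(envelope):
    """Consumer side: read the trace context back out of the envelope."""
    version, trace_id, span_id, flags = envelope["traceparent"].split("-")
    return trace_id, span_id, flags == "01"

envelope = {"content-type": "application/json"}
inject(envelope,
       trace_id="4bf92f3577b34da6a3ce929d0e0e4736",
       span_id="00f067aa0ba902b7",
       sampled=True)
trace_id, span_id, sampled = extract(envelope)
```

Where exactly this context lands (AMQP properties, Kafka headers, CloudEvents extension attributes) is precisely the kind of question the conventions would have to answer per protocol.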
C
It's
not
this
document,
just
kind
of
defines
the
scope,
but
the
solution
is
actually
that
those
tests
that
will
be
a
separate
document
all
right.
C
Like, define how we propagate and use context to model all of this.
C
Awesome. So I think we are through with discussing this document. I believe there are a few open comments, and I have a few notes here, so I will incorporate those into this document, and then, yeah, it will be open for quite some more time. So if you have other remarks, please add them.
C
Otherwise, I think we could try to maybe get some approvers on the document and maybe get it merged, but I think, yeah, we will wait at least until next week. I will put in all those changes that I've noted here, and then maybe next week we can see if there's anything still open, and then discuss how best to proceed with that document.
F
I think maybe my only question is: I'm curious if people are thinking about getting prototypes of these things done up; it would be great to see. You know, are we at the stage right now where we're ready to start diving into implementations?
C
So that's maybe what I would suggest: that we then, yeah, maybe start discussions focused on context propagation, and then we iterate on this discussion with the prototypes that we are doing.
C
Great, so if there is nothing more from anybody, then, Atrau, do you have anything?
E
I read a little bit more, so I think I understand better how CloudEvents works, so I opened a new one there, and I think you had some comments, and I think I agree with them.
E
I
just
have
I'm
just
having
a
hard
time
explaining
that
the
attributes
are
just
like
meant,
for
I
mean
you
can
add
to
the
let's
say:
parents
fan
or
you
can
create
a
newspan
with
this,
but
the
semantic
conventions
are
not
related
to
like
how
context
propagation
works
in
whatever
protocol
that
you're
sending
the
cloud
events
on
or
it's
just
it's
just
really
the
attributes
that
you
can
use
to
attach
to
anything.
It
could
be
to
a
spain
that
you
want
for
called
events
yeah,
but
yeah
and
so
yeah.
E
No, yeah, and then I replied to one of your questions because, as I said last time, I opened a PR for the Go SDK with an implementation that uses this distributed tracing extension. We will figure out if we will ever use it or not, but for now it's there, and there is an example there: a sample app, just a client-server app, and that one creates CloudEvents spans.
E
Because
one
of
your
comments
is,
you
said
that
maybe
creating
other
instance
might
not
be
useful
or
something
and
then
there's
this
same
web
there.
That
has
an
example
so
yeah,
so
I
think
it
depends
then
like
maybe
if
you're
just
sending
http,
because
the
sample
there
is
just
using
http
and
it's
using
the
auto
instrumentation.
E
So
when
I
send
an
event
to
a
to
a
server,
the
the
alt
instrumentation
creates
the
http
spans,
but
they
are
basically
like
they're,
not
useless,
but
there's
nothing
there
in
them
yeah
and
then,
as
part
of
that
I
before
sending
I
create
manually,
I
stand
with
those
attributes
and
then,
when
I
look
at
the
full
trace,
then
I
have
the
the
cloud
events
span
and
then
they
also
auto
generate
expand
from
the
http
output
instrumentation.
E
So I think it's somewhat valuable to create it, but I'm not sure if we should put this into the semantic conventions or leave it out completely, because, for example, there is a recommendation there for how to create this span, how to name this span, and things like that. So I'm not sure if we should leave that stuff out and just really keep the attribute table.
C
I think we have to end this meeting now, because at nine the next one starts on this channel. But let's put this on the agenda for next week, and maybe see how far we get with offline discussions in the PR, and if there are still things open, let's put it on the agenda for next week and discuss it at the beginning.