From YouTube: 2021-09-09 meeting
C
I put some things on the agenda here, so please add anything if there are things you want to discuss. I also carried over some open points from last week.
C
Those were both from Ted. So, Ted, maybe you can briefly talk about those points, and then we can discuss anything that's still open.
A
Yeah, I think we dug into the first point during last week's meeting, which is just that there's a tension between generic conventions versus system- or service-specific conventions.
A
So, you know: internal messaging or pub/sub systems, homebrewed systems, new systems, or systems we simply haven't covered. Generic conventions allow for those. The downside to generic conventions is that they may not exactly match how a specific service works, so you end up having to shoehorn meaning in and, to some degree, overload the generic conventions. This is fine so long as the semantic conventions write down how that should go. So one thing we haven't been doing in OpenTelemetry, which I think we need to go back and add, is that for every specific system we know about or have covered, we also include in its table all of the generic conventions and how they should be filled out for that specific system, so there's no ambiguity about the format and about what data should be in there.
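As a toy illustration of that idea (attribute names are modeled loosely on the messaging conventions of the time, but the exact set here is assumed, not quoted from the spec): a per-system table is unambiguous only when it spells out how every generic attribute gets populated for that system.

```python
# Sketch: a system-specific table that pins down the value of every
# *generic* messaging attribute for one concrete system. The attribute
# list below is an assumption for illustration, not the official set.
GENERIC_ATTRIBUTES = ["messaging.system", "messaging.destination", "messaging.operation"]

KAFKA_TABLE = {
    "messaging.system": "kafka",                   # fixed literal for this system
    "messaging.destination": "<topic name>",       # which concept maps to "destination"
    "messaging.operation": "send | receive | process",
}

def is_fully_specified(table: dict) -> bool:
    """A system table leaves no ambiguity only if every generic attribute appears."""
    return all(attr in table for attr in GENERIC_ATTRIBUTES)

print(is_fully_specified(KAFKA_TABLE))  # → True
```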
A
But in many cases you can see that things may not fit very well, especially when we're talking about databases or messaging systems, where we're not actually talking about a strict protocol.
A
There's another, subtler thing, for better or worse: when you use generic conventions you're not using the system's own terminology. Even when these systems all overlap in terms of what they do, they tend to use different words to describe the same things, and past a certain point I think that causes some strain on the operator who has to look into these systems, because they have to map terminology in their head for very obvious things.
A
I don't think it's a big deal, but those are the drawbacks of generic conventions. So that's just something to keep in mind when we overhaul these conventions: making sure we've got, say, three different systems that we're trying to apply them to, and asking these questions as we go. I don't know if anyone has any comments on that; it's just kind of a generic heads-up.
D
I do. I think you're right, though what we can use as generic conventions for messaging is fairly thin. Without looking at the messaging system at all, or knowing the schema, if you will, of the messages, about the only thing you have is the address that you're sending things to, because really everything else is different.
A
Yeah, and you could say it's similar for databases. We do have some things like db.statement, but that gets vague real quick once you start adding in various key-value databases and things like that.
D
I think there are actually three layers, not two. There are totally generic conventions that apply to every messaging system, and about the only thing you have there is the fact that you send a message, where you sent it to (there's always some notion of a URI that you can derive from somewhere), and that you send a data frame.
D
But beyond this, the data frames for all the messaging systems are different. If we completely back off and look at this generically, then there's a next layer where you have some notion of what those messages look like, but independent of implementation.
D
That's where you get into standards: at the higher level, something like CloudEvents, or, closer to the wire, protocols like AMQP and MQTT that are independent of implementation. And then you have the third layer, which is specific to the implementation, where you get into customizations or into proprietary protocols. By proprietary I mean protocols that only exist for a single project; I don't mean proprietary as in belonging to a company or strictly commercial. So the conventions of Pulsar, the conventions of Kafka, the conventions of RabbitMQ (because that's the only broker using a particular protocol) would be of that sort. So there's the super-generic layer; then the layer that's generic because it's a standard, independently implemented by multiple products; and then the layer implemented by just one product or project.
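One toy way to phrase those three layers (the entries are just the examples from this discussion, not an official taxonomy):

```python
# Sketch of the three convention layers described above:
# fully generic, multi-vendor standard, and single-project implementation.
LAYERS = {
    "generic": ["send/receive of a message", "destination address / URI"],
    "standard": ["CloudEvents", "AMQP 1.0", "MQTT"],          # independently implemented
    "implementation": ["Kafka", "Pulsar", "RabbitMQ 0.9.1"],  # one product or project
}

def layer_of(system: str) -> str:
    """Return which layer a named system or standard falls into in this sketch."""
    for layer, members in LAYERS.items():
        if system in members:
            return layer
    return "unknown"

print(layer_of("Kafka"))  # → implementation
```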
D
And to complement that: the same parallel holds when you look at databases. Yes, it's a database, but then it is immediately completely proprietary, like if you just look at the NoSQL interface.
A
Exactly. So yeah, I think... sorry, go ahead.
E
Thank you. I have a comment on this. We were going to start by talking about attributes, but even before that, regardless of what information we put on spans, there is a question of span kinds and the relationships between them. This is super generic, and maybe we can even start there, before we go into the attribute naming and the specifics of each system.
A
For people who aren't familiar with this layering issue (I think we brought this up at the last meeting, I can't quite remember, but it's something Lumila is doing a lot of research on right now): you often have layering in the sense that you might have a database like MongoDB, which is proprietary, but it's sitting on top of HTTP, using HTTP as its transport, and so you have some layering going on there.
A
In some cases it's not clear how much this affects database stuff and messaging stuff, but it certainly comes up with HTTP clients and web frameworks.
A
You end up with this question of how many spans there should be. How much should that information be compressed onto the same span? Do you end up with a database span, then a logical HTTP span, and then physical HTTP requests underneath it? To what degree should the HTTP span, say, get collapsed into the database span? Or, in other cases, you have multiple layers of middleware essentially trying to report the same information.
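One toy way to think about that collapse question (purely a sketch of the trade-off, not a mechanism from the spec): a lower layer skips emitting its own span when an enclosing span of the same protocol is already reporting the information.

```python
# Sketch: suppress a nested "physical" span when a logical enclosing span
# of the same protocol is already open, i.e. "collapse" the layering.
active_stack = []  # (name, protocol) pairs for currently open spans

def start_span(name: str, protocol: str, collapse: bool = True):
    """Return the span name, or None if it was collapsed into its parent."""
    if collapse and any(p == protocol for _, p in active_stack):
        return None  # an enclosing span of this protocol already reports it
    active_stack.append((name, protocol))
    return name

print(start_span("mongodb.find", "http"))  # → mongodb.find
print(start_span("POST /find", "http"))    # → None (collapsed into the DB span)
```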
E
Thank you. And here there is even more. For example, we have a producer span, which describes creation, and we should write its context into the message; then we have a transport span, which is probably not even necessarily related to the messaging system, but it should know about the messages it transports; and then we have a consumer span, which also has some links to what it consumes. These things are not specific to the system. In some cases we will have links.
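A minimal sketch of that producer-to-consumer relationship using links (toy classes, not the OpenTelemetry API; in real instrumentation the producer's span context would travel in the message headers):

```python
import itertools

_ids = itertools.count(1)

class Span:
    """Toy span: just a context id, a name, a kind, and optional links."""
    def __init__(self, name, kind, links=()):
        self.context = next(_ids)  # stand-in for a real span context
        self.name, self.kind, self.links = name, kind, list(links)

# The producer creates the message; its span context rides with the message.
producer = Span("publish orders", kind="PRODUCER")
message = {"payload": b"...", "trace_context": producer.context}

# The consumer may run in a *different* trace, so it points back via a link
# rather than a parent/child relationship.
consumer = Span("process orders", kind="CONSUMER",
                links=[message["trace_context"]])

print(consumer.links == [producer.context])  # → True
```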
C
I mean, I think those three layers are pretty helpful, because what Miller is talking about here is what we tried to work out last time in that little flow chart I had: a super-generic message flow that we can basically apply to all the scenarios we're going to support. But I think with the final semantic conventions we will be all over those three layers.
C
We will have attributes that are maybe somehow harmonized with the CloudEvents standard, and then we will have implementation-specific attributes, as we already have them in the current semantic conventions, like Kafka-specific attributes and RabbitMQ-specific attributes.
C
So, basically, I guess we will be all over those three layers that Clemens described.
D
One annotation on the standards-versus-non-standards point, specifically related to RabbitMQ: AMQP is the standard that was finalized by OASIS in 2012, and RabbitMQ is using a version called AMQP 0.9.1, which is a derivative of that, but it's purely proprietary to VMware.
D
So we need to make sure that we're aware of that, and split those up into things that are actually standardized and things that are just being used by a single product. And in the case of the thing that is called AMQP, there are two versions: one is proprietary, and the other one is an actual standard.
D
Well, that's why you should go back into history and see how those things worked. The OASIS AMQP project that has been shepherding AMQP decided, going from 0.9 to 1.0, to abandon the topology model that was embedded in 0.9. Then 0.9 effectively got split off and developed into a fork, which is not sanctioned at all by OASIS, and the OASIS 0.9 draft expired and no longer exists.
D
I'm the co-chair of the OASIS AMQP committee, so: the standard AMQP is 1.0, and 0.9 is no longer a valid draft. It expired; it's not a standard.
D
It is being used by RabbitMQ and it's called AMQP, but it's even doubtful that it's legitimately called AMQP, because there is a trademark related to AMQP; OASIS just has not enforced it. So if we look at things formally, it's simply a choice that the RabbitMQ project made, and that's fine, I'm not arguing with that, but it's not a standard protocol.
A
So I have one practical suggestion for how we tackle these questions, which is to come up with a list of messaging systems that we're going to use to prototype out all the work we're doing.
A
For example: two different flavors of AMQP, or another protocol that's somewhat standard but has some divergence or proprietary extensions; two very different kinds of messaging systems, such as AMQP and Kafka; and I'm not sure what the third might be. But I've definitely found this from experience doing a lot of spec work in general.
A
It's really helpful to be working with practical examples as we go forward; it's very easy to start going round and round as long as we remain abstract.
A
So that would just be a practical suggestion for going forward: as we propose these models, follow them up very quickly with "here's what it would actually look like for these different systems."
D
Out of principle, I'm trying to stay in the standards lane as well as I can, because that's also where we're on safe ground when it comes to legal matters. So if the AWS people show up and want to do the work related to the mappings for AWS Kinesis, AWS SQS, and AWS SNS, that's great.
D
That's great! It's just not clear how much we need to go and do the work of AWS if AWS doesn't do it. That's a principle we also have in CloudEvents: basically, you come and contribute, great, you're welcome, and we have a section for proprietary protocols where you can contribute and build your extensions and do all those things. But people need to go and do their own work.
A
Yeah. I'll try not to hog this whole meeting, but just to respond to Clemens: yes, there are so many messaging systems out there, and I don't know that we're required to try to check all of them. But I would say, certainly for what this group is doing, the goal is to get our model into some kind of shape where we feel like we could start declaring things stable.
A
In other words: we've vetted what we're doing with enough practical work that we feel confident calling the things we're shipping stable, and someone wandering in with a new system isn't going to upset the whole apple cart because we didn't vet our model across enough different kinds of systems.
A
So that's why I was saying: maybe something we want to do is just come up with a list of the specific systems we're going to use as our prototypes for this modeling.
A
For example, when we're defining API work in clients, we try to ensure that we have three different kinds of languages that we're modeling it in: Java, for example, or .NET; then a scripting language like Python or JavaScript; and then a less dynamic language like Go.
A
Having those three very different kinds of languages when we model an API ensures that the final model doesn't end up being, say, very Java-flavored and then not fitting Go very well. That's just a lesson we've learned from doing all of that work: having those different prototypes happening at the same time helps everyone evaluate.
A
So maybe we could just pick that list at some point soon: what a good example set of messaging systems would be. That would help us actually identify and work out all these different kinds of issues we're going to run into.
C
I have a remark here: if you look at this, it's actually already documented in the OTEP. The idea of the OTEP is that we have a list of scenarios, and for each scenario we have maybe one or more examples; there's already a very rudimentary list here. The other idea is also, as Ted said, to have some validation of the solution we come up with for each scenario.
C
So I think we will have to do some work there, but we will try. That's also why I think it's important, as you said, that we work out the list of examples we want to work on, so that this doesn't get out of hand with people asking for examples for lots of different systems in the end. I was thinking about maybe one or two examples per scenario that we come up with at the end.
D
So, for the respective messaging systems, do we think we need people who are their advocates, like people in this group who effectively represent them? I imagine that Ilya will want to represent RabbitMQ.
D
We certainly have three brokers that we will want to have represented. How do we want to see that?
A
I would say, hopefully, this group in general has some knowledge about how these systems work, and ideally we can move forward. But yes, it would be great to at minimum have our work reviewed by subject-matter experts that we could bring in. My hope is that Microsoft, AWS, and others are gathering up some of these subject-matter experts.
D
I mean, we are. It's just, for instance, I think it would be important for JMS to be reflected, like how you use JMS, because that's a common abstraction. I think that's an area where we can help. I don't know what Ken's background is from the Red Hat side, or Ilya's.
D
So that's something we should certainly be able to tackle. And in the industry's interest, looking at market shares, etc.:
D
It would be very valuable to have IBM MQ and AWS SQS reflected, but the question is who's going to do that work, because I certainly don't feel entitled, and I'm also only lukewarmly motivated, to help with the AWS SQS description or the IBM MQ description.
C
Offline contributions, and reviews of the documents we've come up with, because this meeting unfortunately doesn't work for them. We have really more people from Europe than from APAC time zones, so that's why we went with this model.
G
Okay, great. I was just going to say that if we enumerate the areas we need to cover, that should be sufficient, and then later we can find those SMEs to help. I don't think, Clemens, that you need to be doing every one of those enumerations; I think that's what the community is for. And if we don't have the coverage, then we know that, hey, we need to raise it and try to find somebody.
G
So let's just make sure we have each of those items enumerated of what we want in the spec, and then we can address how to actually get those prototyped and validated.
A
This is an advertisement for links, just so people know. Links are actually, I think, the specific issue here; everything else is "supported," quote-unquote. But we do have this issue: tracing systems have parent-child relationships, and then there's this concept of links, which is how you glue multiple traces together, and there's a chicken-and-egg issue going on with most existing tracing systems.
A
Most of them do not have this concept; it's somewhat new to OpenCensus and thus OpenTelemetry. So I'm really happy to hear that Azure models them, because we're going to run into the fact that most existing systems don't model links, and the reason they don't is that OpenTelemetry isn't really leveraging links very heavily, because we haven't done this work. So there's this chicken-and-egg bit.
A
Hopefully our model doesn't impact the implementation of these systems. And there's maybe another question of how much we have to care about how well our model works with existing systems as they're currently built, versus going back to existing systems and saying: well, you need to actually implement this protocol and this data structure and this model if you want this data to work in your system.
E
We've done this exercise with Azure Monitor, and I can share learnings during our next meetings when we get to the relevant things. But basically it matters what meaning you have for the producer span, how you separate producer from transport, how you attribute the consumer, stuff like that. So there might be some important hints that backend systems and visualization tools need, but let's get there.
C
Yeah, I think that's right, it's a chicken-and-egg problem, because, coming from a backend system myself, our point of view is basically: we only support this once there is a stable semantic convention, but then we can actually start supporting it in the backend; we will not implement something based on something experimental. So it is kind of a chicken-and-egg problem, and I think we will definitely need to make some compromises when validating this, but I think we can.
A
As long as we have at least one system. You know, the side effect is: okay, thanks, Azure, for taking the plunge and actually going ahead and modeling this stuff before it's done. You get some free advertising, perhaps, because we're going to need to use your traces.
A
I actually worry that links are going to get overloaded in some way, but again, we haven't gotten there yet.
E
And there is another part: topology maps. They don't really work on links, at least ours don't, and how to visualize that in the topology map is a question. Basically, the conventions were built on producer and consumer spans.
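A toy illustration of why topology maps lean on producer/consumer spans rather than links: a service-to-service edge can be derived from a parent/child pair of spans across services, whereas a link only references another span context (the data shapes below are assumptions for the sketch, not a real backend's model):

```python
# Each span: (span_id, parent_id, service, kind). A topology edge is drawn
# when a span's parent lives in a *different* service.
spans = [
    ("s1", None, "checkout", "PRODUCER"),
    ("s2", "s1", "broker",   "CONSUMER"),  # parent/child across services
]

by_id = {s[0]: s for s in spans}

def edges(spans):
    """Derive directed service-to-service edges from parent/child spans."""
    result = []
    for span_id, parent_id, service, kind in spans:
        parent = by_id.get(parent_id)
        if parent and parent[2] != service:
            result.append((parent[2], service))  # parent service -> this service
    return result

print(edges(spans))  # → [('checkout', 'broker')]
```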
B
Hey, I wanted to share the Java instrumentation perspective, with the different messaging instrumentations that we've implemented so far. We've certainly struggled to make our best interpretation of the current spec, but I'd also offer these as things that we would prototype and would be happy to update via recommendations from this group. That's sort of my interest and where I am here.
B
We have instrumentation for JMS, Kafka, RabbitMQ, RocketMQ, and SQS, as well as a bunch of higher-level messaging abstraction instrumentation like Spring Kafka, Spring JMS, and Spring Rabbit.
B
In those we've had to deal with the layering approach, for better or worse, and also Spring Integration, which is an even higher-level abstraction over any messaging system, and Apache Camel, which is sort of enterprise-service-bus-style messaging. And we've sprinkled in links here and there, again for better or worse. So anyway, I just wanted to offer up that we have a pretty rich set of existing instrumentation.
A
I think maybe a thing that would help is having code examples that can be run and generate traces that model these different scenarios. We'll probably end up needing those... well, we'll definitely end up needing those; we're going to want to look at these things in a tracing system. I don't know how much of that already exists.
B
Well, there are certainly integration tests for all of them that exercise all the different paths and generate telemetry.
B
For the integration tests, they go through an exporter, kind of a fake exporter, and we verify the telemetry that was exported. I don't think anybody has tried hooking that up to something real, but I don't know why you couldn't; we could probably fairly easily run the integration tests pointing to a real exporter and a real backend.
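The fake-exporter pattern described here looks roughly like this (a self-contained toy, not the actual Java test harness; real OpenTelemetry SDKs ship an in-memory exporter for exactly this purpose):

```python
# Toy version of "instrument, export to a fake exporter, assert on telemetry".
class InMemoryExporter:
    """Collects finished spans in memory instead of sending them anywhere."""
    def __init__(self):
        self.finished = []
    def export(self, span):
        self.finished.append(span)

def send_message(destination, exporter):
    """Pretend instrumentation: record a producer span for the send."""
    exporter.export({"name": f"{destination} send", "kind": "PRODUCER"})

exporter = InMemoryExporter()
send_message("orders", exporter)

# The "integration test" then verifies the exported telemetry.
assert exporter.finished[0]["name"] == "orders send"
assert exporter.finished[0]["kind"] == "PRODUCER"
print(len(exporter.finished))  # → 1
```

Pointing the same test at a real exporter would only mean swapping `InMemoryExporter` for one that ships data to a backend.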
E
I think the interesting question here is that it's not just the integration test, which uses the API in a certain way. When we've instrumented the Azure SDKs, our messaging systems, we have different sorts of APIs: send one message, batch writes, consume (the consumer that has a handler, or the consumer that just receives and processes). My feeling is that we need multiple examples that are not test code, that are expressive.
B
Yeah, that makes sense, and that resonates with our experience. Each one seems very different, with very subtle differences that make it feel like maybe the modeling is different, and I know Ken has run into that also doing some of the Quarkus work.
B
A specific scenario can be very specific, especially when you're talking about auto-instrumentation, which is sometimes more limited in what we can capture. Or the links: it's sometimes hard for us to tie things together when somebody writes some code to pull a message and then has a different block of code to process that message.
C
Yeah, I also think it goes a bit out of scope of the semantic conventions that we're working on. I think we, as this group as we are here now, cannot provide system-specific examples for all the different systems.
C
I think somebody else should provide those. What we can make sure of is that we have scenarios that map to the important systems out there that we want to cover, and that we have a proof of concept for those scenarios with certain systems that we define. But actually providing authoritative examples of how people should instrument when using different systems is, I think, someone else's job.
A
We should actually pick some subset of these systems and try to work through real-world scenarios before we declare things stable. Batch message processing is, I think, a great real-world example where I get concerned there's potentially not even one correct answer, because how you would want the trace structure represented kind of depends on how you're processing those messages.
A
We should try to verify our model in real-world scenarios before we sign off on it.
C
I was talking about examples in a different sense, and I'm actually not sure which one Miller was referring to, but I think we definitely should have examples where we verify what we come up with, real-world examples. But I think it's not our obligation here to provide real-world, authoritative examples to customers or users about how they should instrument their applications.
C
That's what I was talking about. I think the semantic conventions are not the place to have real-world examples for users, like a tutorial or a manual.
E
I agree that maybe the conventions do not necessarily include this, but we should try it out, right? For example, whenever I consume a batch, then as a user I want to extract a context from each message and create my own spans, if I want to trace each message individually. So I don't necessarily follow the semantic conventions, and we probably can never ask users to follow them strictly, or expect them to.
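That batch-consumer pattern, sketched with toy stand-ins for context propagation (in real OpenTelemetry this would be a propagator's `extract` on each message's headers plus a span link; everything below is illustrative, including the header name):

```python
# Toy batch consumer: pull the trace context out of each message's headers
# and start one per-message span that links back to its producer.
def extract_context(headers):
    return headers.get("traceparent")  # stand-in for a propagator's extract()

def process_batch(messages):
    spans = []
    for msg in messages:
        ctx = extract_context(msg["headers"])
        # one span per message, linked to the producer's context if present
        spans.append({"name": "process", "links": [ctx] if ctx else []})
    return spans

batch = [
    {"headers": {"traceparent": "00-aaa-bbb-01"}, "body": b"m1"},
    {"headers": {}, "body": b"m2"},  # uninstrumented producer: no link
]
result = process_batch(batch)
print([s["links"] for s in result])  # → [['00-aaa-bbb-01'], []]
```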
E
But for this very basic thing, how I use the APIs, we should have an idea, and maybe we'll publish it separately, but we want to make sure this works with the instrumentation that we create, the auto-magical instrumentation.
A
You know: trace structure, and modeling that trace structure. So yes, I don't think we have to get into every single scenario an end user might encounter, but I do think we need to define the suggested approach for modeling these different kinds of scenarios, because otherwise our users do just kind of hit a wall, and we can define them in a way that is suitable for going into the spec.
A
Maybe that's enough, and then people from this group or other people can take it a step beyond the spec and produce blog posts and documentation and other things that help define it more concretely for users of specific systems.
B
On the point of user confusion: wouldn't that sort of be the responsibility of the instrumentation libraries, say the Java or Python instrumentation libraries, to provide convenience things for those popular libraries? I was kind of thinking of Johannes's point, and it's where I was hesitating to go beyond offering up just the integration tests in the Java instrumentation: it's a lot of work to put together a sample app, and it's very specific to Java and Kafka, for example, or Python and RabbitMQ. So if the spec can focus on outlining the use cases, the different examples like batching and receiving and these different things, then we, the language-specific instrumentation groups, can focus on making sure that those examples cover all of our use cases.
A
If we want to rope other people in to help share the workload, I think that's great. My only concern is calling a spec stable when it hasn't been exercised well.
B
It would be exercised vigorously by the instrumentation groups, right? Like, all of the Java instrumentation: we would exercise all of that vigorously in our instrumentation.
C
Yeah, I agree with that, and I hope that also captures it. I mean, the end product of this work we're doing should be documented conventions, in the format of a specification more or less, but we will not provide any kind of tutorials that users can use.
C
It's great if, when that comes up, anybody in this group wants to publish those themselves, but there will be no kind of authoritative OpenTelemetry tutorials on how to instrument a messaging system. I think we will come up with a spec that is exercised by examples or prototypes that we do internally (and I'm pretty sure we will do them internally, and run them and see whether things are working), but we won't provide any authoritative examples or tutorials for users.
B
Yeah, definitely, and that's my whole interest here: to do that verification across all of those different Java instrumentations that we have.
C
That sounds awesome, by the way. That's why I added it immediately here, to have it in writing too.
C
Okay, so yeah, that's definitely what we'll do. Basically, one of the next steps would be this list of scenarios, with the numbers, that we can use to internally verify the solutions we come up with for the scenarios that we define. But those examples will be internal and will serve our internal verification.
C
I count the awkward silence as approval, so if there is nothing more on this point, maybe we can move on to the next one that's still open from last time.
A
Great, all right. This one, I think, will be very quick, but I just want to highlight that there are actually two kinds of instrumentation when we're talking about messaging systems. The kind we are discussing modeling, and perhaps the only kind this group needs to deal with, is instrumenting the message path.
A
So basically: looking at the system from the perspective of tracing individual messages and how they have moved through the system, and then, to whatever degree those traces allow, inferring things about how, say, the topology of that system is working. For example, this kind of instrumentation would be sufficient to help you identify a slow Kafka producer: oh, actually, there's a correlation between slow messages and, say, a particular node.
A
You'll be able to do that kind of correlating with these structures we're building. Then there is this other approach to instrumenting these messaging services, which is from the perspective of the services themselves, because your next question is going to be: well, why is that particular thing slow? To answer that question, you often need to pivot towards looking at the internal processes going on within a particular node in a service. So it's sort of an orthogonal view.
A
You have the messages going through the systems, and then you have the resources that are processing those messages, and those resources in turn, ideally, are generating traces and spans describing their own internal processes.
A
I have a feeling that all of this is very system-specific and often couldn't even be implemented without the people who are building those systems doing the work themselves, because perhaps no one else can do it.
A
You can't generally install third-party instrumentation into third-party services. But I do want to bring up that this is another form of instrumentation that's very useful, and it would be great if we could convince the authors of these systems to look at adding instrumentation and producing OTLP data from their services. That might be more of an advocacy job than something this group would do any semantic-convention modeling around, but I do want to bring it up.
A
That's another form of very useful telemetry that we want out of these systems, so if people have ideas on how we could accomplish that, I would love to hear them. I know we only have five minutes left in this meeting, though, and, Johannes, you probably want to get us going on next steps here, so maybe we could have that discussion offline.
C
Yes, just to add something to that point: I think what you're referring to is what I refer to in the OTEP as instrumenting intermediaries. That would be instrumenting brokers and getting insight into what's going on inside a particular broker.
C
So if there is any instrumented intermediary at some point, it should fit into this message-flow model that we are following here. But yeah, I put it out of scope of the semantic conventions, because I think it's going to be very system-specific in the end, and I'm sure we're going to struggle with coming up with models there. Maybe this can be part of a future 2.0 version.
A
Great. Once we've made significant progress, I will probably start poking people in this group to leverage your connections with people who actively work on these different systems, to encourage them to take a look at what we're doing and also to consider doing this other scope of work.
H
On that, just quickly: I know Kafka has a proposal to add their own metrics-collection mechanism over the coming, I guess, six to twelve months; it's on their roadmap. I think they're coming up with their own semantic conventions for how they're naming their metrics as well, if I recall.
C
Yes, that sounds good. I will try to reach out to them and see if we can get in touch with the right people. We have to wrap up for now, because I think this meeting room will be used by other people at 9am. So for next steps: I already got some great reviews on this OTEP that I'm working on, so it would be awesome to get more eyes on it, and maybe have people participating in the existing discussions.
C
Add your points there, or raise additional points that you think need to be raised for this OTEP, and then maybe we can resolve issues with offline discussions in the PR. Otherwise, let's convene again next week and discuss the open discussion points in person.
C
Cool, awesome. Thanks all for your participation, thanks to everybody who already reviewed the OTEP, and hopefully see you all next week. Thanks, all. Thanks.