From YouTube: 2021-09-23 meeting
B
Hi, good morning. Let's give it two or three more minutes for people to show up, and then let's get started.
B
I also see, wow, you are here. That's awesome! Oh yes, thanks for attending, because your item is the first one I put on the agenda. I will share my screen, just a moment.
B
Okay, so I put the item first here because there was some discussion going on about it, and also here in this meeting we have Clemens, who is something of a CloudEvents expert, and I think he can help us clear this up. So, do you want to talk briefly about the issue and the PR that you opened?
D
Yeah, sure. The idea with this was that I started adding OpenTelemetry instrumentation to an open source project, another CNCF project called Keptn. They use CloudEvents heavily to communicate between their services, and as part of that I had to dig into the CloudEvents Go SDK, where I saw that they offer a kind of extensibility point.
D
It's basically an interface that you can implement, and there are hooks for when the CloudEvents client sends an event, receives an event, and so on. So it's an extensibility point where you can create spans. In their Go SDK they have an implementation for OpenCensus, but OpenCensus has since been merged into OpenTelemetry.
D
So I thought of writing an implementation that works with OpenTelemetry. As part of that I did this internal thing for Keptn, and then I thought of contributing it upstream to CloudEvents, and I have this PR there now.
D
It's called an observability service, and it creates spans using OpenTelemetry; that's this PR. The thing is that when these spans are created by the CloudEvents client, the OpenCensus implementation already added some predefined attributes to the span. For example, it had the event type, the spec version of the event, and things like that, and I basically reused what they already have in their repo, in the SDK.
D
But I thought it would be a good idea to put these attributes into the OTel semantic conventions, and that's more or less how I started. Then, yeah, I was a little bit naive, I guess, because the way the Go SDK works, as far as I understand it, is that it doesn't only offer a way of, let's say, creating and serializing events into different protocols.
D
That's mainly the purpose of CloudEvents, as I see it, but they also offer wrappers around the transports and protocols. So you basically, for example, create an HTTP protocol, then you pass it to a client from the SDK, and then you just send the event. So potentially the same client can be used to send events across different protocols, like AMQP, NATS and so on.
D
So I thought that the other SDKs would be the same. In that case it would make sense, I guess, to have the attributes in the messaging semantic conventions, because it kind of acts as a messaging system: you just create a client and then you send and receive, as you would be doing in a messaging system. But the other SDKs only offer a way of, say, converting the events; for the actual sending or receiving of the events...
D
...you would interact with the library for the queue system that you're using, not with CloudEvents. So in those cases I don't think it actually makes sense, as Christian was discussing with me in the thread there. I'm not sure if that was clear, as it was a lot of information, but that was mainly the motivation, or the reason why I opened it.
B
Yeah, I mean, for us here: I saw in the PR that you basically proposed to add it to the messaging semantic conventions. That's what we're discussing here. And I think what is not clear, and what also becomes apparent from the comments showing up there, is how those messaging semantic conventions that we're working on should relate to CloudEvents semantic conventions, if we want to have them at all. I think maybe it makes sense to capture CloudEvents-specific attributes, but I think the people commenting on the PR are unsure too.
A
I'm kind of the architect of CloudEvents. We don't have these roles formally, but that's how it works out. And that's not the way we work in that project: we're a spec-first project. Really, we are more a spec project than we are a code project, and making changes in the Go SDK is not going to do anything for CloudEvents.
A
What's normative in the CloudEvents project is always the specs, and then the code follows what the specs say. The SDKs are at best proposals; they make some people's lives easier by providing a shared implementation of CloudEvents, but they are by no means the way something normative gets into the spec, and some people might use the SDKs and some might not. So that's generally not how the CloudEvents project functions.
A
So, with that much as setup: yes, you're making some investment in making changes for the Go SDK, but that's not going to help you with any other Go implementation of CloudEvents, and it's certainly not going to do anything for the rest of the SDKs, because they all go back to a spec.
A
So the question is: what are you trying to achieve? What is the thing that you want to bind? What is the rule that you have encoded here in your code, and what does it do? Because, ultimately, we have an extension spec in CloudEvents for distributed tracing that has been picking up the traceparent and tracestate extensions, and there is a very strong...
A
There is actually a pending open item on whether we want to keep that as an extension or toss it out, because there has been no engagement from the tracing community in CloudEvents so far. So that's a discussion that needs to be brought to the CloudEvents group as a discussion point, to argue effectively what the goal of that kind of integration is.
A
Because in the CloudEvents community we've had several issues, PRs, and debates about how much sense it makes to have even this extension, and there has also been quite a lot of confusion around it. That's mostly because there really hasn't been anybody with enough background in those discussions who could argue for it: we're mostly all messaging people, and we have a sense of the diagnostic stuff, but we really haven't been able to pin down why that stuff is needed.
A
So I'm assuming that the code you're adding is built on some principles and adds some functionality, and I think what we need is a good understanding of what you're doing and what is motivating it, so that we can take that to the CloudEvents group and say: here's how we think about this.
A
Okay, and that's similarly true for other standardization efforts, right. So there is a tie-in into AMQP.
A
I happen to be the co-chair of the AMQP technical committee, which is probably helpful, and so we can then think about what the right place is to anchor this in AMQP. What is the field where we want to have this? Do you put it into properties? Do you put it into message annotations? There's a good choice and a bad choice here.
A
Similarly, I think there have been debates about HTTP, and there's the HTTP semantics group, which is a different group, but I'm guessing there's a bunch of HTTP experts there who have an opinion about this. So there is desire in the CloudEvents group to have that sort of tracing story sorted.
D
Yeah, so I reached out. I think I posted in the general CloudEvents channel that I was looking into this observability thing for the Go SDK, because they have this interface and it was what the project I was instrumenting was using. And I had the same comments, let's say, that you have now: that how the observability works should be in the spec, and things like that.
D
I saw the ticket you mentioned as well, where someone asked to remove the distributed tracing extension, and I posted there that I was working on this for some reasons. And then I thought of upstreaming the Go SDK change, because I already had one for OpenCensus, so I already had the code done, let's say, and it worked great.
D
But I also feel that it should probably be something that specifies the way CloudEvents are instrumented, yeah, rather than just the SDK change.
A
But it's not going to help the effort, because you will have put your code there, but it will not show up in the specs. And from the standardization perspective, which is what we're looking after here and over there, the code doesn't matter, right?
C
Could I get some clarity? Just to be clear, and correct me if I'm wrong: CloudEvents is basically a data schema.
C
One that's messaging-system independent. So you want to have messaging, and that messaging might be going across various kinds of transports, but you want all the decoupled senders and receivers to be able to understand the structure of the data that's coming through those systems. Would that be a correct way of describing what CloudEvents is?
A
Yeah, that's a good approximation. Basically, the motivation is twofold: first, a definition of what an event is, so that everybody puts events into a system in the same form; and second, for that event to be mappable to and from various transports, so that there's no loss of information as you do this.
C
Yeah, okay. And then, as that applies to distributed tracing, there are two things. One is: I'm observing this Kafka client, right. I'm using Kafka, I have a Kafka client, and so I've got Kafka instrumentation doing what it does, like letting me know how that Kafka system's working. But that Kafka client is, say, embedded in a CloudEvents client, or being utilized by a CloudEvents system.
C
I want to enrich the spans that that Kafka client is creating, or wrap them in some way, with CloudEvent-specific attributes. So rather than just recording that I sent a Kafka message with whatever we defined to be the generic Kafka attributes, I want to add CloudEvents attributes describing it, right?
D
Yes. So my case was that there's this Keptn project I was describing. It sends events using the CloudEvents client to a NATS queue; it's actually more convoluted, because it sends to an HTTP service and this HTTP service then sends to the queue. But anyway, there is already auto-instrumentation for the HTTP request that is being sent.
D
But I also wanted to create a span to carry more things, and the one that is already in the Go SDK already has these, let's say, CloudEvent-specific attributes. So basically that's what I moved here, because I didn't want to duplicate things.
C
And so, yeah, you're proposing a specification here. So it's not just about writing client code; it's a specification for anyone who's implementing this, so that if they want to observe it in their distributed tracing system, this is the spec guide for how they should do it.
C
So that back ends, if they're CloudEvent-aware, will then be able to do something more useful with the data than just saying: this is...
D
And, for example, if I create a span manually, then I might want to add things like the spec version that I'm using, and so on. Or, in the case specific to the Go SDK, the spans are generated there, so the user doesn't have to create them, and then it's also using the same attributes.
A
One more thought on this, from the scenario perspective. What's important to understand about CloudEvents is that it's not only made for these trivial "I'm posting a message and I'm pulling it from a broker" scenarios; it's really also meant for the more complex scenarios where events get routed. So an event gets created somewhere, in a device or somewhere in a corner of the system, gets first posted via HTTP to an endpoint, and that endpoint turns around and now posts it off to Kafka.
A
In the end, by what I just said, there are five perspectives on the flow, right, because all those five pieces are very often under different ownership. There are the individual hops and brokers that I just talked about, and then there's the end-to-end flow, with the publisher and the ultimate consumer that sits behind that queue.
E
So does that kind of mean... I know one of the comments Christian had added to the pull request is that it almost sounds like any of this data related to CloudEvents is a better fit for something that's not tied to a transport like messaging or HTTP, and is more just data that's attached to a span, if a CloudEvent is going through the span, kind of thing?
A
Yeah, so think of the CloudEvent as being more abstract. In the work that we're doing here, the CloudEvent is probably the highest level of abstraction that might exist, because there's nothing that comes close to that level of abstraction right now in the rest of the messaging space. And then, at the level below that, there is a CloudEvent which is being sent over MQTT, or there's a CloudEvent...
F
No worries, thank you. I actually want to add to what you said. So basically, if we translate this to tracing language, this spec describes the spans about message creation and consumption, right. The transport could be irrelevant to this specification. And then, if we think about the whole thing: okay, let's say we have the span created, which is a CloudEvent span, which probably has a context.
F
But then, if we keep this mental model, it doesn't work with the rest of the messaging instrumentations. Because, let's say, if you use Kafka, then Kafka uses different metadata. It doesn't even know that the payload is a CloudEvent; it's just a byte array, perhaps, for the SDK, an arbitrary thing. It doesn't have to know about it, and doesn't need to serialize or deserialize it. And then, basically, we have a conflict between the CloudEvent context and the specific system's metadata where we transferred the context for the message.
B
I would love to add something to this. When looking at the CloudEvent attributes as given here, that seems pretty straightforward to me, and to me it actually seems not strictly messaging-related. It seems to me more like an add-on set of attributes, and you could use it either for messaging, or you could also use it for HTTP, I mean, when you send the CloudEvent over HTTP.
B
What I think actually makes things really complicated is that this can conflict either with what we define here for messaging, or with what is defined in HTTP. And I would also be curious, as was said, about the discussion, if I understood correctly, to actually remove this trace context propagation from the CloudEvents spec. I think that would actually make our work here easier, because then this would be centralized, but I'm not sure about other stakeholders and their take on this.
C
I wonder... well, it comes down to: what's the propagator, in OpenTelemetry terms, right? Usually... or is it not that simple?
F
It's the brokers, right. So let's say you send an HTTP request and then it's converted to something else; the content of the message is sent to, let's say, Kafka. And these brokers are not instrumented with OpenTelemetry, and probably they will never support fully-blown tracing, ever. And then it means that something needs to know about the standard and needs to convert between the different systems.
C
...the CloudEvents message, in which case, in OpenTelemetry terms, the thing that does that storing is called a propagator, right. So there's something that injects it on one side and something that extracts it on the other side. And I think maybe your concern is that... normally the low-level network clients do this, right. They just say: I have an HTTP client, and so it's saying: I'm going to inject the current trace context.
C
The context that's been built up gets injected into headers, and then something on the other end is going to pull that off. And likewise, if it's Kafka, then I'm going to inject this in whatever the agreed-upon (the OpenTelemetry-agreed-upon) place to put it is, so that it can get pulled off by the Kafka client.
F
You see, so when the CloudEvent is created, let's say we capture the context in which it was created inside the CloudEvent. Then let's say it's sent over AMQP or Kafka, and the library which does it adds another context to the message, because it has message metadata, because that's what SDKs do to instrument. They don't know it's a CloudEvent they carry, and then there is a second context, which is a transport context:
F
How this message is actually sent to the broker. And when you receive, you have the same story. You have the context where you received this message (that's not interesting); then you have a context from the transport, which has whatever the Kafka SDK put on the other side; and then, when you deserialize this CloudEvent in your application code (this is all on users, or maybe the SDK does it), you get a third context, which is the CloudEvent context.
C
But just to really clarify: the issue here is that we're talking about trace context, meaning trace ID and span ID (let's just keep it to that, plus whatever the trace state happens to be). Is the issue that these systems would want to have different trace and span IDs? Like, they don't...
C
...get the same ID either way, right? Regardless of what happens to be doing the encoding: something upstream created the trace and span ID, and that's the thing you're trying to put into the Kafka metadata or your HTTP header, so that something on the other end can become a child of, or link to, that trace.
F
I think I have a very simple answer, but we would need to go into a lot of details later to explain it. Okay: I argue that every message should have a unique context, to be identifiable. And then it means that for every message, each CloudEvent, we should have created a new context; otherwise we won't ever be able to identify it.
D
So there's an API that receives a request, and that is very often instrumented with the HTTP OpenTelemetry auto-instrumentation; that all works. And then, as part of some work, an event is sent, and this sending is also auto-instrumented because it's HTTP. So this event gets sent via HTTP, using CloudEvents, to another service, also over HTTP, and that's also auto-instrumented.
D
So up until then I have... the incoming span, the sending span; yeah, two spans, okay. And then this event gets received in this service, and this service is the one responsible for forwarding it to NATS, in this case, to the queue. And what I did, using this...
D
...this distributed tracing extension: that extension is just a struct with the traceparent and tracestate properties as raw strings. And what I do, before sending the event to the queue, is inject. So I created a carrier, like the ones in OpenTelemetry; I created a carrier that injects the current span...
D
...the current trace context, into the event, as this distributed tracing extension that is in the CloudEvents spec. And then this gets sent to the queue, and whichever service is picking this up is subscribed to it.
D
Then it receives this, and at this point there is no ambient context, right, because there's just no context. And then what I did in this case is: I receive the event, extract the trace context from the event, create a context in Go, and then I continue the trace. That's what I did, and I did all of this using this distributed tracing extension.
D
That is what the CloudEvents spec offers, and it worked for my case, but I don't know... And that is what is in the other PR, I think, not this one for the semantic conventions spec.
C
So yeah, and this is my concern. I totally agree with Ludmila that you want to have a span ID, and possibly even a separate trace ID (depending on how it all works), per message.
C
Every message needs to have one. But I think that's, in general, what we're trying to crack here in this messaging group: it's not just CloudEvents when they go over Kafka, it's when you send anything over Kafka or AMQP. And what's making our job hard is the fact that messaging systems don't have this easy one-to-one parent-child relationship the way simple transactions do, right. You get into all these other scenarios, like batch processing and other stuff, or you might have an entire chunk of messages that are actually part of a single transaction.
C
We even see this (sorry, just to finish) in a simple RPC scenario, like with gRPC, where in some cases the gRPC connection is short-lived and sends a series of messages, and that's one transaction; and then there are other cases where we have a gRPC connection...
C
...that's long-lived, and every event is actually a separate transaction. And so, on our end, we have to pick some default way of dealing with these situations, so that we can spec it out and build as much of this as we can into the instrumentation we're providing people. Because right now, what everyone has to do with messaging is kind of what's being done here: you just have to do it all by hand, right?
C
You have to get in there and write all of the injection and extraction stuff yourself, and do all the modeling yourself, and I think what we're trying to accomplish in this group is to move away from that in most scenarios, essentially. And the only thing I'll add is that I do have a concern that if CloudEvents also starts defining a place to serialize this data and store it, that is just going to add another layer of complexity to what's going on here.
C
So hopefully that makes sense. It would be great if we could do all the modeling over here in this group, and figure out where we're going to put this stuff, and have the CloudEvents group work with us to do it, but keep it here, in the tracing system.
A
I think the CloudEvents group would be very happy to do that. As I said, this extension, the distributed tracing extension that exists for CloudEvents, sits there as a semi-unwanted stepchild, because nobody wants to own it. So if you say "we want that", then go right ahead. I mean, that's where I think the interface lies, and that's expected.
A
The thing that forms the logical thread is the message; it's not the process. In all of the RPC-style scenarios, or call-style scenarios, you have a logical thread that's being formed by the call chain, and that is not true for messaging.
A
That's one thing, so that needs to be accommodated somewhat in the model. And then, while we're all talking about individual messages, what we will find eventually, as we go further down into the weeds, is that AMQP, as well as MQTT, and also Kafka to a degree, are all session-oriented protocols, and that's also something that you probably don't have yet: they're all connection-oriented.
A
When you come from a world of HTTP, and also RPC, the fact that you have an underlying TCP connection doesn't matter, because it's abstracted away. And yes, with HTTP/2 and HTTP/3 there's a notion of a session underneath, but since they're all building on top of HTTP, that's all just detail that isn't cared about. But with AMQP you build a link, and the link affects the way you transfer messages, and those links can also be routed.
A
So in an AMQP network you can effectively create a route that goes over multiple hops, which is just there as a transport channel for multiple messages, and that itself is something that you may want to carry some information about. With MQTT you also have sessions, and with Kafka you also have sessions of sorts, because what matters in many of these messaging scenarios is order.
C
Yeah, so in OpenTelemetry we have this basic concept called links, which we're going to utilize here to model all of this stuff. And on the one hand, I think it is very helpful to keep this split apart into two problems.
C
One is tracing from the perspective of the application logic, which might include a lot of links, for example for fan-in and fan-out. For example, if you have a bunch of things that are producing messages that are then processed as a batch on the other side, each one of those things producing messages is going to have a span and a trace ID, because they're all coming in from separate places.
C
When you do that batch processing, you now have one new trace with one span, because from that point on the transaction involves a bunch of these things, and that trace, when you start it, is going to be linked to all the other traces and spans that came in. So it's a simple model: it's not one big trace.
C
In other words, each hop is a trace, or each leg is a trace, and then we're building a graph of how these traces connect to each other. And likewise, you can do this again with sessions versus messages, right; it's a similar issue, in that you need one set of traces that is modeling the topology of AMQP or Kafka itself in some way, so that you could identify: okay, the slowness, or the issue I'm having, is actually something to do with the way...
C
...AMQP is routing data. It routes things a certain way and then my messages are slow; it's nothing to do with the application logic processing the messages or anything, it's just something to do with the routing, right. And that could also be modeled as links. The problem that comes in here is going to be:
C
Do we end up with just a giant graph that is unprocessable, because it starts to suck everything in? And does there need to be more data attached to these links to allow us to disambiguate some of this stuff? Not to derail it, but just to say: there's another, very similar issue with caching and cache invalidation. If I have a cache miss and have to go to the database, that slowed my transaction down, and I want it to be associated with the thing that invalidated the cache, for example.
C
That would give me a lot of helpful information, because I noticed the problem when I had a cache miss, and what I want to know is who invalidated it. Why am I seeing all these cache misses; what's causing them? And there have been attempts to use tracing to link those traces together, but you end up with a graph that's too big to realistically process.
C
So I think we're going to have to tackle this stuff in this group: how do we handle all of this? Every message is going to have its own span, and potentially even its own trace ID, and then we're going to have to model how we connect these graphs together.
C
I think that part will be somewhat straightforward from an academic point of view, because you just model what you're doing: I'm processing a bunch of messages in a batch, so this one transaction is associated with these 50 other transactions that happened earlier.
C
There will be some stuff in the weeds around: okay, how do we actually expect tracing analysis tools and back ends to process this data, and are we actually just creating an unrealistic graphing problem for people? And if so, how do we add more metadata that allows that graph to get chopped up into analyzable pieces? Anyway, that's a lot.
F
Just very shortly, given the format: do you feel it would be helpful if we showed what we've done in Azure Monitor with messaging systems, as a starting point? I can share the visualizations, some examples of how things are instrumented.
B
Yeah, I just would like... because we're coming up short on time, and obviously there is a hand up, I would just like to propose a path forward here. Because I think the overarching question we are asking here is: how do these messaging semantic conventions that we have here relate to CloudEvents? And obviously that's not something we can answer today.
B
I think we will also need to work together with the CloudEvents group to figure it out, especially when it comes to context propagation, because in the end we want to have some kind of harmony between those two projects, and no conflict.
B
So there is definitely future work to do. For the issue and the PR that have been opened, I would recommend (and I'm open to other suggestions here) putting the PR itself on hold, because it's clearly not yet clear how this should be solved; but we should definitely keep the issue open and, if possible, assign it to somebody from this group, because that is something we need to discuss and tackle in the near future.
B
And it would also be great if you could keep joining this group and help drive that forward.
D
Yeah, sure. I don't know if I can help too much, but yeah, I will definitely join more often. Yes.
D
Yeah, but just one thing that Christian said: that maybe these attributes could make sense as standalone, not as part of the messaging conventions. Now that I understand more about how everything works, that also makes sense to me. So, as the PR is now, I don't think it makes sense, because it's not a messaging system and so on. So I would also be fine with moving it to another area in the specification.
E
Yeah, there are certainly some concerns around messaging and CloudEvents that we need to figure out, with context propagation and other areas. But having the attributes of a CloudEvent in some common place, so that they can be attached to messaging or HTTP spans, or even other spans that aren't on any of those transports, if events are being sent around: I think that makes a lot of sense, and it's something that could be done now, without solving some of these other problems.
B
That is something we could do. Maybe Ted can tell me if that's okay, but then Trowel could maybe create another PR which is just a separate document for those CloudEvent attributes, okay, yes, the conventions that can then be flexibly attached to either HTTP or messaging. And it will be experimental anyway, and if we decide to change this, then we can still move it around later on, yeah.
C
However we end up modeling that messaging span or that messaging trace, just saying these are the attributes you should attach to it if you are a CloudEvents wrapper over this stuff, yeah, that seems like a good step forward for sure. Okay, and yeah.
D
Okay, then I'll just send a new PR with a new area for just these new CloudEvents attributes as a standalone thing.
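For a concrete picture of what a standalone convention could carry, the CloudEvents spec's required context attributes (id, source, specversion, type) are the natural candidates. A minimal sketch in plain Java; note the attribute names below are illustrative only, since the final convention names were exactly what was still being decided in this meeting:

```java
import java.util.Map;

// Illustrative only: the keys mirror CloudEvents' required context
// attributes; the exact semantic-convention names were still under
// discussion at the time of this meeting.
public class CloudEventsAttributesDemo {
    public static void main(String[] args) {
        Map<String, String> spanAttributes = Map.of(
            "cloudevents.event_id", "A234-1234-1234",
            "cloudevents.event_source", "https://example.com/cloudevents/pull",
            "cloudevents.event_spec_version", "1.0",
            "cloudevents.event_type", "com.example.object.deleted.v2"
        );
        // such a map could be attached to an HTTP span, a messaging span,
        // or any other span that carries a CloudEvent
        spanAttributes.forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```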
B
Yeah
and
let's
make
this
a
standalone
thing
and
then
later
I
think
in
the
discussions
when
we
come
to
discuss
context
propagation,
then
I
think
we
will
tackle
the
second
part
of
like
the
relationship
cloud
events,
which
is
the
context
propagation,
which
probably
is
not
that
straightforward
no
and
will
require
more
discussion.
B
Awesome. So I will also comment on those issues to capture our discussion here. I think we don't have time to cover the scenarios. Ted, thanks for taking notes by the way, you added this project structure section here. I think we have about four to five minutes, because you have to wrap up early. I'm not sure if that's enough time, but maybe you can just sketch it.
C
Totally, yeah, totally enough time. This is just some practicals for moving forward that we were discussing in the APAC working group, which is: we want to be able to come up with this work, get it added as OTEPs, and then add it to the spec.
C
But in order to actually follow our protocol, you need four official approvers to approve an OTEP before it goes in, and then two approvers again for that information to get added to the spec. And some things, like what Joe's doing, which is just adding some attributes for a domain experimentally, don't necessarily need an OTEP; that can just go straight into the spec. But again, who approves that?
C
And so we get hung up, because the approvers are the TC and project-wide approvers, and they get nervous about this stuff because they're not domain experts. So they're like: should I approve this or should I not? And then they have to kind of look to the community; we have to poke them anyway; it's annoying. So what I would like to do is have semantics-specific approvers.
C
So there's just, you know, a list of people who feel they are up to speed on OpenTelemetry and this stuff, are willing to be available to review these things, and also are willing to shoulder the responsibility of not just being like, yeah, that looks good, I like it, so I'm gonna hit approve, but more like: has the community agreed on this or not? If it looks like there's agreement, I'll approve it.
C
If it looks like there's not agreement, I will review it by pointing out where that disagreement is and saying: go resolve it. Even as an approver, you're going to have maybe a voice in that conversation as well.
C
So there is a bit of responsibility there about being willing to wear that approver hat, and so I just want to propose that to this group: one, whether everyone thinks that's a good idea, and two, you know, are there people who are willing to become approvers? And as a group, maybe the next step is: if there are people who want to be approvers, to let that be known, and then as a group we could maybe just decide.
C
Are we happy with this? Unfortunately, we don't have a big process for picking approvers, but if we can do that, the next step, what I can do, is rearrange the spec. Right now, all of the semantic conventions are kind of scattered and embedded in everything else, and what I'd like to do is extract them from there and put them in their own section called semantic conventions, and under each section we will have a convention by group or type, so HTTP, DB,
C
What
not
and
then
under
that
there
will
be
the
generic
you
know,
you'll
have
like
your
your
tracing,
your
metrics,
your
logging,
semantics
and
then
all
of
your
system,
specific
semantics
in
in
separate
files,
and
that
organization
will
allow
us
to
use
the
code
owner's
concept
in
github
to
easily
assign
ownership
by
group
or
or
even
get
into
like
some
specific
ownership
like
who
are
the
kafka
experts.
Something
wants
to
get
added
to
the
kafka
stuff.
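The GitHub mechanism being referred to is a `CODEOWNERS` file. A hypothetical sketch of what per-convention ownership could look like once the files are split out; the paths and team handles here are invented for illustration:

```
# Hypothetical CODEOWNERS entries for a restructured spec:
# each semantic-convention file gets its own set of domain reviewers.
/specification/semantic_conventions/http.md        @http-approvers
/specification/semantic_conventions/database.md    @db-approvers
/specification/semantic_conventions/messaging/     @messaging-approvers
# even system-specific ownership, e.g. the Kafka experts:
/specification/semantic_conventions/messaging/kafka.md  @kafka-experts
```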
E
I love the idea, because I know I've had problems in the past when adding stuff to the messaging conventions already there: a lot of the approvers don't have enough knowledge of the area, so you have to explain a lot of stuff. So that does add a lot of time to the approval process.
C
They're also really busy, yeah.
C
Yeah, we talked about this on Tuesday, but yes, I will create an actual spec issue for this. I think it should be a spec issue.
I
But we really gotta stop, because, you know.
I
Okay, does anybody know if we got a clean break between the messaging meeting and this meeting?
I
Yeah, I was on the messaging meeting and I had to interrupt them to say we have to leave the Zoom meeting, otherwise the recordings will get merged, which I guess isn't the worst thing. But I'm gonna ping back on the community issue I opened a few weeks ago: we need another Zoom channel so that there aren't back-to-back meetings on the same Zoom channel.
J
Where are you from? Portland, Oregon. Portland, so it's West Coast time zone? What time is it?
I
All right, well, let's jump in. Jason, we had such an unexpectedly packed meeting last week that we didn't get to this; I'm pretty sure you added it, yeah. I don't know how much it's worth taking up people's time to dwell on, but you know, we get the summary nightly.
I
That's
comparing
no
agents
to
agent
in
a
few
different
aspects
and
like
in
a
few
areas,
especially
around
cpu
and
iteration,
mean
those
are
like,
amazingly
low.
In
my
opinion,
the
waiter,
like
a
lot
of
people,
may
not
know
how
to
read
this
at
all,
because
it's
kind
of
cryptic,
but
that
iteration
mean,
is
how
long
it
takes
to
do
a
single
pass
through
the
k6
script.
I
The
request
mean
is
an
individual
http,
rest
request,
and
so,
with
the
latest
version,
it's
about
11
milliseconds
with
no
agent,
it's
like
basically
still
11,
milliseconds,
so
performance.
There
is
not
impacted
by
the
agent.
That's
a
fully
instrumented
spring
boot,
app
running
and
exporting
to
the
collector
so
like
in
a
few
aspects,
we're
doing
like
very,
very
good.
I
think.
I
So that's also the intent. The intent is not to be able to explain every subtlety and nuance; it's more to provide what a representative experience might look like for a user.
G
But
if
we
see
some
numbers,
which
probably
nonsense
that
undermine
the
our
trust
in
other
numbers,
totally.
I
Yeah, agreed, fully agree. Are you seeing my Excel? I am, and you've got two columns highlighted. Yeah, so this would be, like, requests: from a user perspective, I'd say this is probably the most important impact, to their requests. Yep, and just looking at it.
I
I was kind of curious about the variance over time; there's a lot of variance. The good thing is that, since it's running on the same, probably noisy, box, the with-and-without difference is not as noisy as the day-to-day variance, right.
I
Yeah, so this here would correspond to what, about 0.5, like 5% overhead? Which is, yeah. I'm a little bit hesitant to classify it as a percentage, because I'm not sure that it's not fixed. If you can convince me that it's percentage-based, then I'll listen. But I agree, it's like half a millisecond is what we're talking about.
I
Yeah, as a user, the app takes longer, right: we have that half a millisecond for the request span, but then also, for longer requests there are often a lot of database calls, say 10, 20, 100 database requests, so there's some overhead on there also, yep.
I
All right, well, thanks.
I
Yeah, thanks again. All right, transforming, okay, Laurie.
I
This is some amazing, more amazing, internal hackery. So, for background, for folks who don't know, this is a very esoteric JVM thing: bytecode instrumentation in the JVM does not get applied to lambda classes, which is weird, and there are reasons. John Watson, who's not here, has explained some of the background; I think he was aware of why that is.
I
The
case
was
because
the
the
the
lambda
when
they
introduced
lambdas,
they
didn't
want
people
depending
on
the
byte
code,
because
they
wanted
flexibility
to
change.
I
Change that in the future. But it causes a lot of problems for us in instrumentation, because our instrumentations don't apply to them, and so we've got a few different workarounds for that in different instrumentations. One of the bigger problems is that it has caused memory leaks for us in a couple of places where we haven't taken enough care.
I
So
lori's
got
some
amazing
instrumentation
that
hacks
into
the
internals
of
the
jdk
and
instruments,
those
lambda
byte
code-
and
I
both
love
it
and
nervous
about
it.
I
think
I
think
it's
we
can
address,
see
yeah
so
lori.
Do
you.
L
What I'm a little bit scared about with this is that it might actually break with native compilation. I have no idea how the native compiler will treat this. Maybe it has a different type; maybe it has some built-in idea of how lambdas should be compiled; maybe it just bypasses this code. I was hoping that Ben Evans had maybe experimented with it, because he has an application that's compiled natively.
I
So, while Ben, I think, is reconnecting: I'm not too concerned about that, Laurie, in terms of, we'll add it to the list of many things that are not working with native compilation and we'll have to figure that out. Ben, you're back.
K
The joys of a Linux laptop, there we go. Yeah, so I've had a tiny bit of time to look at this, not as much as I was hoping, but my current theory is, okay, I won't do too much of the background, but basically the invokedynamic opcode that gets placed at the call site has a bootstrap method which goes into the LambdaMetafactory, which dynamically spins up an inner class that implements the right things that you need.
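The mechanics described here can be observed from plain Java. A small sketch; note the `$$Lambda` naming is JDK-implementation behavior, not a specified guarantee:

```java
public class LambdaClassDemo {
    public static void main(String[] args) {
        // an anonymous inner class is an ordinary class file, visible to
        // ClassFileTransformers like any other class
        Runnable plain = new Runnable() { @Override public void run() {} };
        // a lambda compiles to an invokedynamic call site whose bootstrap
        // method asks the LambdaMetafactory to spin up a class at runtime;
        // that generated class never passes through bytecode instrumentation
        Runnable lambda = () -> {};
        System.out.println(plain.getClass().getName());   // e.g. LambdaClassDemo$1
        System.out.println(lambda.getClass().getName());  // e.g. LambdaClassDemo$$Lambda...
    }
}
```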
K
My understanding is that those invokedynamic sites are specially treated by the native compiler. It's not entirely clear, I didn't have enough time to go into this fully, but I don't believe that full invokedynamic sites are correctly supported by GraalVM yet. Lambdas, though, are obviously such an important special case, and I'm pretty sure this is.
K
This is actually special-cased, so it basically looks at the bootstrap method to see whether or not it's pointing at the lambda class, the LambdaMetafactory, and then modifies that code accordingly. So my thoughts are really twofold. Firstly, that means that what Laurie's done should work, I think. But more broadly, I know there has been some discussion about whether or not that inner-class metafactory is going to get replaced at some point, and I'm thinking in particular here about hidden classes.
K
And if that happens, then I think that what Laurie has done won't work if we had a hidden-class factory instead. And that would be true not only in the native-compilation case, but also in the dynamic VM case. So someone should probably go and check out what's happening with hidden classes and whether that work to replace the lambda generation is actually going to go ahead or not.
K
Did that go in in 17? I mean hidden classes only.
K
Yeah, I mean, the only other thing I wondered about, and I don't know whether you actually investigated this or not, is: can you just do something on the lambda body? So rather than actually targeting the transformed class directly and modifying that, I mean, what's wrong with just putting the instrumentation code directly into the body of the private static method that you get handed back?
I
And what you're saying is instrumenting call sites versus wrapping the method bodies themselves.
I
Yeah. I remember my experience with that, maybe coming from AspectJ: originally it was instrumenting call sites, but I was able to get a lot of performance in the instrumentation out of only targeting methods, because you can just look at those method signatures; you don't have to inspect the bodies of every piece of bytecode that comes through.
I
Okay, but that's great to hear, that this has worked well over time previously. Also because I do have a lot of confidence in our tests for the Java versions we run, and I'm not too worried if, say, a new Java release comes out that breaks it, because we'll catch that in kind of a similar way to when new instrumented libraries are released that break the instrumentation.
I
Oh, last question I had, Laurie: how does this compare to Byte Buddy? I haven't looked at Byte Buddy's lambda instrumentation strategy, have you? And how is this different?
L
I didn't look into it too deeply, but I think what they are doing is basically replacing the whole lambda metafactory, so they need to ship a completely custom class generation, I think. I have done all kinds of crazy hacks, but this is a bit too crazy for me. It seems to be really hard to get right.
I
That, I think, is good for me. The one thing that I might open an issue to consider, just because I think it might be worth it for all of our tests, is adding some old Java 8 version to our test matrix, because I know there have been some changes in some Java minor versions that have affected the Java agent in the past. So.
I
All right, if anyone has any other thoughts on this PR, please speak up here or on the PR. If not, what do we have? Oh, I haven't approved it yet, so.
I
But I will plan on merging tomorrow if nobody has more thoughts, because it definitely simplifies our life a lot; the lambda classes have been painful.
J
So, kind of a rookie question about that: basically, when did we find out that lambda classes cannot be instrumented?
I
Yeah, so we've known that for a long time, and the main, the original, place where it affects us is in instrumenting Runnables and Callables that get passed to executors, where we want to carry the context over to that Runnable on the other thread, and people pass lambdas a lot.
I
Now
those
don't
get
instrumented,
and
so
we
lose
context
so
to
work
around
that
we
had
prior
to
this
pr
instrumented
the
executor,
execute
methods
and
we
wrapped
those
lambdas
with
a
wrapper
class
which
does
get
instrumented
or
that
we
manually
instrument
in
order
to
work
around
that.
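The wrapping pattern described here can be sketched without any agent machinery. A minimal, dependency-free illustration where a `ThreadLocal` stands in for OpenTelemetry's `Context`; the names below are invented for the demo, not the agent's actual classes:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ExecutorWrapDemo {
    // stand-in for OpenTelemetry's Context.current()
    static final ThreadLocal<String> CONTEXT = ThreadLocal.withInitial(() -> "none");

    // the agent's workaround, in miniature: capture the context on the
    // submitting thread and restore it on the worker thread, by wrapping
    // the task in a type that the agent can see (unlike a raw lambda)
    static Runnable wrap(Runnable task) {
        String captured = CONTEXT.get();   // capture at submit time
        return () -> {
            String prev = CONTEXT.get();
            CONTEXT.set(captured);         // restore on the worker thread
            try {
                task.run();
            } finally {
                CONTEXT.set(prev);         // undo, so the pool thread stays clean
            }
        };
    }

    public static void main(String[] args) throws Exception {
        CONTEXT.set("trace-123");
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // without wrap(), the worker thread would print "none"
        pool.execute(wrap(() -> System.out.println(CONTEXT.get())));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```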
I
Sure, check out this link also, because this link is just Netty. The more recent problems it has caused for us have been in Netty, and actually, I don't know if all of these are related, but certainly Netty and lambda come up a lot. I know several of these are like this: this one definitely was a problem with lambdas, a memory leak; this was a problem caused by lambdas that this PR would have prevented. This was, yeah, we've had our share of problems.
J
Yeah, that reminds me: you have the loss of context in Kotlin, in coroutines, because you need to pass it, and it's actually running on the same thread local, and sometimes contexts can run over each other. So I guess it's not the same idea, but like.
I
Threading is also a recurring problem in this project, yep. And Kotlin, actually, is one of the nice ones, because they provide some nice hooks that we've been able to hook into; true of others or not. Actually, we're going to discuss Project Reactor context propagation in the office hours meeting tonight, and Project Reactor is very complicated. There's a good discussion over here in this issue, if anybody's interested. Can't wait for fibers.
I
Fibers will be like lightweight threading and allow stuff to run willy-nilly on any thread that you want. Well, I'm sure there will be frameworks that facilitate that, but it's a very lightweight threading model.
J
So, essentially, context switching, yeah, yeah, just like many semi-threads on one single actual thread, so it's kind of similar to Kotlin coroutines, if I got it correctly. Yep, exactly. Nice.
I
It's just going to be challenging to keep instrumentation context.
J
Yeah
yeah,
so
so
coughlin
actually
has
some.
They
have
some
yeah
some
some
way
to
deal
with
that,
and
also
you
kind
of
need
to
to
pass
the
the
context.
Each
time
you
you
context,
switch
between
those
according
so
it's
like
some.
We
had
some
issues
with
that.
M
Related
question:
is
there
a
way
we
can
suppress?
The
kotlin
is
a
delicate
flower
and
this
api
is
really
hard
to
use
every
time
we
compile
the
java
sdk
because,
like
I
just
think,
it's
hilarious,
but
every
time
I
try
to
compile
the
java
sdk
there's
a
message
about
kotlin's
co-routines,
where
it
says
like
hey.
This
is
a
delicate
api.
Please,
please
take
care
when
you
use
it.
I'm
like.
I
totally
understand
we
did
cover
teams
before
but
like
can
we
suppress
it
if
we
specifically
for
that
it?
K
So the context here, in case folks don't know, is that Kotlin coroutines sort of simulate threads, which would be the fibers, which don't really exist yet at the VM level. So it actually sets up this kind of state-machine thing, and that's what's causing the slightly scary-sounding warning message. It's one of those things where, you know, it should be fine, but we all know that there is a world of difference between "should" and "always would be".
M
Yeah, like, we suppress warnings in a bunch of other cases where, you know, we violate, say, Java type annotations because we know better in our code, or where we suppress generics warnings. Can we just suppress warnings in that particular code where we think we know better, or is it there for a reason, I guess?
G
You know, for new contributors it's important to report the issue, because that's almost impostor syndrome: I have this problem, probably everybody else already learned how to ignore it, I am the only one suffering. No, you are not the only one suffering; everybody else is suffering too. So please open an issue.
I
All right, let's see, the next item. We've been talking about switching up the, currently we have published two artifacts, javaagent-all and javaagent, and the -all one has all the instrumentations. Oh no, they both have all the instrumentations.
I
The -all one has all the exporters, OTLP, Jaeger, Zipkin, and the non-all one has no exporters. And so, for a few different reasons, but primarily that it's been sort of a source of confusion for users: it feels like our default should be the batteries-included one; the one that people can just get up and running with most easily feels like it should have the exporters in it. So we're thinking to switch the meaning and have this.
I
The javaagent artifact would have the exporters, and we'd introduce a slim artifact, which either has no exporters or, actually, now that we have the OTLP-over-HTTP exporter, it is actually very slim; well, it will be in 1.7. I just removed the TLS and Bouncy Castle dependency that it was pulling in.
I
On moving instrumentations into the contrib repo: we're thinking of breaking out about half or so of the instrumentations into contrib. The initial thought, and here's an initial list of things that I made a long time ago, so it definitely needs to be revisited, is about what we consider priority instrumentations that we would keep in this repo, and then we'd split the others out into contrib.
I
So we weren't sure; then, once we do that, we'd have a javaagent-contrib that has all of those also. But Anuraag made kind of a good point that, still, the Java agent is all about convenience and simplicity, and just having one primary artifact that everybody onboards to is really helpful.
I
So anyway, that's just some background, and if you have thoughts, we're kind of leaving this open for another week, just to sit on it and gather more input.
M
Like, when you dive into what people mean by jar size, they mean the cost of using the agent, basically; they're talking about total cost of ownership generally. If you look at jar size as just one of the dimensions of the total cost of ownership of running the agent, that's, yeah.
I
Compared to, if we only had the OTLP HTTP exporter, it would probably be around eight megs. True, but even those 25 million bytes is really not that big in 2021.
I
Are both HTTP? The proposal is that it would only have OTLP over HTTP, since that's only a few hundred kilobytes, versus the gRPC exporter, which pulls in Netty and ends up pulling in a lot of stuff, which maybe we could improve, though.
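As a reference point, the OTLP transport can be selected at agent startup. A sketch assuming the standard `otel.exporter.otlp.protocol` system property; verify the exact property and supported values against your agent version's configuration docs:

```shell
# run the agent with the lighter OTLP-over-HTTP transport instead of gRPC
java -javaagent:opentelemetry-javaagent.jar \
     -Dotel.exporter.otlp.protocol=http/protobuf \
     -jar app.jar
```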
I
Cool. This one was just reported yesterday, a really good find from a user, and I was going to propose that it seemed worthy of a patch release, even though it's not a regression, which has generally been our bar for releasing patches.
L
I have a few comments on this. I think the issue said that he was using 1.4.1.
L
We have run into the same issue with that, okay. And the other thing is that in our current implementation, basically any time you get the connection, it does the query to figure out some extra information.
I
Can you say that last part again, Laurie: you looked at a heap, and there were 500 megabytes of what kind of objects?
L
Like connection info, or whatever it creates for the connection.
I
This is what you were saying, that switching that to Caffeine would help?
L
Like
it
might
have
helped
for
the
out
of
memory
around
if
I
saw,
but
it's
not
related
to
this
case,
it's
just
an
observation
from
a
different.
I
Cool
yeah,
let
me
I
will
just
guess
with
honorable
in
the
office
hours
but
yeah
I
had
had
some
questions.
I've
noticed
I've
had
some
questions
about
how
the
eviction
was
working.
Also.
I
And
yeah,
I
was
kind
of
wondering.
Also
the
weak,
lock
free
with
the
it
was
seems
a
misnomer.
I
When
you
look
at
the
deadlock
and
it's
all
deadlocked
on
weak,
lock,
free,
it's
definitely
not
lock.
Free.
J
Hey, so just a small update about the log adapter PR: it was actually merged, into version 1.6 I think, and now my colleague is working on, I think he actually finished, implementing the logging log exporter, just a simple version that essentially converts log records to resource logs and then just logs them, which is a good start to see that things are working as expected.
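A minimal version of what's described, converting each record to a string and logging it, might look like the sketch below. `LogRecord` here is a stand-in type invented for the demo, not the SDK's actual class:

```java
import java.time.Instant;
import java.util.List;

// Hypothetical sketch of a "logging log exporter": it renders each log
// record as one line of text and writes it out, which is enough to verify
// end to end that records flow through the pipeline.
public class LoggingLogExporterDemo {
    record LogRecord(Instant timestamp, String severity, String body) {}

    static void export(List<LogRecord> batch) {
        for (LogRecord r : batch) {
            // one line per record: timestamp, severity, body
            System.out.println(r.timestamp() + " [" + r.severity() + "] " + r.body());
        }
    }

    public static void main(String[] args) {
        export(List.of(new LogRecord(Instant.EPOCH, "INFO", "hello logs")));
    }
}
```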
J
I will check with him what's going on with that, because he actually waited for the PR to be merged to be able to do that, to use my code. And actually, I'm switching jobs soon, so I will try to see. I would like to continue to contribute and stay in the meetings, and hopefully, first of all, I can continue to see what needs to be done, what other things in the logging area or other places need help.
J
So
I
guess,
as
you
talked
about
before-
and
I
see
it
shooting
here-
the
instrumenter
api
is
something
that
you,
you
probably
need
help
with.
So
I
think
yeah.
I
think
it
that's
a
difficult
place
to
help
and
can
learn
it
a
lot
and
yeah.
That's
that's
it.
In
general,.
J
At Google, right now. Oh, cool. So yeah, I'm joining the networking team in Google Cloud, yeah.
M
Which office is it, basically?
J
Yeah, amazing, I would love that. Okay, cool, yeah. So I will, obviously, I really want to continue working on OTel. I really love the project, and I'm gaining tons of experience with all of you guys.
I
I love it, it's such a small world. All right, yeah.
M
We'll chat offline, or ping me on Slack, yeah, okay, anyway, sure, cool. So, a quick metrics SDK update. TL;DR: there are two PRs out right now that finish the public-facing API. One is finishing off the exemplar configuration.
M
We might expose a little bit more configuration publicly for users around exemplars, but it finishes up the wiring. I have some end-to-end tests for it where I'm watching exemplars come through. There's a bit more overhead for synchronous instruments, but generally it gives us something we didn't have before, so, awesome. The other bit is multi-exporters; that one is still in draft.
M
My
goal
was
to
finish
it
this
week
and
I
think
that
this
week
is
going
to
turn
into
probably
saturday
or
sunday
for
me,
unfortunately,
because
I
had
to
take
off
in
the
middle
of
the
week,
but
that
that
should
be
out
sometime
soon.
It's
already
had
some
review
and
as
of
that,
you'll
be
able
to
have
more
than
one
exporter
configured
for
metrics
and
it
all
of
the
exporters
will
now
abide
by
the
new
sdk
specification.
M
So
there's
two
concepts:
one's
called
a
metric
reader
one's
called
a
metric
exporter.
I
don't
know,
I
don't
know
how
much
detail
you
want
there,
but
basically,
once
those
two
are
done,
we're
into
polish
and
cleanup
and
ergonomics.
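A rough way to picture the split between those two concepts; this is not the SDK's actual API, just a hedged sketch of the two roles, with all type names invented for illustration. A reader decides when metrics are collected, an exporter decides where a collected batch goes:

```java
import java.util.List;
import java.util.Map;

public class ReaderExporterDemo {
    record Metric(String name, double value) {}

    // the "where": receives a collected batch and ships it somewhere
    interface MetricExporter {
        void export(List<Metric> batch);
    }

    // the "when": a trivial reader collapsed to a single manual flush
    // (a real reader might instead run periodically, or serve a pull
    // endpoint like Prometheus)
    static class MetricReader {
        private final MetricExporter exporter;
        MetricReader(MetricExporter exporter) { this.exporter = exporter; }

        void collectAndExport(Map<String, Double> current) {
            exporter.export(current.entrySet().stream()
                .map(e -> new Metric(e.getKey(), e.getValue()))
                .toList());
        }
    }

    public static void main(String[] args) {
        MetricExporter stdout = batch ->
            batch.forEach(m -> System.out.println(m.name() + "=" + m.value()));
        new MetricReader(stdout).collectAndExport(Map.of("requests", 42.0));
    }
}
```

Multiple readers, each paired with its own exporter, is one way the "more than one exporter configured for metrics" goal above can be pictured.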
M
So there's a lot to do in polish, cleanup, and ergonomics. I'm just really, really focused on getting that user-facing bit done, so that hopefully the next release of the SDK and the auto-instrumentation agent can have a, you know, preview of what we think the metrics API will be for users, or the metrics SDK API. How do you call that now? Do we have a term for the publicly exposed SDK versus the API?
M
Well, okay, but there's a difference between, like, the implementation of the SDK and the API of the SDK, you know what I mean? Oh, I see, so the API of the SDK will be what we want to show users. The internals still need some cleanup; there are still error messages to improve; there are still some performance wins I think we can get. But I don't know what we're calling the SDK's API; that's been the focus right now, just getting that part of it done. We already have the API out for preview.
M
Cool, anyway, so that's the update. If you want links to the two PRs, I can throw them on there, but there aren't a lot of PRs open right now on the SDK. Happy to hear feedback, and as you're looking at the agent overhead and we start adding metric instrumentation, I think we'll probably be having lots more discussions, so let me know.
I
All
right,
all
right,
we
have
one
minute
left
instrumenter
weekend
things
in
the
last
week,
instrumenter
api.
Thank
you,
mateosh,
chipping
away
up
to
71.
I
Now done. I remember when that seemed unfeasible, but that's awesome. And yeah, there's still lots of good stuff for any new contributors who want to do some work; they're good little isolated projects where you learn a lot about the internals.
I
This one actually went in the week before, but I brought it over; it's a good one, and especially now, well, we kind of talked about why it's also potentially helpful for this slim artifact, for the one percent, or the 0.1 percent, of users who do care about the jar file size. Some more Netty: we continue to chip away at the Netty instrumentation problems. This is another amazing bytecode, or, I don't know.
I
Well, it does generate this via bytecode, but it's amazing hackery for supporting jlinked binaries that don't have jdk.unsupported, so Unsafe is not available.
I
The Kafka instrumentation has changed in the 1.6 release, in that it models the spans the way the spec defines now. But also, and we've heard this from some other groups like Node, sometimes modeling it correctly, on back-ends that don't support links well, looks like broken traces. So we did add an experimental option to suppress that and go back to the simpler producer-consumer model, with the consumer parented to the producer span.
I
This was also done to support jlink without jdk.unsupported. I added this in here, I think, just to show that we're still updating, still making some changes to the Instrumenter API.
I
Which is why it's good that we're going through all of these exercises before we declare the Instrumenter API stable. Another addition to the Instrumenter, oh yes, thank you again, Mateusz, for the overtime with moving things to Testcontainers, which makes our tests more reliable and run correctly on Windows.
I
We
do
have
a
fake
collector
that
ingests
otlp
and
verifies
things
in
the
smoke
tests
so
and
we're
kind
of
saying:
okay,
actual
compatibility
with
the
real
collector
versus
just
a
fake
back
end
based
on
the
proto,
is
the
responsibility
of
the
sdk
project,
and
it
just
was
adding
a
lot
of
problems
so
may
as
well
just
deal
with
those
problems
in
one
of
the
repos
all
right,
and
I
went
way
over
time.
So
thanks
everyone
great
to
see
you
all
thanks.
As
always
thank.