From YouTube: 2021-11-18 meeting
A: Hey man, where are you located?
C: Yeah, well, as a vendor we provide a managed tracing solution. We also have some special features: we try to extract data from pre-production and use it to verify the process.
D: So yeah, let's give it maybe one more minute and start at 8:04; that's about half a minute.
D: Let me share my screen so we can see the agenda. The first point I put on the agenda: next week's Thursday is Thanksgiving, so most of the people working in America will have a day off. I was quickly checking in with Ted before and asked how this is handled for the OpenTelemetry meetings on those holidays, and he said most instances are just cancelled on American holidays, but I wanted to check here.
D: Sounds good. Then we will skip next week, and I will just put a note here in the meeting notes, so that the people who are not here in the meeting also know that we will not be there.

D: Okay, the next point I put on there: I started thinking about our roadmap, basically, that we worked out. We had this first OTEP where we worked out the scenarios and the scope, and then, as the next point in the roadmap, there's basically an OTEP with a more or less complete list of changes that we want to make to the spec, or basically an OTEP that drafts a new version of the spec. We need that before we actually change the spec, and I started thinking about what should be in the OTEP.

D: We already had quite some discussions that fit in there very well.
D: I think there are some discussions from the CloudEvents discussions that we can take over there, and I started summarizing here what should be in the OTEP. I think we have three big areas. The first one is basically what we talked about up to now: context propagation. The second area, which is still unclear to me and which we didn't talk about at all, is batching. And the third area, which actually makes up the bulk of the current semantic conventions, is span names and attributes.
D: So I think those are the three areas that we need to cover in this OTEP and in the semantic conventions. I intend, maybe in December, to start writing a first draft of the OTEP, and here I just started to collect some notes from our previous discussions. I thought maybe today we could go through those notes I made for context propagation, based on previous discussions that we had and also some I had with people individually, so that we're on the same page there, and maybe we can even shortly start talking about batching. That's all, I think.
D: Raul, you put the CloudEvents PR on the agenda again. Maybe we can time-box that to the last 10 minutes of the meeting, if that sounds okay to everybody? Yep.
D: Okay, so for context propagation: I think we did not explicitly point it out, but I was talking to some people, and these different layers of context came up again and again. I just wanted to double-check here if that makes sense for everybody.
D: As I understand it, the application layer basically comprises the create and process stages. Interestingly, in all the examples, also in the existing semantic conventions, we always have create and process spans for messages, so that layer is always covered. Then for the transport layer: if I try to map it onto the stages that we have, it includes the publish stages and the receive stages, and it would also include any kind of intermediary instrumentation.
D: I think that would count in this transport layer. As to why it makes sense to keep those two layers apart: I was also thinking and talking with people about use cases for messaging instrumentation (oh sorry, I lost this here, I clicked somewhere wrong), and I think one use case is just a very high-level one, where people maybe just have their application map where they see how their services are connected together.
D: Maybe they have, let's say, 30 microservices working together, and they just want to see: okay, this message was processed here; where was this message created? And that is enough for them to know; they just want to make these links. I think that's a pretty popular scenario, and it's just this view from a high level.
D: So those people are not interested in transport details, or even in which protocols are used. They just want to see: okay, that's how my services are connected together, this was sent here and was processed here, and that's all I want to know. Basically, that's what is covered by this application context.
D
So,
for
example,
when
your
message
does
not
arrive
or
it
should-
and
you
want
to
look
into
that-
and
I
think
then
you
need-
or
does
this
be,
this
lower
level
information
from
this
transport
layer
like
where
was
the
measures
published
and
what
happened
in
between?
Why
did?
Why?
Did
it
not
get
through
the
received
state
and.
D: I think having these two layers separate in our mental model can help us when further refining the scope of the conventions, and also when refining what we actually need to cover with this first version and what we maybe don't want to cover. Because this application context is, from my point of view, a must.
D: I am not sure yet how far this will be part of the messaging semantic conventions, or how far it might actually be covered by different semantic conventions, maybe not wholly, but partly. For example, if the transport uses HTTP, that's already a different set of semantic conventions; and when the transport goes over AMQP or MQTT, I was actually pondering whether those would also be covered by a different, protocol-specific set of semantic conventions, and not by the messaging semantic conventions that we're working on here.
D: I think we didn't settle on any decision, Dario, on how they are linked. In those points that I put down, I just put it very generally: create and process spans for a single message need to be correlated. So there must be some correlation there, whether that is a link or a parent-child relationship.
D: I didn't put it in directly. Based on our CloudEvents discussions, it seems we are tending more towards using a link for that, but we didn't yet come to any definite agreement or consensus, so I left it deliberately open. The only thing I think is clear is that they need to be somehow correlated, because that is what unifies all the examples that we currently have with create and process: they're somehow correlated.
B: The part I have a question about is how much of the transport layer should or shouldn't be in the messaging semantic conventions. I think it would be nice to see the concepts covered in the messaging semantic conventions, while the details go off somewhere else: in terms of how the context goes into a message, whether it's HTTP header fields or, I don't know, AMQP message annotations or whatever. But it would be good to see the concepts defined in the messaging semantic conventions.
D: Definitely, yes. I think what we need somehow to define is this: when we talk about these two layers of context, we also have, in OpenTelemetry terminology, two context objects, two kinds of context information that we need to manage.
D: For the application context, for being able to link create and process, we somehow need to communicate this context from the create span to the process span. That needs to be conveyed somehow, and that is definitely something we need to define, or at least give options for: how can you convey this context? And that might involve protocol-specific mechanisms, for example.
D
Cloud
events
is
one
way
to
specify
that,
or
amqp
may
have
has
other
ways
of
kind
of
attaching
context
to
it.
So
I
think
we
need
to
clarify
how
we,
how
we
relate
to
that
or
how
we
use
those
mechanisms,
but
I
think
what
we
should
not
do
is
kind
of
give
a
detailed,
detailed
guidance
or
detailed
kind
of
conventions
of
instrument
instrumenting
and
creating
nqb
spans
or
mqtt
spans.
D
I
think
that's,
that
is
on
the
transport
level
layer,
and
I
actually
think
that
should
be
covered
by
a
different
set
of
semantic
conventions
that
then
can
be
used
together
with
messaging.
That
kind
of
you
can
have
both,
but
I
think,
should
be
a
defined
separately.
Otherwise,
I'm
just
afraid
that
that
the
scope
of
those
of
those
messaging
70
conventions
will
get
really
big
when
we
put
all
those
possible
transport
layer
protocols
in
there
too,.
B
Yeah
I
mean,
I
think
that
the
like,
I
said,
I
think
I
think
the
concepts
are
kind
of
common
between
transports,
but
the
how
is
different-
and
you
know
I
would
sort
of
see
the
implementation
of
a
propagator
interface-
would
define
the.
How
of
you
know
how
a
context
is
carried
in
in
these
different
transports.
B
But
in
terms
of
what
should
be
traced,
you
know-
and
maybe
it
may
be-
there's
differing
views
on
this,
but
I
would
think
would
be
somewhat
common
between
different
transports
and-
and
I
would
I
think,
would
be
nice
to
see
that
described
in
in
the
messaging
semantic
conventions.
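A rough sketch of the pattern B describes, using plain dictionaries as message-header carriers and the W3C traceparent format. The class and names here are hypothetical stand-ins, not the actual OpenTelemetry propagator API:

```python
# Hedged sketch: a text-map propagator over message headers.
# "TraceparentPropagator" is an illustrative name, not an OpenTelemetry class.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpanContext:
    trace_id: str  # 32 hex chars in W3C trace context
    span_id: str   # 16 hex chars

class TraceparentPropagator:
    """Carries the context in a W3C-style 'traceparent' carrier entry."""
    KEY = "traceparent"

    def inject(self, ctx: SpanContext, carrier: dict) -> None:
        carrier[self.KEY] = f"00-{ctx.trace_id}-{ctx.span_id}-01"

    def extract(self, carrier: dict) -> Optional[SpanContext]:
        value = carrier.get(self.KEY)
        if value is None:
            return None
        _version, trace_id, span_id, _flags = value.split("-")
        return SpanContext(trace_id, span_id)

# The message headers act as the carrier: the producer injects, the
# consumer extracts, regardless of which transport moved the bytes.
producer_ctx = SpanContext("0af7651916cd43dd8448eb211c80319c", "b7ad6b7169203331")
headers: dict = {}
TraceparentPropagator().inject(producer_ctx, headers)
consumer_ctx = TraceparentPropagator().extract(headers)
```

A transport-specific propagator (say, one writing AMQP message annotations instead of header fields) would implement the same inject/extract pair over a different carrier, which is the "concepts common, details elsewhere" split discussed above.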
D
That
is,
that
is
a
fair
point,
and
I
I
think
when
we
I,
I
also
don't
wanna
yet
kind
of
nail
down
and
say:
okay,
we
don't
do
this
transport
context
larry
at
all,
because
I
think
when
we
talk
about
batching,
I
I
think
when
we
talk
about
batching
and
we
talk
about
what
do
we
wanna?
What
kinds
of
patching
scenarios
do
we
wanna
support
or
what
kind
of
observability
scenarios
around
batching
do
we
want
to
support?
D
I
think
that
will
also
give
us-
or
this
will
heavily
influence
our
decision
about
what
parts
of
the
context
layer
we
will
actually
cover,
because
I
think
batching
in
those
two
when
look
at
those
two
layers
happens
mostly
on
the
transport
context
layer
except
this,
this
batch
processing,
which
is
a
different
part.
It
actually
happens
on
this
application
context
layer,
but
I
think
all
the
most
of
the
other
batching
scenarios
that
are
also
covered
in
the
examples
have
happened
on
this
transport
context.
D: Yes, I think batching is still a very, very unclear concept for us, in terms of what all falls in there, and it comes up again and again in very different usage contexts. I think that should be the next thing we discuss in detail.
F: Yeah, because I also saw some other issue somewhere talking about batching in databases; I think there was something this week that I saw, and it was the same. But for us in this case it would be a problem, for example, like Luke said in one meeting about the attributes: because we have a batch being transferred, which attributes do we put into this span in the transport layer, right?
D: Yes, I think for us the first thing to answer is: what batching scenarios do we want to cover at all, or what kind of observability insight do we want to give the user into those different kinds of batching that might happen? And I think we might then discuss including certain scenarios and not including others.
D
And
it's
still,
I
I
don't
have
yet
many
ideas
about
that,
so
I
actually
look
forward
to
discuss
that
and
the
only
thing
here
with
this
short
paragraph
here
I
I
just
hope
that
we
can
agree
on
what's
in
there
that
that's
okay
for
you,
I
I
think
you
agree
that
it's
it
might
be
not
enough
yet,
but
it's
covered
there,
but
I
hope
we
at
least
can.
D: So basically, based on this distinction above, I have four main points here. The first one is that we say: okay, this context propagation works for the application layer, for these create and process spans, and that's something we need to define in the messaging semantic conventions. So that is strictly part of this document that we're working on. As I said, that's also something I saw in all the examples we have there: we always have create and process, and I think that is just the minimum.
D
Otherwise
we
cannot
yeah
trace
anything,
so
we
need
spans
and
for
each
each
message
there
is
a
a
create
span
and
there
can
be
zero
or
more
process
bands
just
depending
on,
depending
on
the
messaging
scenarios
that
we
have.
Can
I
comment
on
this
one.
C: I think if there's a processing span that's the parent of all the operations being done downstream for the processing of one message, that's great, and we should try to have it where possible. But I think that in lots of real-world situations it's impossible to create this processing span.
C: So, at least in my experience, it's not always possible to have the processing span, and I believe we should be ready, or at least describe in the spec, what should be done when the processing span is not available. Because otherwise people will just write instrumentations, they will not be able to create it, and then they will just do some workarounds which are not consistent and not formalized in any way, and that makes backend processing much harder. I'm not sure if this fits into the agenda here.
C
I
also
have
other
small
comments
about
the
things
that
I
believe
should
go
into
the
the
specification,
for
example
today
the
link
that
we're
putting
on
the
processing
span.
It's
just
like
a
link
without
any
annotation.
That's
like
it's
possible
that
there
are
other
links
which
are
not
carrying
the
sender
context.
The
sender
application
context.
C
So
if,
if
you're
writing
some
back-end
processing
and
you
receive
this
processing
span-
and
you
have
five
links-
there's
no
consistent
way
of
telling
which
one
is
the
one
that
should
be
used,
and
in
addition
to
that,
I
think
we
can
also
enrich
it
with
some
other
data.
For
example,
if
the
sender
is
sampled
or
not,
because
it
can
make
life
very
easy
for
like,
for
example,
if
you
know
that
the
sender
is
not
sampled,
then
you
know
that
you
can.
C
I
don't
know
visualize
it
in
some
way
or
not,
spend
time
executing
a
query
which
will
for
sure
not
return
anything,
and
we
have
this
data
with
it's
just
not
in
the
specs,
so
people
don't
place
it
in
their
instrumentations
yeah.
So
and-
and
I
have
a
few
other
issues
there-
I'm
not
sure
if
it's
the
time
to
to
bring
them
up.
What
do
you
think.
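C's suggestion (annotate the link so a backend can tell which link carries the sender's application context, and whether the sender side was sampled) might be sketched like this. The attribute names `messaging.sender_context` and `sender.sampled` are purely illustrative, not part of any spec:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Link:
    trace_id: str
    span_id: str
    attributes: dict = field(default_factory=dict)

# A process span can carry several links; without annotations a backend
# cannot tell which one points at the sender.
links = [
    Link("11" * 16, "aa" * 8),  # e.g. some unrelated link
    Link("22" * 16, "bb" * 8,
         attributes={"messaging.sender_context": True,   # illustrative name
                     "sender.sampled": False}),          # illustrative name
]

def sender_link(links: list) -> Optional[Link]:
    """Pick the link annotated as carrying the sender context, if any."""
    for link in links:
        if link.attributes.get("messaging.sender_context"):
            return link
    return None

chosen = sender_link(links)
# If the sender side was not sampled, the backend can skip querying for
# its spans instead of running a lookup guaranteed to return nothing.
skip_sender_lookup = (chosen is not None
                      and chosen.attributes.get("sender.sampled") is False)
```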
D: If that works for you: I think what I tried to put together here is just the minimum, so if you could just add here what's missing in your opinion, then we can discuss the comments that you have on the side, maybe partly offline, and what's open we discuss in the next meeting.
D
Because
for
this
one
here
that
is
a
very
good
one,
and
I
think
here
basically
I
said
each
sweatshirts
can
have
zero
or
more
process
bands
and
basically,
when
a
message
does
not
have
like
a
process
span
in
in
any
winter
trace
into
traces,
that
either
means
that
the
message
was
not
processed
at
all.
D
It's
one
option
or
is
it's
what
you
mention
here,
that
the
message
was
processed
but
for
some
reason
you
cannot
create
the
process.
Then
I'm
not
yet
completely
sure
about
reasons
why
that
is
the
case,
that
you
cannot
create
a
process
ban.
But
if
there
are
those
those
scenarios,
we
should
somehow
give
guidance
there
in
the
semantic
conventions
of
how
to
handle
those,
because
I
think
that's
that's
an
important
distinction
to
make
or
for
the
user
to
make
to
see.
Okay
was
this
message
processed
or
was
it
not
processed?
D
C: I can give an example. In SQS, if you're familiar with it: with the AWS SDK you do a receive-message call and you get back an array of messages, and that's what is returned to the user from the library. Then there's no way to tell how the user will iterate over it. It's just a plain array of messages, and the user can do whatever he wants with them: he can for-loop over it, he can map over it, do a forEach.
C
He
can
just
take
a
few
messages
and
handle
it
in
any
way
that
he
wants.
There's
no
like
a
callback
that
says
it
poses
a
message
and
then
you
can
wrap
it
in
the
context
and.
F: But isn't it so? I think Ludmila actually brought a very similar example with the Azure SDK at some point: there was also receiving a list of things, and no idea what the user would do. And I think it's pretty much also the same thing that happened last week in the CloudEvents discussion, because the CloudEvents SDKs also don't have a consistent way of dealing with things.
F
So
I
I
wonder
what
what
we
should
do,
because
yeah
I
mean
if,
if,
if
you
receive
something,
maybe
the
sdk
could
be
changed
in
a
way
that
you
can
have
some
way
of
hooking
into
it
or
exposing
something
into
it.
And
if
we
shouldn't
drive
this
to
be
to
be
done
as
well.
C: My thought on it was that sometimes it's just impossible to automatically detect or scope the processing of a single message, but the vendors or SDKs can supply decorations or enrichment mechanisms, so we can mark that this span is part of processing this message, even when it was not possible to supply a processing span to group everything together. And sometimes you can have multiple processing spans: if you have, say, an event emitter where you can register "on message, do one, two, three" and write many callbacks, then it's much easier to have multiple processing spans, one for each processing step within the application. So in the real world the situations are much more complex than the easy example we always follow, and I believe we should discuss it.
D: Definitely. I think the main problem you're hinting at here, Amir, and, as was said, we saw this with CloudEvents and we will see it with messaging in general, is this: for the transport context layer, we can safely say that the SDKs, let's say the AWS SDK, or the Azure SDK, or other SDKs for messaging systems, can create those spans for the transport context.
D: That is usually one hundred percent in the control of the SDK, so they can do that. For the application context layer it's more complicated: the SDK may not be in the right position to create all those spans needed for the application context. As you hinted at with processing, you pass those messages out of the SDK and then it's up to the user what to do. So for our model, the user would then actually need to create this process span for the single message and then link it to the create span, and I think that's one challenge that we'll have here, which we already partly have for CloudEvents: to give proper guidance for those scenarios.
D: As we saw, in some instances the SDK might be able to create this span; in other instances or usage scenarios the SDK might not be able to create these spans. And then what should happen in those instances? Do we say: okay, the user needs some kind of callback to inject this context information or create a span? Or do we have different ways of dealing with that?
C: That covers these issues. I have some other issues similar to it. For example, if the processing span is missing, then there's nowhere in the trace that you can link to the producer span; you just break the connection. So I was thinking about maybe having an array of all the links on the receive span, so that if the processing spans are not generated for some reason, you can still link them together.
D: Actually, to summarize what you propose here: when we don't have this link between create and process, when we are not able to have it on the application context layer, then maybe we can create this link on the transport layer. We have this receive span, which is somehow linked to the publish span, which links to the create span, and basically we link it together on this lower-level layer.
C
Yeah
we
should
have
somewhere
that
will
guarantee
that
that
will
always
be
available
and
and
consistent
in
my
opinion,
because
it
happens
like
that,
you
don't
have
the
processing
span
and
then
there's
nothing.
You
can
do
to
connect
those.
F: Actually, that could be a good idea, because we could also always add the links on the receive. So if the library is able to do that, and we know that it is able to capture the receive and send, you can just look at the messages and always add the links.
C
And
in
additional
to
that,
it
will
also
make
it
possible
to
tell
how
many
messages
are
there
for
a
batch,
because
currently,
you
have
to
like,
have
the
entire
trace
like
aggregate
the
entire
trace
and
then
look
at
the
at
the
children
of
the
receive
and
like
enumerate
them
and
find
if
they
have
a
messaging
processing
attribute
and
then
count
them
to
tell
that.
This
batch
had
five
messages,
and
so
it's
very
complex
and
error-prone.
F: There could also be an attribute on the span: if you know the number of messages in the receive, we could propose an attribute on the receive span that says how many messages are in this batch for this receive.
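The two ideas raised here (a link per message on the receive span, and the batch size as a span attribute) could combine roughly as follows. A stdlib sketch: `messaging.batch.message_count` is an illustrative attribute name, not something the group had agreed on at this point:

```python
from dataclasses import dataclass, field

@dataclass
class Link:
    trace_id: str
    span_id: str

@dataclass
class Span:
    name: str
    links: list = field(default_factory=list)
    attributes: dict = field(default_factory=dict)

def receive_batch(messages: list) -> Span:
    """Receive span for a batch: one link back to each message's creation
    context, plus the batch size as an attribute. The trace then stays
    navigable even if no process spans are ever created, and an empty
    receive is visible without walking child spans."""
    span = Span(name="receive")
    for msg in messages:
        _v, trace_id, span_id, _f = msg["headers"]["traceparent"].split("-")
        span.links.append(Link(trace_id, span_id))
    span.attributes["messaging.batch.message_count"] = len(messages)
    return span

batch = [{"headers": {"traceparent": f"00-{i:032x}-{i:016x}-01"}} for i in (1, 2, 3)]
rx = receive_batch(batch)
empty_rx = receive_batch([])  # a zero-message receive is still distinguishable
```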
C: It's very interesting for us to know if it's an empty receive, if there are zero messages on the receive, and currently there's no easy or consistent way to tell. On my instrumentations I actually add a number-of-messages attribute, but on other languages I don't have it, so I have to improvise. It would make me very happy to see it in the spec, and I guess many other people trying to use these traces too.
D
That
makes
sense
that
will
basically
be
a
feature
on
this
transport
context
layer.
So
when
we
talk
about
how
many
messages
are
they
received
in
a
batch,
I
think
the
force
on
this
level-
and
I
think
that's
also
a
good
first
first
use
case
then
to
add
to
this
batching
point
here
that
we
will
go
to
next,
but
did
this
this
point,
your
army?
That's
a
good
one.
D
I
will
leave
this
kind
of
as
an
open
question
here,
so
indicating
that
we
basically
need
to
answer
this
or
find
some
kind
of
satisfying
solution
for
those
scenarios
when
working
out
the
spec.
F: Maybe also, if you have examples that you already faced, you could put them there: the real-world things that you encountered. That would be great as well, because I have used messaging systems a little bit, but I'm not so well-versed in them. So it would be good to also see the examples that are problematic, not just the easy ones that we always have.
C: Yeah, sure, I can add code snippets and screenshots of how real instrumentations are creating data.
D: That would be great, thanks Amir. Now let me just quickly try to cover the last two points here. Basically, here we said that we have a span for create, and we have a span for process, or we may not have a span for process, as we saw. But if we have those spans for create and process, then we want them to be correlated in one way or the other, as I said.
D
I
left
it
open
here
whether
this
is
a
correlation
by
link
corporate
child
relationship
or
kind
of
a
grandparent
when
trial
relationship.
We
had
all
those
all
those
options
in
the
existing
examples,
but
to
further
kind
of
restrict
that
point.
I
just
put
this
fourth
point
here
that
I
said
that
those
span
relationships,
for
example
between
crate
and
process
bands.
D
D
That
is
consistent
across
all
the
scenarios
that
you
want
to
support,
so
I
I
think
that
is
possible
to
find
this
to
find
one
way
that
covers
all
the
scenarios,
but
I
think
that
when
we
work
that
out
in
further
details,
we
will
see,
if
that's
actually
the
case,
because
I
think
it's
important
that
when
you
have
like
this,
let's
say
this
process
state
for
a
span
that
you
have
an
easy
way
to
just
an
easy
and
consistent
way
to
just
navigate
from
this
process
span
to
the
create
span
of
the
message
and
that
you
don't
need
to
kind
of
take
into
consideration
two
three
or
four
ways
of
how
this
connection
might
be
made
and
then
to
decide
which
way
to
go.
D: You would also already need to be aware of the particular messaging scenario (is it batching, is it not batching, is it process batching?) and then, depending on the scenario, navigate in different ways. But I think for this very basic relationship between create and process there should be a consistent way of navigating between those two spans. That is kind of the very minimum I came up with, also based on our previous discussions and on the existing examples and use cases that we saw. I just tried to come up with something where we can say: okay, we have a first basic agreement on this minimum, and then we can work on top of that and, like Amir already started, see what is missing and what, in addition, we want to have, and also just have a first sanity check and see if that makes sense for everybody here.
E: The contexts for application and transport weren't necessarily separate; it was the same context passed between them, and then the separate context would be something like the message context for CloudEvents, which would be like another layer on top. That's kind of where my head's at right now, which is: I'm struggling to grok what the difference between the two contexts is. Is it the same context, just in different stages, or is it two entirely separate contexts that are unrelated?
D: I definitely think the context term here is overloaded. But thinking in just OpenTelemetry terms, when we distinguish those two contexts here: just imagine you have this process span, and the process span might have to deal with two different kinds of context.
D
There
might
be
one
context
that
it
gets
from,
let's
say
incoming
http
request
that
triggers
this
message
processing
or
for
from
like
an
amqb
kind
of
package
that
triggers
that
triggers
the
message,
processing
or
it
might
come
from-
let's
say
a
a
receive
operation,
that
kind
of
receives
a
badge
and
then
splits
up
the
spans
and
then
handles
disbands
to
process
operations.
D
So
this
might
be
one
kind
of
context,
and
the
other
kind
of
context
is,
as
you
mentioned
here,
this
context
of
the
create
of
the
message
creation
that
is
on
the
other
end
of
the
line.
So
basically,
you
can
imagine
that
when
you
have
this
process
operation,
you
might
have
to
deal
with
this
message
that
has
these
two
different
contexts
and
that
you
somehow
need
a
need
to
handle.
E: Yeah, I think I'm still struggling, because I've still got in my head that there's the OTel context that has come from somewhere else, that might have started as an application span, and then maybe an HTTP or messaging span was added to it, and that's the context that comes across; and then maybe there's a message context, in the context of CloudEvents.
E
Sorry,
I'm
overusing
context,
that's
kind
of
where
my
head's
at
so
I
see
that
in
that
case
there
would
be
two
but
for
example,
if
it's
not
a
cloud
event,
it's
just
a
regular
message
that
there
would
only
be
one
context
that
might
have
passed
through
application
and
transport
pieces
and
spans.
But
it's
just
one.
D
Yeah,
yes,
but
then
you
already
have
again
have
two
contexts
because
you're
one,
if
a
context
passes
unordered
through
this
through
this
transport
chain,
usually
the
context
when
it
goes
basically
through,
like
a
train,
it
kind
of
changes
from
step
to
step.
So,
for
example,
you
have
you
have
span
a
spin
b
and
the
spin
a's
when
b,
span
c
and
then
for
you
link
a
to
b
for
b.
Basically
a
is
the
parent
context
and
for
cbe
is
the
parent
context.
D: Yes, that is basically a different view from what I laid out here in the beginning, and I think it makes sense to think about that and see how we can converge. Because basically, what you say is that the messaging conventions we are working on here should just cover the transport context, and the application context should be covered by different conventions.
F: Because the problem is really this: let's say I produce a message and I send it to a queue or something, and let's say, for example, that I use a library that has auto-instrumentation. So the library will be responsible for creating the transport span and so on, for the actual, physical thing that was sent.
F
Current
aspect
is
they
say
that
the
physical,
so
I
will
have
the
physical
span,
send,
but
then,
when
I
receive
this,
it
might
be
like
a
day
later
or
so.
And
then
how
is
the
like?
How
do
I
link,
for
example,
that
I
received
this
and
it's
related
to
the
thing
that
I
was
sent
yesterday,
or
I
think
there
was
also
an
example
like
that:
a
long,
long
trace
that
happens,
that
keeps
keeps
happening
and
or
waits
for
input
from
somebody.
F: And then I will have an active span because I receive something: the library already creates the receive span, or some transport-layer span, and that is in no way linked to the message that is being processed, unless the instrumentation library unpacks the message and does something about it. But then you're in the situation where I say: okay, I already have an active span.
F: Should I take the span that is in the message and make that the parent, or should I use the one that I have active? And what we discussed last week was what happens if I disregard the active span that I have, which is the transport layer, and then, let's say, it fails: then I lose my transport-layer span, because there was an exception or something. Yeah, but I see what you mean.
D: Yeah, and I think that's also where some of the confusion stems from: we often mix those two layers of context up, and then it obfuscates things a lot. I think it might help us to just try to separate those stages and maybe think of them as different contexts that we have to deal with here. And CloudEvents is a nice example, because CloudEvents actually nicely separates between those two contexts.
D
So
in
the
cloud
event
you
have
this
when
you
look
at
the
different
tracing
extension
there,
they
have
this
application
context
that
they
put
on
the
cloud
event
message
that
goes
through
unchanged
through
the
whole
flow,
and
then
there
is
this
other
context
that
follows:
like
the
standard
context,
propagation
mechanisms
trace
state
and
trace
parent
and
that
context
changes
through
like
the
hops.
The
message
takes
so
cloud
events
or
actually
has
those
those
two
contexts
nicely
separated.
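The separation D describes can be mimicked with plain dicts: the tracing extension on the event is stamped once at creation and never rewritten, while the transport-level traceparent is re-injected at every hop. A rough illustration of the idea, not the CloudEvents SDK API:

```python
def make_event(creation_traceparent: str) -> dict:
    # Application context: stamped once when the event is created and
    # carried unchanged through the whole flow.
    return {"data": b"payload",
            "extensions": {"traceparent": creation_traceparent}}

def forward_hop(transport_headers: dict, hop_traceparent: str) -> None:
    # Transport context: each hop injects its *own* current context into
    # the transport headers, so this value changes from hop to hop.
    transport_headers["traceparent"] = hop_traceparent

event = make_event("00-" + "aa" * 16 + "-" + "01" * 8 + "-01")

headers: dict = {}
forward_hop(headers, "00-" + "bb" * 16 + "-" + "02" * 8 + "-01")
after_first_hop = headers["traceparent"]
forward_hop(headers, "00-" + "cc" * 16 + "-" + "03" * 8 + "-01")
after_second_hop = headers["traceparent"]
```

The transport header differs after each hop while the event's extension still points at the creation context, which is exactly the application/transport split being discussed.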
D
I
think
they
don't
call
it
vacation
context
and
transport
context,
but
that's
essentially
what
it
is,
and
I
think
that
is.
D
D
It's not necessarily instead of seeing the transport, but maybe in addition to the transport. Because when I have lots of, let's say, as I said, microservices that depend on each other and communicate with each other, and I want to understand how they connect together, maybe not always, but I think most of the time when I want to see those connections, I don't really care whether there is a message broker in between.
D
D
Yes, please. That's also why I wrote this down here, so please feel free to add to or comment on it, because it would be great if we could align or agree on some kind of basis and then continue working from there.
B
Yeah, as you can see, I agree completely with the two different layers. You know, your example of seeing how 30 microservices interconnect, where I don't care how the messages got there, I just want to know where they went at the application layer. But I do sort of align with Ken's thinking in that the transport layer is an important aspect for other reasons, as you mentioned: for, you know, debugging what happened, why did it take so long, or, you know...
B
Why did this message not get from here to there as I expected? So again, just from my point of view, reiterating that I would really like to see the transport layer addressed. And I don't see why it depends too much on whether it's AMQP or Kafka; I think there's a lot of commonality in all the systems and the steps that happen along the way, and it really just comes down to a protocol-layer detail that can be abstracted through a propagator or something like that.
B
But I would like to see the individual steps formalized in some way, because I think that will help with a common approach across different transports.
D
Yeah, that kind of makes sense. Then I will take this out here, where we say that the context-propagation rules for the transport context are covered by different semantic conventions. So we basically say, I think we can say we're sure, that those application-context propagation rules need to be covered by the messaging semantic conventions for linking the create and process spans, and we say we need to create...
D
D
So we might have zero or more, and we can also say that those create and process spans need to be correlated, and they need to be correlated in a consistent way across scenarios. And I think what is open here, then, is actually how we deal with this transport context, because everything here is about the application context.
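The "zero or more" relationship stated above can be sketched in plain Java (the record names are illustrative, not the semantic conventions themselves): one create span per message on the producer side, and each of the N consumers starts its own process span in its own trace, each linking back to the create span's context.

```java
import java.util.ArrayList;
import java.util.List;

public class CreateProcessCorrelation {
    record SpanContext(String traceId, String spanId) {}
    record ProcessSpan(String traceId, SpanContext link) {}

    static List<ProcessSpan> fanOut(SpanContext createCtx, int consumers) {
        List<ProcessSpan> spans = new ArrayList<>();
        for (int i = 0; i < consumers; i++) {
            // Each consumer trace is independent but linked to the producer.
            spans.add(new ProcessSpan("consumer-trace-" + i, createCtx));
        }
        return spans;
    }

    public static void main(String[] args) {
        SpanContext create = new SpanContext("producer-trace", "create-1");
        List<ProcessSpan> processed = fanOut(create, 3);
        System.out.println(processed.size());                 // 3
        System.out.println(processed.get(2).link().spanId()); // create-1
    }
}
```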
D
So then we need to think and discuss how we want to cover this transport context, and for this I would actually suggest that next time we start discussing batching, because that is, for me personally, the unclear area as to what we want to cover there. I think that will get us thinking about the transport-context layer a lot, and that might then help us figure out what we actually want to cover on that layer.
D
Yeah, awesome. And as I said, if anything that we have written down here does not make sense to you, please comment on the document and add your thoughts. Otherwise, if there are no open questions pertaining to this, then next time let's switch to talking about batching.
E
D
Awesome. Sorry that we didn't get to the CloudEvents today; let's try to do it next time.
F
F
G
F
J
So 190, I think we will hold for the Android fixes that are in progress. I think it's fine if that goes into next week; it's more important to get the Android fixes in.
L
J
H
I think it's worth trying out the virtual field in the netty listeners. It should probably be working now.
H
J
Can this PR land without that? I was just going to say: if you can chunk off some pieces of it, then we could follow up. I can follow up.
L
J
And on this one, this is a good point: we used to have to do wrappers in some cases where we were expecting lambdas to be passed in, so that we could then instrument and call back, and that shouldn't be required anymore.
J
Yeah, and ping me with an update by the end of your day tomorrow, because tomorrow afternoon I should have time to poke around on it also. Okay.
G
H
G
H
G
You can, I mean, once it's ready, if someone just pings me, I can grab the instrumentation-api snapshot and try it out. Cool, the OkHttp instrumentation snapshot, which will then have a transient dependency. I'm not sure whether it makes sense, because 1.9 isn't really a new major release; it's only like one month of work.
I
J
Yeah, so the reason why we drafted those patch-release guidelines was based on some feedback from a user: they didn't typically take the latest release. Say 1.9.0 was the latest; they would take the 1.8.3 or something, and that was sort of like...
J
I guess some people do this. I hadn't seen that before, but it kind of makes sense: the latest patch release shouldn't have any new regressions; it's basically a safe release. So that was sort of the logic for making those patch releases, specifically targeting regressions.
J
Certainly open to other approaches that other projects have done, but this has been ours primarily: anything that's a regression, we will patch-release, and then memory leaks and deadlocks even if they're not regressions, under the idea that you don't always notice those right away when deploying, and so they bite you in production more often.
J
M
Oh man, my microphone broke; it can't recognize the USB. All right, I was going to say: quick question on the patch release. There's also an issue with OTLP metric marshalling that might be part of the patch release, but how are you treating the alpha modules? I'm really happy to see a patch release for those; I just wanted to make sure that that counts as a real regression.
G
M
J
Cool, okay. Well, I'll check whether Anuraag has thoughts, but I think we'll kind of stay with this, with the idea that it's a sort of user-friendly approach for trying to reduce the number of issues, for having a safe release, kind of thing.
J
Oh yeah, so I had noticed there were those discussions in the SDK. This one was not clear to me; it was kind of reported in the context of the Java agent.
J
G
J
J
Okay, all right, we'll revisit that with Anuraag, and so these two sound like a go for patching.
J
Cool. Logging.
J
Oh yes! Yes, the logging API revisited; so, the question of whether we should have a logging API or not, part seven.
J
The question I wanted to ask in particular was: it seems like logs are looking like the preferred approach in OpenTelemetry for tracking custom events.
J
And so, if we tell users, say, if I tell our users, "hey, to track custom events you should emit logs," I'm not clear what I tell them, like whether that requires the SDK.
G
J
So the one thing that I find awkward about that is custom events, right? Typically you want to put properties, attributes, on those, and the logging APIs are very awkward for that. Like, do I have to tell them, okay, every time you want to emit a custom event with these five properties...
J
M
It depends what library you're talking about; log4j actually has more direct support. There's a map message, where you can put them in a hash map, and I think there's this thing where you have an object that you can extract properties from, but I don't remember if that made it; I'm stale on log4j2, by the way, so that could be something that didn't actually make it.
M
But if I recall correctly, MapMessage made it; the object to extract properties from, I think, is there. But you need to use a log4j-specific logger, as opposed to using SLF4J or java.util.logging, and so the weirdness around custom events is that the best way to do it goes against the conventions of the standard logging libraries, and that is weird.
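The pattern being described can be sketched in a minimal, dependency-free way (this mirrors, but is not, the real org.apache.logging.log4j.message.MapMessage API): the logger accepts a Message object rather than a string, and a MapMessage-style implementation carries structured key/value pairs that an appender can pull back out.

```java
import java.util.Map;

public class MapMessageSketch {
    interface Message {
        String getFormattedMessage();  // for console output
        Map<String, String> getData(); // for structured export
    }

    record MapMsg(Map<String, String> data) implements Message {
        public String getFormattedMessage() { return data.toString(); }
        public Map<String, String> getData() { return data; }
    }

    // Stand-in for an appender that exports attributes instead of a string.
    static int exportAttributes(Message msg) {
        return msg.getData().size();
    }

    public static void main(String[] args) {
        Message event = new MapMsg(Map.of("event.name", "checkout",
                                          "cart.size", "3"));
        System.out.println(exportAttributes(event)); // 2 attributes exported
        System.out.println(event.getData().get("event.name"));
    }
}
```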
J
G
So I guess I would say: let's put in a custom-events API when we have a specification for a custom-events API, because we don't have one, and we shouldn't be building an API just in OpenTelemetry; we should do it in the spec, because it's going to be a common thing across all the languages.
N
M
I think one of the reasons, sorry, one of the reasons I think they don't have an event spec here is that Go, for example, has structured logging and it's rather popular, so their default loggers kind of have structured logging, although I guess there are three different ways to do it, whatever. Java doesn't, and so, you know, that SIG might figure languages probably already have structured logging, and here we are. There is some structured logging in Java, but it never really took off.
M
G
You know, the eBPF group, is that the right acronym, that networking thing? They're talking about designing an OpenTelemetry custom-events API, and they're working with the client telemetry group, or we're planning on working together, to design this custom-events API.
G
I think it sounds like this is also something that would hopefully fulfill the non-network, non-client-telemetry use case as well. But if this is something that people feel is important, New Relic and other people who have event APIs, you should absolutely get involved in this discussion.
G
But, well, it sounded like Trask also had users who needed this, so...
K
This is Trask; for what it's worth, I need to produce some custom events from a little side project, and I use the log SDK directly, because an event API doesn't exist. I mean, all this stuff is alpha, so, you know, I don't mind using an SDK, knowing full well that might not be the pattern I stick with going forward, but at least it gets me by for now.
J
G
G
It's actually, I think, an important signal that we don't have support for at the moment. So whatever we call it, custom events or events or whatever the name is, it's different from a logging API, and I would be very supportive of having this be something that is defined in the spec, and then have the SDK implement it by emitting OTLP logs, for example.
M
I am very supportive of that as well, because my fear is that if we combine logging and events, you end up with a crappy API for both. The key with events is structure, and the key with logging is strings, so, you know, they're not necessarily the same.
M
I know they're very close, but from an API standpoint, if I want to throw an object in as my event and extract all the attributes out so that I can send it over the wire, that's a very different API from "here's a string and a map of string/string pairs" or something, which is what we see from logging libraries. And one of them is a little bit more efficient than the other, and a little bit more elegant.
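The contrast being drawn can be sketched as two tiny API shapes (both invented here for illustration, neither is a real OpenTelemetry or logging-library API): an event-style API that extracts attributes from a typed object, versus a logging-style API that takes a pre-formatted string plus a string map.

```java
import java.util.Map;

public class EventVsLogApi {
    record CheckoutEvent(String cartId, int items) {}

    // Event-style: the API pulls structured attributes from the object itself.
    static Map<String, String> emitEvent(CheckoutEvent e) {
        return Map.of("cart.id", e.cartId(),
                      "cart.items", Integer.toString(e.items()));
    }

    // Logging-style: the caller flattens everything to strings up front.
    static String emitLog(String message, Map<String, String> attrs) {
        return message + " " + attrs;
    }

    public static void main(String[] args) {
        Map<String, String> attrs = emitEvent(new CheckoutEvent("c-42", 3));
        System.out.println(attrs.get("cart.items"));
        System.out.println(emitLog("checkout complete", attrs).startsWith("checkout"));
    }
}
```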
K
Yeah, the one thing I would say is that mileage varies for each of these logging frameworks; some of them are a lot more structured than others and end up looking a lot more like that event API.
J
Yeah, I was looking for what you were talking about, Josh, the structured...
K
I've got it pulled up right here. A logger has the ability to log what's called a message; the interface is Message, and an implementation of that Message interface is what Joshua was referring to. It's called MapMessage, and a MapMessage you basically create from a map, and it's the structure that we're referring to.
M
There's also StructuredDataMessage, ThreadDumpMessage. The key to Message, though, is that it has a toString for if you need to log to the console, and then it has the ability to pull out properties.
J
K
I will say that with the log4j2 appender, there's a PR open for that right now; Anuraag and I were having a conversation about whether we try to do a complete mapping of the log4j2 data model, or just a minimalist mapping, pulling out the minimal set of fields that are sort of specced out right now, at least in experimental fashion. And so...
K
In going down that conversation, Anuraag thinks we should do a minimalist mapping, and so things like the mapped diagnostic context didn't make the cut. So if you did try to do this type of structured logging with log4j2, once this is released, I don't think you'll get your structure; I'm pretty sure you won't get your structure in there. So...
K
G
K
Yeah, so I just hope that the log SIG can make some decisions about whether it wants to have an event API and whether we want to have an API for RUM. Because, you know, if we project into the future and we have a RUM API and an event API and an appender API that all kind of go to logs under the covers, via the log SDK, that doesn't feel super good.
G
G
J
One option, Jack, that we always have, that we've done in other places where it's not specced out, is to hide things under an experimental flag.
J
So if you have a use case for this, if you're wanting this sooner, we can always do that.
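The experimental-flag pattern mentioned above amounts to gating a not-yet-specced feature behind a property the user must explicitly opt into. A minimal sketch; the property name here is made up, not a real OpenTelemetry flag:

```java
public class ExperimentalFlagSketch {
    // Hypothetical flag name, for illustration only.
    static final String FLAG = "otel.experimental.log.events.enabled";

    static boolean experimentalEventsEnabled() {
        // Off unless the user explicitly opts in.
        return Boolean.parseBoolean(System.getProperty(FLAG, "false"));
    }

    public static void main(String[] args) {
        System.out.println(experimentalEventsEnabled()); // false by default
        System.setProperty(FLAG, "true");                // user opts in
        System.out.println(experimentalEventsEnabled());
    }
}
```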
K
Well, I think the most useful one is the mapped diagnostic context, and I'd have to run an experiment with that Message interface for log4j2, but I'm assuming that the structure you give it ends up being accessible via the log event's getContextData, which is the mapped diagnostic context. So yeah, I think that would be high on my wish list, highest on my wish list.
G
Just to follow up on the RUM API: I think the RUM API will end up being the equivalent of the instrumentation API on the Java side; on regular JavaScript it'll be convenience APIs on top of existing APIs.
G
Plus spans; just like our instrumentation APIs, metrics and spans, and maybe eventually custom events. Similarly, it will just be, you know, a convenience wrapper on top of that, to make it easy to do the instrumentation you need for Android.
K
Well, so that vision actually kind of presumes that there is at least an appender API or a custom-event API. If not, then you're going to have to make a decision about which log API to use. Well, what...
G
We can design that API without knowing exactly what the underlying implementation is going to look like. Right now we do this in the Splunk RUM instrumentation just by emitting spans, zero-duration spans, but we can swap out that implementation with custom events when that is available.
K
K
K
So you have to have your own runtime dependency on the log SDK for it to work, and that's fine, but if you want to use that log4j2 appender in the agent, I think some stuff might change; that's a comment that Anuraag made on this. And I don't understand how the agent class paths are structured, I haven't thought about it too much; that's an outstanding question I have there, but basically he was...
K
He was saying that, how to phrase this, we need an appender API.
K
J
Yeah, so I can kind of show you in this diagram. We put the OpenTelemetry SDK over here in this agent class loader, which is completely isolated, and then in the user class loader, where log4j lives, we inject the instrumentation, in this case the appender. And normally we have the OpenTelemetry API: while the SDK is isolated over here, the API itself is shaded in the bootstrap class loader.
J
Right, but we wouldn't be able to reuse the same instrumentation, to inject the library instrumentation directly, though, right? Because the library, the appender, is talking to the logging SDK.
J
G
So what I'm suggesting is: because the agent is the thing that needs this, put it in the agent code, put it in the instrumentation code base. I guess that's what I mean, because, as Jack has proven, the end user doesn't really need this. They could use it, and it could maybe make the appender code a little bit simpler, but you can talk directly to the SDK if you need to, if you're an end user.
J
Right, so you're basically saying take exactly this, I mean, we could take this artifact and just move it into the instrumentation repo.
G
Yeah, exactly; that's what I'm throwing out as a possibility. It seems like it could be a solution to this that wouldn't be a piece of the API that people would assume is eventually moving forward into stability.
K
I kind of like it too. I think you'll have kind of funny class names, but I don't think that really matters, because it's not something folks will use, yeah.
K
Yeah, there's nothing in autoconfigure for configuring the log SDK, and there are no SPI hooks either for configuring the log SDK, so yeah.
J
K
M
M
G
Yeah, I will say that I think there's a very delicate line that we have to walk around logs specifically, because there are so many logging APIs out there. What's foremost in my mind, always, is making sure that we don't muddle and confuse our users. The less they have to worry about this stuff, the happier we're going to be having to support it, and if there's, like, a logging API that means they have to switch all their stuff, all of their logging...
K
M
And one question, then, too: the java.util.logging hackery that the agent does, how do we make that nicer? Not in the agent, or is that not something we're worried about? Because I was trying to redirect java.util.logging...
M
And, you know, it's like a system-wide properties file, or weird reflection hacks that you do, and the SLF4J bridge has this giant caveat that says: if you use java.util.logging to log to SLF4J, expect huge degradation in performance. And that's just what we did. Do we have a solution to this? Do we care? I'm just curious: do we assume everyone's going to be using the agent, so it doesn't matter?
M
M
J
Cool, just want to give an update: Nikita got us a Gradle Enterprise subscription. The one use case that I'm excited for is that we can look at flaky tests over time, because we auto-retry all of our tests, just because of flakiness and not wanting to have the build constantly failing, but this will allow us to actually track and work on those.
N
L
Build scans are public; you don't need any authorization to see them. So you currently can go to ge spring.ioge.junie.org.
L
J
L
Yeah, the only problem is it doesn't work so far. It turns out that our S3 build cache, for some reason, is not very friendly to Gradle Enterprise.
L
N
I mean, I looked at that, so this is totally derailing, I'm sorry, but I looked at the size of those; I put it in the channel, but yeah, the size of the shared indexes was pretty gigantic, like three gigs or something. But it would be really nice to have, and I think it might actually save time, like developer time. It's super horrible when it starts to rebuild the indexes; it takes like an hour for the instrumentation repo.
N
N
N
N
Like, JetBrains publishes a tool to generate those.
L
G
If this doesn't work, all you Eastern Europeans can drive down the road to the Czech Republic, knock on their door, and get them to make it easy.
O
To be honest, I think that's an issue with IntelliJ, because I noticed that too. I recently upgraded to the latest version, and I was using a really old version, I believe from 2019, because that's the one I had licensed, and it is completely different how you open the project in one versus the other. In the old version it takes like five minutes, or even less, to index the full project, all the Java-related OpenTelemetry stuff: instrumentation-java, sdk-java, and the other one, the contribs.
O
O
I'm not sure; is anyone here using an old IntelliJ version? Because I can even show that; I'm not sure if I even have my old one installed, if I removed it, but I'm pretty sure, because I started to import the project in the old version and it was fine, and then when I changed to the new one, it was just a pain.
L
O
I also work in another large codebase, which is Quarkus, and we've seen similar issues, with older versions of IntelliJ being nicer about indexing stuff than the newest versions.
L
G
N
Yeah, so the thing that I've noticed is I can have the project completely, fully indexed, up to date, and then do a refresh, like a resync of the remote fork, and if anything in the dependency list changes, which it does all the time, then it ends up re-indexing a lot of stuff that I think it conceptually shouldn't need to re-index. That's kind of what it feels like to me; yeah, that's one thing that happens.
O
One thing that can help is, I believe there is an option, which I usually disable when I install, where you say: do not automatically index new imports of stuff. For instance, when you change forks, sometimes we get new modules added to the project, and so basically IntelliJ goes over there and starts re-indexing everything right away. It is a pain, because if you disable that, it means you have to go to each module and do it manually, but usually you know what you're looking for.
O
O
O
I usually use it when I'm changing forks and I don't want to go through the entire pain; and then, when I go to sleep, I just go and do a full import, and that's fine.
O
K
Yeah, another complaint is that IntelliJ's indexes are shared across projects, so if one project corrupts the index, you have to rebuild the indexes for all your projects. And there's this outstanding ticket that I've come across multiple times; it's like years old, hundreds of comments, and they're just not interested in fixing it, or it's too complicated to fix. But if the indexes were per project, then the scope would be less severe if the indexes got screwed up. All right.
O
J
Hey, so we've got two minutes left here, and I wanted to bend Josh's ear just quickly: we had a user who was not getting exemplars in the Java agent, and I was just wondering whether that is something that's expected to work or not work, and if it's expected to...
M
M
M
So if there's a span in context, and it's sampled, or could be sampled, and it's a synchronous instrument, you'll get exemplars, but you'll only get them right with the bug fix we have on histograms. So you have to be using a histogram instrument, actually, specifically.
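The conditions just listed can be restated as a single predicate, sketched here in plain Java (this is a restatement of the discussion, not the SDK's actual exemplar-reservoir code): exemplars show up only when a sampled span is in context, the instrument is synchronous, and, until the histogram bug fix lands, the instrument is a histogram.

```java
public class ExemplarConditionSketch {
    static boolean recordsExemplar(boolean spanInContext, boolean sampled,
                                   boolean synchronousInstrument, boolean histogram) {
        // All four conditions from the discussion must hold.
        return spanInContext && sampled && synchronousInstrument && histogram;
    }

    public static void main(String[] args) {
        // Counter (non-histogram) with a sampled span in context: no exemplar yet.
        System.out.println(recordsExemplar(true, true, true, false));
        // Histogram with a sampled span in context: exemplar recorded.
        System.out.println(recordsExemplar(true, true, true, true));
    }
}
```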
M
You will get exemplars on the other instruments in OTLP, but I don't know of any back end that consumes them yet, so that's a different discussion.
J
So in the Java agent, we're capturing server request metrics, and those...
M
Let's fix that; that would be beautiful if we could fix that. So there are a couple of record calls on counters, but one of the record calls on the actual API, which I don't think the instrumentation uses, takes context in as an explicit parameter.
M
H
You would have to do that in the end, after, yeah, because...
J
M
Right, that's awesome. Do you need, is this something I should help fix, or is that enough information?
M
Yeah, so I just want to make a side comment: I really wish we just automatically got duration from span end. There's so much complication in making that just be a thing, but man, would that be amazing.
M
J
Yeah, and we really want people to use, I think, the Instrumenter API, because the whole thing of instrumenting spans and metrics, traces and metrics, separately is also painful. So I'm hoping that we really promote the Instrumenter API.