From YouTube: 2022-11-03 meeting
Description: cncf-opentelemetry@cncf.io's Personal Meeting Room
A: It's going too early, yeah.

E: Yeah, I'm really hopeful that you can get the Jaeger deprecation pushed through. That'd be great.

E: I'm, I'm trying.
E: Jaeger supports OTLP natively now, and they've deprecated their own exporters for their proprietary protocol. Or, it's not proprietary, but... the Jaeger protocol, but.

B: Yeah, the terminology there is, they call it "the agent," yeah, which adds confusion, but yeah, that does not... and it supports UDP, which we also don't export to, right. Yeah.
A: Read Jason's excellent blog post all about it. It hasn't been published yet.
B: Yeah, so the ask was: hey, it's cool that we do this deprecation, I think everyone's in support of it, but we're just trying to figure out how long. Like, do we do it for a month? Do we do it for a year? And one of the ways to try and solicit feedback is through this blog post. So that was the guidance, so I went for it. That's...

E: So interesting. I didn't realize that part about the agent, and it only supporting UDP, which we don't support today, right.
B: Well, people that are deploying the agent as a sidecar will have to maybe consider using the collector, if they're not.
E: Well, there's no agenda, so I'm gonna do a little plug to go and look at the open PRs related to the new JVM garbage collector metrics that I'm proposing, or I guess we're kind of collectively proposing at this point, because...

E: Either one. I just reposted the links that you posted (oh, great) last week, but yeah. So far, the approvals that we've gotten are from people that have mostly been part of the conversations. So if you haven't been part of the conversations, I'd appreciate some feedback, positive or negative.
E: So that's the other PR, yeah.
E: It would have to be two counters to be useful. And we kind of have two counters today: we have the garbage collection time and the garbage collection count. And a histogram contains those two values, in addition to other bits of information that are potentially useful. So a histogram is kind of like a superset of the count and the sum: the count of events, and then the sum of how much time was taken in all those events, which we currently report today.
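The superset relationship described here can be sketched with a toy accumulator; this is illustrative only, not the SDK's actual histogram aggregator:

```java
// Toy accumulator showing why a histogram is a superset of the two
// counters: it carries the event count and total time (the two counters
// reported today), plus the max and a bucketed distribution.
class GcDurationHistogram {
    private final double[] boundaries; // upper bucket boundaries, ms
    private final long[] bucketCounts;
    private long count;
    private double sum;
    private double max = Double.NEGATIVE_INFINITY;

    GcDurationHistogram(double... boundaries) {
        this.boundaries = boundaries;
        this.bucketCounts = new long[boundaries.length + 1];
    }

    void record(double durationMs) {
        count++;           // equivalent to the "GC count" counter
        sum += durationMs; // equivalent to the "GC time" counter
        max = Math.max(max, durationMs);
        int i = 0;
        while (i < boundaries.length && durationMs > boundaries[i]) i++;
        bucketCounts[i]++;
    }

    long count() { return count; }
    double sum() { return sum; }
    double max() { return max; }
    double average() { return count == 0 ? 0 : sum / count; }
    long[] bucketCounts() { return bucketCounts.clone(); }

    public static void main(String[] args) {
        GcDurationHistogram h = new GcDurationHistogram(10, 100);
        h.record(5);    // fast young-gen collection
        h.record(8);
        h.record(250);  // one long stop-the-world pause
        // The two counters alone give an average of ~87.7 ms and hide the
        // 250 ms outlier; the max and the buckets expose it.
        System.out.println(h.count() + " " + h.sum() + " " + h.max());
        // prints "3 263.0 250.0"
    }
}
```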
E: It's also worth noting that we currently report the sum and the count today, but we don't include any dimensions on them. So you just know, in general, how much time was spent garbage collecting and how many garbage collection events took place, but you don't know, for example, which of those were stop-the-world events versus other types of, maybe, concurrent garbage collection events.

E: So even if we were to just go with the count and the time, I would want to add some additional attributes that allow you to interpret them more usefully.
A: And just to clarify for others: you're talking about the current state in the instrumentation repo, not the current state in the spec, because we don't have any GC metrics in the spec, correct.

B: Jack, is it true that if you reported the duration of every GC, then you wouldn't need counts? I guess that wouldn't be a metric anymore, would it?
E: Yeah, so yeah, you could report those as events. It'd be a...

G: But by "event" you mean, like, the JMX notifications, or JFR?

G: I was asking because maybe JFR, like the data set that JFR provides, is better. I haven't checked; I would just assume the newer thing.
E: Yeah, okay, so just going back to the histogram versus two counters. We'd have to have two counters, one for the time, one for the count. So why is a histogram worth, I guess, the extra data egress you pay for it being exported? Because it's a little bit bigger than a couple of counters, right. And so I guess the other bits of data that you get with a histogram: you get the max.

E: So you can know the maximum amount of time that a stop-the-world garbage collection event took. That's a useful bit of information, because you can identify whether, like, a spike in your P99 or P100 latency was caused by a stop-the-world garbage collection event. And the other thing you can do is see the distribution of your garbage collection events.

E: So with just the total time spent and the number of events, you can find the average garbage collection time, but you can't really see how that's distributed. So you can't see: are our garbage collection events fast on average, but are there some garbage collection events that take significantly longer, and if so, what does the distribution of events look like?
A: And if somebody wanted, they could configure this to be, like, a zero-bucket histogram, if all they wanted was min, max, average, count.
E: Yeah, yeah, so they could configure a single-bucket histogram, which gets rid of the distribution piece but retains the, yeah, min, max, sum, and count. (That's nice.) And I was gonna say you could use views to switch the aggregation from a histogram to a sum, but... that's true, you can switch it from a histogram to a sum, but we don't have a count aggregation right now. So, like, if you wanted to get the sum and the count, you couldn't do that via views.
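The view-based reconfiguration being discussed can be sketched with the OpenTelemetry Java SDK's view API; the instrument names here (`jvm.gc.duration`, `jvm.gc.pause`) and bucket choices are assumptions for illustration, not the names from the PRs:

```java
import io.opentelemetry.sdk.metrics.Aggregation;
import io.opentelemetry.sdk.metrics.InstrumentSelector;
import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.opentelemetry.sdk.metrics.View;
import java.util.Collections;

public class GcViewConfig {
    public static SdkMeterProvider build() {
        return SdkMeterProvider.builder()
            // Option 1: collapse the histogram to a sum. You keep the
            // total time but lose the count, max, and distribution.
            .registerView(
                InstrumentSelector.builder().setName("jvm.gc.duration").build(),
                View.builder().setAggregation(Aggregation.sum()).build())
            // Option 2: a single-bucket histogram (no explicit boundaries).
            // The distribution is gone, but min, max, sum, and count survive.
            .registerView(
                InstrumentSelector.builder().setName("jvm.gc.pause").build(),
                View.builder()
                    .setAggregation(
                        Aggregation.explicitBucketHistogram(Collections.emptyList()))
                    .build())
            .build();
    }
}
```

As noted in the discussion, there is no count-only aggregation, so a view can turn a histogram into a sum, but not into a sum plus a separate count.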
E: So if you're dropping all your attributes, and you're not recording any of those, then there'll just be one duration and one count metric. (Okay.) It'll be dependent on your backend what type of, like, math abilities you have, to divide those or whatever you want to do on those, yeah.

E: If you wanted to drop those dimensions, you wouldn't be able to see, for example, the breakdown between different types of garbage collector events. (Right, right.) You know, like I said, I still think that even if we were to go with just the duration and the time, we'd want to record these additional attributes that we're not recording today, which are the garbage collector that performed the event, and then the action that occurred, which is essentially, like, a type of garbage collection event.
A: Ogden was asking... I think he had sort of a... it sounded like it was just, okay, a preference.

A: Not a requirement, or a want.

E: It kind of brings up an interesting question, though. If you can represent the same information, the same type of thing, with a histogram or with a series of counters, when do you decide that it's worth the extra tax that you pay for histograms versus the two counters? I'm not really sure; it's, like, a subjective thing, right.
A: Yeah, I mean, with a one-minute granularity, even without the histogram, with the attribute of the cause, you could still see, you know, how much time was spent in, say, stop-the-world GCs during that minute (yep), which is pretty decent.

E: Yeah. You're essentially just trying to compare the additional analytical value of the extra data you get versus what it costs you to export that additional information. And that's a hard equation, because I think it's super subjective. Some people might get a lot of value out of that extra information, the distribution and having the max; other people might ignore that, and it just might be superfluous for them. So...
A: So we're probably going to need at least buy-in from somebody on the TC.

A: So anyway, it was fairly recent, so we can follow up with Ogden. Or Riley tends to be conducive to: if the Java folks approve a Java metric, then it's good enough for him, yep.
A: Yeah, I can ping Riley about both of these, see if he wants more approvals before looking at it or not. And just for other folks on the phone: even though these are gray check marks, meaning non-binding, versus a green check mark, meaning, you know, truly required, the TC does look at that, and it's like, okay, I see that the folks in the Java SIG are on board with this.

A: So your comments and reviews and approvals are meaningful, even though they aren't green. And Jack, to game the system, we should probably have one of us submit these PRs on your behalf. You can, because Jack does have a green approval... green check marks.
E: Well, hopefully we don't have to play the game too much longer, if we can get this done, because I think after this, most of the bases are covered in terms of JVM runtime metrics. So...

A: Oh, and you had a comment on one of the other PRs about holding off on merging until the spec catches up, to avoid user churn. Makes sense; no objection here.
C: The fact... like, how we're exposing the Prometheus endpoints inside the app: I don't remember exactly how we're doing it, but we're integrating into Spring, and then we have a Spring servlet of some sort that is exposing the metric endpoints for Prometheus to scrape. Is there... do we have a way where we can report the metrics generated by our APIs into an existing Prometheus registry inside the JVM?
E: Yes. So, okay, the Prometheus exporter that's in the core repo, it's called PrometheusHttpServer, and that does not allow you to do that, correct. That's just a raw HTTP server that, you know, reads the metrics and then presents them in the Prometheus format. But in contrib, I believe there is a component that does just what you're asking.

C: Well, so, I mean, that would be a little... maybe a little roundabout.

A: I mean, I think that was sort of the point of that component: it was a migration step, to make it easy to do what you want to do.
C: So Prometheus has, like, a JMX exporter of some sort that automatically grabs data out of JMX and publishes it on Prometheus. I don't want to switch any of that, because that way I don't have to change any of my dashboards or alerts that are based on the JVM metrics that are produced by Prometheus, right. (Yeah.)

C: So I'd like to keep all that stuff and then just start adding some custom metrics using the OTel APIs, rather than... I mean, I can use the Prometheus APIs, it's fine, it's not a big deal, but I'm trying to figure out a way that we can start using OpenTelemetry as kind of a gentle introduction for the rest of the team.
E: It's in OpenTelemetry Java. It used to be the implementation of our Prometheus exporter, but, you know, it takes a dependency on Prometheus, and so in order to rid ourselves of that dependency, we recreated that functionality, which is an HTTP server. So this just lives in an integration test module at this point; it's not something we publish, but I think it does what you're describing.

E: Like this... that call on line 44, yeah. Yeah, this registers it with the Prometheus default registry, yeah.
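The component isn't named in the recording, so as a rough illustration of the pattern being described (feeding OpenTelemetry-sourced values into Prometheus' default registry, the one an existing scrape endpoint already serves), a Prometheus simpleclient bridge might look like this sketch; the real code reads the SDK's full metric state rather than a single supplier:

```java
import io.prometheus.client.Collector;
import java.util.Collections;
import java.util.List;
import java.util.function.DoubleSupplier;

// Sketch: a simpleclient Collector whose collect() re-reads a value on
// each scrape. The DoubleSupplier stands in for reading OpenTelemetry
// metric state; this is illustrative, not the actual contrib component.
class OtelBridgeCollector extends Collector {
    private final String name;
    private final DoubleSupplier value;

    OtelBridgeCollector(String name, DoubleSupplier value) {
        this.name = name;
        this.value = value;
    }

    @Override
    public List<MetricFamilySamples> collect() {
        MetricFamilySamples.Sample sample = new MetricFamilySamples.Sample(
            name, Collections.emptyList(), Collections.emptyList(),
            value.getAsDouble());
        return Collections.singletonList(new MetricFamilySamples(
            name, Type.GAUGE, "Bridged from OpenTelemetry",
            Collections.singletonList(sample)));
    }
}

// Usage: Collector.register() with no arguments adds the collector to
// CollectorRegistry.defaultRegistry, so the metric shows up on whatever
// endpoint Prometheus is already scraping:
//   new OtelBridgeCollector("my_custom_metric", () -> 42.0).register();
```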
C: Because the collector... well, why would that be a... thing? Because...

E: Why don't you... can you click on that and then follow the link to the issue? Because John asked if it's still blocked by the spec, and I responded in the issue about this just this morning.
E: Okay, so the spec doesn't have context as an argument to this onEmit method, but I think it's just, like, an oversight; I don't think it's intentional. And yeah, I think it's kind of a no-brainer to include the argument here. It would be symmetric with SpanProcessor, so that's, like, a big positive thing, and omitting it as an argument just kind of... it's...

E: It's setting up logs to not work well with things like baggage and context, in a way that would just be silly. Like, I don't see what argument you would come up with to exclude it. It would kind of, like, kneecap log processing in a pretty serious way. So it's not in the spec yet; I'm gonna make a push to include it in the spec. But, you know, I don't know if we need to wait for that or not, I mean...

E: Ultimately, this is all experimental stuff, and the prototypes that we write help influence the spec, so I'm not sure. It's kind of a chicken-or-egg issue.
E: That'd be funky. So if you had an overload situation, then the SDK would call the more specific one, the one with two arguments, context and log record. And that one, you know, maybe you would just have a default implementation that delegates to the single-argument version, and I guess it'd be up to the people implementing log processors...

E: ...to understand that, you know, only one of these is going to get invoked by the SDK, and it's kind of silly to have implementations of both; you would just want to implement one, the more specific one, if you needed it.
A
But
that's
exactly
what
we
would
do
if
we
wanted
to
have
like
a
depretation
cycle
is
to
have
both
around
Mark
the
single
ARG
one
as
deprecated
have
that
default
method.
C: Yeah, I was just mostly trying to... I mean, I don't know how many people would have implemented log processors yet. It's probably only in the instrumentation repo at this point, likely; I don't know, I'm not sure. But I guess I'm trying to avoid API thrash for early adopters, so... even if it hasn't been stabilized yet. But yeah.

E: Yeah, so I'm happy enough keeping the old method around for some period of time, and maybe we don't even necessarily have to mark it as deprecated, because technically the spec is unresolved on this, yeah.

A: It does help to give a signal, though, because it's confusing to see two and wonder, like, which one do I override, right.

C: True. We needed... we need the experimental annotation.
C: Just to be clear, I don't feel super strongly about this. I'm not implementing log processors at the moment, but I just want to kind of try to be the voice of the customer as much as I can, yeah.

E: Hey, speaking of, this is actually something that I should have put on the agenda, but I didn't. So there is an effort to stabilize part of logs.
E
There's
there's
there's
several
different
Notions
about.
Oh,
you
know
what
exactly
the
event
is,
and
you
know
what
we,
how
we
name
fields
and
I
feel
like
you
know,
there's
there's
a
realistic
chance
that
we
bike
shed
this
for
a
long
time
and
so
there's
an
effort
to
kind
of
tease
those
two
things
apart.
Can
we
actually
stabilize
the
parts
of
logs
that
we
all
agree
on
while
leaving
the
events
part
experimental?
Well,
we
kind
of
do
the
inevitable
bike
shedding
and
so
I
wanted
to
get
I
guess
this
group's
opinion
on
that.
E: I think that's a great idea. I would love to see logs stable, at least in some capacity. But the implication would be that we would have a log appender API out there that doesn't have an events piece for some period of time, because that would be the remaining unstable part. And so, you know, that is kind of John's...

E: ...worst fear: another log API. You know, I'm a big advocate to, you know, have it but severely limit its usefulness, and basically necessitate using other log frameworks, so that you would only want to use the OpenTelemetry log API for bridging other log frameworks in. But yeah, let's talk.
E
Like
like,
do
we
want
to
rehab
like
I,
think
if
we
want
to
talk
about
what
an
event
is
the
place
to
do?
That
is
the
log
Sig
and
you
know,
there's
no
shortage
of
bike
shedding
about
that
specific
thing,
but
I
guess
what
I
want
to
talk
about
with
this
group
is
is.
Would
we
be
opposed
to
the
idea
of
of
including
the
log
of
Pender
API
while
like
as
in
potentially
marking
that
is
stable,
while
the
event
stuff
still
gets
sorted
out.
C: I'm definitely not opposed to that, but I agree with you wholeheartedly that that API should not, in any way, shape, or form, look like a general-purpose logging API. It should be super clear that it's there to bridge from... to be an appender, like, used for vendor implementations for existing logging APIs. Because, I mean, we definitely don't want to get into the business of having to essentially reinvent log4j, with all of the madness that is in those APIs.
E: Right, and I think the key there is actually on the SDK side of things. So, like, if you wanted to make OpenTelemetry logs more like log4j, you would have to have a rich ecosystem of processors and exporters that allowed you to do things like filtering by logger name and filtering by severity, and then export to all different types of locations, like different files and network locations, and this and that. And so I think...
C
I
think
there's
one
more
thing:
we
would
need
I
mean
logging
apis,
have
a
tremendous
plethora
of
convenience
methods,
also
right,
like
log.debug
log.info,
where
you
don't
explicitly
pass
in
the
severity.
We
need
to
absolutely
push
back
on
having
to
implement
those
sort
of
syntactic
sugar
methods
yeah
to
our
login
API
right.
So
that's
the
other
half
of
it.
I
agree
with
you
that
not
ever
building
this
Rich
ecosystem
that
would
be
required,
but
we
also
need
to
make
sure
we
don't
build
a
rich
API
that
would
be
required
for
it
to
be
generally.
A: So, Jack, do you think we would still have an experimental event API, or just remove it until there's some progress made at the spec level? Well...

E: That's something that's an open question right now, and I actually have an action item to go make a couple of proposals about that. So, exactly how would a language like Java, or another language, go about teasing these two things apart? Like, you know, is that even possible? And so I have a couple of ideas in my head about how we could do that, and I just need to, like, flesh them out on paper. So...
E: I'm not going to try to articulate the things right now, but look forward to it. I'm gonna open a spec issue that proposes a couple of different options, and so I'll include branches that have working code about what that might look like.
H: And can't we just use the event API to piggyback the log entries, but just, like, for transport or whatever, after the translation from the specific logging framework is done with an appender, or something like that?

H: Or the other way around. So, so I imagine that we'll be building appenders, and that the API that we will use will be more suited to integrate with those appenders, and from that interaction we will have events. So each line will be an event, and in theory we could use the event API to transport those entries; they are just logs.
E: An event is a more specific type of log; at least, that's the notion that OpenTelemetry currently has. It's experimental still, and it's being hotly debated. But the idea is that an event is a log which has a particular schema associated with it.

E: So to say: it has an event domain and a name, which are essentially an identifier for the type of thing that happened. And, you know, events that all share the same event domain and event name are expected to be semantically similar and share similar structures. So...
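The notion described here, an event as a log record identified by an event domain and name, can be sketched minimally; the `event.domain` and `event.name` attribute keys follow the experimental semantic conventions, while the classes are hypothetical stand-ins rather than the actual OpenTelemetry API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: an "event" is just a log record whose attributes carry an
// event.domain and event.name identifying its schema.
class SimpleLogRecord {
    final String body;
    final Map<String, String> attributes;

    SimpleLogRecord(String body, Map<String, String> attributes) {
        this.body = body;
        this.attributes = attributes;
    }

    static SimpleLogRecord event(String domain, String name, String body) {
        Map<String, String> attrs = new LinkedHashMap<>();
        attrs.put("event.domain", domain); // groups related event names
        attrs.put("event.name", name);     // identifies the event's schema
        return new SimpleLogRecord(body, attrs);
    }

    boolean isEvent() {
        return attributes.containsKey("event.domain")
            && attributes.containsKey("event.name");
    }

    public static void main(String[] args) {
        SimpleLogRecord plainLog = new SimpleLogRecord("app started", Map.of());
        SimpleLogRecord event =
            SimpleLogRecord.event("browser", "page_view", "{\"url\": \"/home\"}");
        // Every event is a log, but not every log is an event.
        System.out.println(plainLog.isEvent() + " " + event.isEvent());
        // prints "false true"
    }
}
```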
A: Yeah, so I think, Bruno... different people have very different, like, thoughts on events, and it's very valid. A lot of people think of events as these very low-level events, like you're describing. I think part of the conflict here is that several monitoring vendors have an event concept which is more like a business event for modeling.

H: But this has to be... we need to go to the fundamentals. I haven't checked Wikipedia, but I imagine that it refers to an event not very far away from what I said. It cannot be based on some interpretation of an event, or on what an event is based on legacy code that they have.
E
There's
there's
no
consensus
because
they're,
the
term
event
has
been
overloaded
so
many
times
and
so
we're
kind
of
in
a
tough
place,
which
is
why,
which
is
why
I'm
in
in
favor
of
kind
of
trying
to
decouple
these
things
and
try
to
deliver
value
to
our
users
by
stabilizing
the
parts
of
logs
that
we
all
agree
on,
while
kind
of
separating
out
events
into
something
that
we
can
continue
to
work
on,
but
don't
doesn't
necessarily
block
progress.
A
But
definitely
Bruno
check
out,
and
you
know
it
would
help
to
get.
You
know
more
perspectives
on
this
at
the
spec
level
because,
as
Jack
said,
it
is
being
very
hotly
debated
right
now.
E: Well, there's a log SIG that meets on Wednesdays at, I think it's... what is it, 10 a.m. PST, and so that's where most of the discussion takes place, and then asynchronously via issues, and Trask has linked to some of the issues.

E: I don't know; the discussion has been kind of circular, because, you know, there's lots of different perspectives, and then we end up repeating a lot of the same arguments that we've had before. And my take on it is that it's going to continue for a while, and we're gonna really struggle to come up with a definition that everybody agrees on.
A: I think the one thing that seems to have consensus is that the data model is called logs. So the lowest level is called logs, and that's the OTLP... I think that has been marked stable already.

E: You know, short of introducing a new signal type called events that, you know, is just another pillar, like metrics, traces, and logs (which I think is a much harder conversation), short of doing that, logs is the lowest-level thing, because the proto is stable. With that...
A
Bruno
I
think
this
issue
probably
is
the
most
concrete
around
your
question
about,
because
it's
I
think
tigrin
was
also
saying
I
think
he
even
linked
to
like
Wikipedia,
and
you
know
that
this
definition
of
events
is
like
low
level.
So
try
not
to
use
the
term
events
and
trying
to
come
up
with
a
new
term
that
more
accurate,
accurately
describes
what
people
want
from
this
business
type
event.
H: Just a question, just for confirmation. So the collector, the OpenTelemetry collector, and OpenTelemetry Java: the only thing that connects both is the OTLP protocol, and I don't know if we keep track of the protocol version, and if we change that, and when we change that.

H: But there is another thread, but it was based on the version of the specification that the Java implementation was using.
E
So,
in
terms
of
the
version
of
this
spec
that
open
Telemetry
Java
implements
at
any
one
time,
that's
it's
probably
not
checked
into
code.
The
closest
thing
we
have
is
that
we
we
tie
ourselves
to
a
particular
version
of
the
semantic
conventions
and
the
semantic
conventions
are
like
published
as
a
part
of
each
version
of
the
specification.
But
you
know
if
we
were
to
say
that
we're
compatible
with
this
version
of
the
specification
in
you
know
reflecting
the
kind
of
the
behavioral
things
like
what
are
the
names
of
the
methods
which
apis
are
available.
E
It
would
kind
of
be
tricky
to
do
also,
because
you
know
the
Java
implementation
is
kind
of
a
work
in
progress
related
to
this
spec.
Like
there's
this,
have
you
seen
the
the
spec
compliance
Matrix
in
the
specification.
E
It's
in
the
root
of
the
project-
and
you
know
this
describes
Which
languages
have
implemented
which
features
of
this
spec,
and
so
you
know
this
Java,
like
our
column,
changes
from
version
to
version
and
over
time
here,
but
we're
never
we're,
never
a
complete
implementation,
or
at
least
we
haven't
been
up
to
this
point.
And
so,
even
if
we
said
that
hey
open,
Telemetry,
Java
version
1.20
supports
version
1.12
of
the
spec,
it
would
be
like
a
partial
implementation,
because
there'll
be
like
kind
of
caveats
where
we
haven't
implemented.
Something
yet.
H: So, not that I'm very concerned about that on Quarkus, but things like MicroProfile, they will lock a version. So they will say that they are using spec 1.14 (I think it's the latest) and Java implementation 1.19.

H: And they will create a... we will create a TCK, this toolkit that will test a bunch of features, and that the other implementations of MicroProfile can use to assert validation with that. So it's kind of important to track those versions. We can always look, but, well, it's hard to understand. And with the collector as well, because we have a version of the collector, it's not even implemented in Java, and, well, for certification purposes...
H: We can certify the latest collector that was being used at the time, but we don't know the ranges of things that are compatible. So at some point this is going to be a problem.

A: So this is for other people who are implementing MicroProfile, since there can be multiple implementations, and they would run that TCK.
E: Well, so now that the protos are stable, the collector should be compatible with OpenTelemetry Java's... you know, the protos that it emits, for the foreseeable future. So, you know, we'd have to have a 2.0 version of the protos, which, you know, is not even being vaguely discussed. Like, I would imagine... I would be surprised if that happened within the next decade.
E: So I guess the only question comes down to, like, the behavioral aspects of the specification, like which APIs exist and what are their arguments. And that's such a tricky thing, because, like I said, you know, we're, like, evolving constantly to do our best to meet the guidance from the spec, and we also have the ability to make decisions from time to time where we kind of deviate from the spec, and, like... that's acceptable.

E: So, you know, the spec says, you know, that a language may do this, and we might take advantage of that to not implement portions of the spec. So, you know, I think trying to say that we're tied to a particular version of the spec is just... you know, you're gonna need to qualify that with a lot of exceptions, and that's going to be a really big, like, task, to qualify that, to, like, you know, come up with all the differences.
H: Yeah, so for now the TCK is quite simple; it doesn't have that many tests, but it will increase with time, I expect, as people use more and more features and expand the number... because we want to have, like, mandatory instrumentation for the application servers, or whoever is implementing this, and they will always look standard, out of the box, independently.

H: Yeah, so everything that's experimental shouldn't be in the certification process.
A: All right, I've gotta play timekeeper, yep. Great to see y'all. Bruno, would you like to talk about, maybe within the MicroProfile context, this issue next week? I don't know if you have context for this from Emily, or if she can join. Oh...

H: The... stable next week? Okay.