From YouTube: 2023-03-07 meeting
Description: cncf-opentelemetry@cncf.io's Personal Meeting Room
B
Hello, everyone. So Carlos added the first item in the agenda, so maybe let's wait a minute, and if he doesn't join we can start from the second one.
C
With that, probably yes, totally. So yeah, basically I put there some items that need some review. The first one is about DB statement cleanup (sanitization, sorry), and there is general agreement. I think now there are just final details that we would like to clarify.
C
I don't know whether people have had a chance to review that. It's basically the question of how restrictive it should be, because some implementations already do that, and yeah, the wording: basically, how strong should it be. Amir... I think Amir is not here; he's usually in the messaging group, but he was giving some advice on that front.
C
The
answer
from
the
author
is
that
putting
that
shoot
back
would
leave
things
the
way
they
are
at
this
moment
instead
of
having
a
strong
suggestion.
So
please
review
that
I've
approved
that
if
you
think
it's
it's
a
good
start,
otherwise
I
will
probably
just
talk
to
Amir
directly
or
popping
him
there,
and
otherwise.
We'll
emerge
in
this
I
think
now
we're
in
a
safe
spot
because
we
released
yesterday.
So
we
still
have
time
too
remember
the
change.
We
think
so
any
comments
on
the
front.
C
Not anymore, honestly. I was very tempted to merge it, but since it was slightly political at the beginning, I wanted to allow everybody to have a final look before I merge it. Okay, perfect. The next one: Alex, the Elastic Common Schema donation. I see you're online, so do you want to start? Yeah.
D
Thanks, yeah. This is an update on the topic. We at Elastic had discussions during the last few weeks, and we came to a conclusion, to a decision: we're happy to do a full donation of ECS as proposed, which would basically mean a merger of the Elastic Common Schema and the OpenTelemetry semantic conventions. I updated the corresponding OTEP yesterday, so please have another review of the updates. Also, Tigran, you mentioned something here last time, and I hope it is addressed.
D
That is addressed now, and yeah, looking forward to the next steps. Please let us know if there are any other concerns or questions with the current update of the OTEP, and I hope we can proceed with that soon.
B
Okay, yeah, that's great; thanks for the update. I believe both the governance committee and the technical committee are planning to discuss this topic. We haven't had a chance to discuss it yet, so thank you for the update, and we'll take a look at it.
D
Yeah, I think the general direction changed: instead of "both schemas must coexist," our proposal is now to really merge them, and I think this basically addresses all the main open questions that were commented on the OTEP, if not all of them.
B
Okay, so I guess let's do this: we'll take the updated OTEP and we'll take it from there.
C
Okay, in that case, thanks so much, Alex. Let's move to the next one. Let me share my screen; there are only a couple of items, and after that Trask will take over. So let me quickly share my screen... there we are, thank you so much. So basically, this is about adding AddLink (or RecordLink).
C
You
know,
as
you
may
have
heard,
we
have
here
some
options
and
we
we
want
to
I,
wanted
initially
to
start
discussing,
adding
Dev
like
it's
an
operation
that
can
be
implemented
one
way
or
another
like
at,
for
example,
adding
that
through
adding
links
to
end,
but
then
Jamaica
added
a
comment
about
some.
Basically,
you
know
how
this
will
impact
sampling
that
he
had
mentioned,
but
this
is
in
this
comment,
he's
actually
addressing
that
in
more
detail.
C
I
guess,
that's
probably
an
interesting
thing
to
discuss
here
now
is
that
he
was
wondering
what,
if
we
could,
where
to
have
those
extensions
for
languages
that
find
it
hard
to
have
new
members
to
interfaces.
What
if
we
have
like
a
module
or
back
crash
or
Global
level
functions,
you
know
like
trace
spandlingo,
for
example.
So,
instead
of
adding
this
to
your
to
your
span,
I'm
breaking
you
know
the
interface
for
such
languages.
You
have
this
functionality
this
way.
C
Probably
this
is
not
as
uniform,
but
this
could
be
of
great
help,
and
this
could
you
know
it
would
this?
Will
this
would
help
go
on
C,
plus,
plus
and
probably
other
languages
that
are,
you
know,
specific
needs
for
that
for
the
rest
of
the
languages
it
could
be
like
you
know,
just
probably
just
add
that
to
the
span
interface
directly
instead.
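As a rough illustration of the two shapes being discussed (a new method on the Span interface versus a standalone module-level function), here is a minimal Python sketch; the names `add_link`, `Span`, and `Link` are hypothetical stand-ins, not part of any OpenTelemetry API.

```python
from dataclasses import dataclass, field

@dataclass
class Link:
    trace_id: str
    span_id: str
    attributes: dict = field(default_factory=dict)

@dataclass
class Span:
    # In the current spec, links are fixed at span start time.
    links: list = field(default_factory=list)

# Option A would extend the interface itself (span.add_link(...)), which
# is breaking in languages where adding members to a published interface
# breaks implementors, e.g. Go and C++.

# Option B: a module/global-level function, so the Span interface is untouched.
def add_link(span: Span, link: Link) -> None:
    """Record a link on an in-flight span without changing its interface."""
    span.links.append(link)

span = Span()
add_link(span, Link(trace_id="0af7651916cd43dd", span_id="b7ad6b7169203331"))
print(len(span.links))  # 1
```

The trade-off voiced in the meeting is visible here: the function works, but it is not scoped to the object it acts on, so discoverability suffers.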
E
I don't know; I think it's going to be a question of what kind of interface that would actually provide to users, which may motivate trying to break the Span interface instead of doing this. But I mean, that's definitely something that we've considered in the past, adding functions. The problem is that the function is not scoped to the thing it acts upon, and so it becomes a little bit more confusing as to what can act upon a span at this point. But yeah, it is something we've considered.
E
I do want to ask a little bit about jmacd's comment, because he talks about sampling. Are we ready to go ahead with this sampling strategy? Because one of the things I'm thinking about is that this is a reason not to add it to the OnEnd method of the span; that would be for the sampling. But if there's no reason to actually do the sampling right now, you could always add a solution in the future to add links in between, at some point between start and end.
F
Right, I'll just weakly support my own opinion. I did offer the idea of a structured encoding of the event to help with the compatibility issues, and I also offered a couple of reasons why I think putting those events into the tracer in real time could be useful for an observability solution.
F
I think the sampling topic is perhaps a little speculative to use as justification. I did want to mention that one of my biggest hits back in my prior work on tracing at my old employer was essentially these links being annotated in a way that you could click through them in your zPages.
F
So
you
have
a
z,
Pages
app,
that's
watching
all
your
spans
and
you
go
to
that
Z
page
to
say.
What's
happening
to
my
expands
being
able
to
see
the
links
that
have
been
created
in
the
middle
of
the
operations
can
help
you,
because
if
those
are
are
spans
that
are
blocked,
you
can
actually
click
through
them
to
find
what's
blocked,
and
so
you
might
have
a
you
know:
microservices
Arrangement,
where
you
have
many
services,
you
go
to
the
Z
page.
F
Your
links
are
all
real
time
available,
so
you
can
click
through
them
and
find
what's
the
100
deadlock
and
so
on.
It's
just
a
use
case
that
I've
seen
in
the
past,
so
I
thought
I'd
recommend,
suggest
it.
I
don't
have
too
many
strong
feelings
about
this,
though.
E
Yeah, that's kind of an interesting one. You mentioned that you provided a data model for the events linking; is that in the other PR?
F
I
think
I
was
just
describing
it
in
words
in
this
comment.
Above
you
know,
they're
they're,
the
the
only
available
sort
of
interfaces
that
we
have,
that
we
can
extend
are
to
add
semantic
sort
of
interpretation
onto
the
event
or
to
add,
like
a
new
field,
to
the
end
options
of
the
span
and
I.
Think
I
was
trying
to
say
that
encoding
a
span
link
as
a
set
of
structured
attributes.
You
know
you
have
a
trace
ID
and
a
span
ID,
and
you
have
some
more
attributes
that
were
provided.
F
That's
the
structure
of
a
link
and
it
does
fit
into
our
data
model
pretty
well
and
there's
this
parallel
PR
that
tigrett's
been
promoting
I,
think
that
has
us
adding
dictionary,
valued
attributes
and
list
valued
attributes,
and
that
gives
you
a
way
to
represent
span
links.
That
was
the
comment
that
I
made
yeah.
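A rough sketch of that encoding idea: a span link expressed as structured attributes on an ordinary span event. The event name and attribute keys (`link`, `link.trace_id`, and so on) are made up for illustration; they are not defined in the semantic conventions.

```python
# A span link is essentially (trace_id, span_id, attributes). With
# dictionary-valued attributes, it can ride along as a regular span event.
def link_as_event(trace_id: str, span_id: str, attributes: dict) -> dict:
    """Encode a span link as the attributes of a span event (illustrative keys)."""
    return {
        "name": "link",                 # hypothetical well-known event name
        "attributes": {
            "link.trace_id": trace_id,  # illustrative attribute keys
            "link.span_id": span_id,
            # This nesting is what requires dictionary-valued attributes.
            "link.attributes": dict(attributes),
        },
    }

event = link_as_event("4bf92f3577b34da6a3ce929d0e0e4736", "00f067aa0ba902b7",
                      {"messaging.batch.index": 3})
print(event["attributes"]["link.span_id"])  # 00f067aa0ba902b7
```

The compatibility concern raised later in the discussion also falls out of this shape: a consumer would have to look for links both in the span's link list and in events of this form.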
E
I,
this
is
I,
think
one
of
the
things
that
I
wasn't
quite
getting
there's
another
PR
Carlos.
That
I
think
you
opened
about
adding
span
links
as
events
in
the
add
event,
method
and
Yuri
was
against
it,
based
on
income
being
incompatible
with
our
data
model
of
the
event
and
I
I.
Don't
I
didn't
see
the
incompatibility,
so
I
I
was
kind
of
confused,
but
I
also
haven't
had
the
time
to
respond
to
that
issue.
C
No, basically, the thing is that we were trying to think of alternatives. The problem is that, indeed, as somebody who has to ingest that data, you have to check for both the actual links and the links that are coming through as events. There was even a discussion about maybe deprecating links the way they exist now in our proto representation and just using events all the time, for example, but that didn't solidify into anything.
E
Okay,
I
I
kind
of
remember
the
last
time.
I
had
this
conversation
in
this
meeting.
We
had
this
ending
point
where
we
realized
that
we
actually
needed
to
have
a
better
understanding
of,
like
the
data
model
of
both
links
and
events
before
we
were
able
to
I
think
make
a
determination
on
that
one.
C
So
honestly,
I
think
that,
like,
for
example,
one
of
the
options
that
it
was
that
was
mentioned,
like
basically
deprecating
links
and
go
all
the
way
through
events-
it's
it's
not
a
bad
one.
It's
an
interesting
one,
but
also
it
would
mean
the
fact
of
breaking
a
lot
of
people.
You
know
like
doing
some
stuff
like
behind
the
scenes,
the
practitioner
and
stuff
already
so
I.
Don't
see
that
as
something
that
we
would
do
honestly
immediately,
probably
long,
not
even
long
term
medium
term.
C
I
can
imagine
us
trying
to
you
know,
go
that
way
once
this
is
validated,
but
for
for
you
know,
for
the
for
the
current
time,
I
don't
see
honestly
that
happening.
I
can
imagine
a
tab
and
a
lot
of
discussion
there
on
the
front
and
I
I
and
to
be
honest
with
these
good
people
blocking
us
for,
like
six
months,
if
I
were
to
guess.
C
Yeah,
actually,
that
that's
I
guess
that's
my
point
why
I
would
be
hesitant
now,
after
these
questions
happened
about
adding
links
as
events,
because,
indeed
such
discussion
hasn't,
you
know
and
I
mean
and
there's
only
some
of
us
involved
in
that
discussion
once
we
bring
the
rest
of
the
community,
can
imagine
like
a
lot
of
more
Cycles,
like
you
know,
being
required
to
accomplish
this
changes
that
were
to
happen.
C
But
if
I
were
to
guess,
I
mean
yeah
I,
don't
know
like
yeah
I.
Think
that
honestly
I
see
that
we
have
a
few
imperfect
Solutions
here.
So
the
question
is
which
one
are
we
going
to
take
on
that
way?
So,
let's
continue
discussing
another
line.
Only
somebody
has
something
to
say
here
now
that
we,
you
know
we
could
discuss,
but
basically
it's
like,
which
ones
which
one
from
these
imperfect
Solutions
we
can
go
with.
C
The
the
messaging
working
group
really
needs
this
for
reasons
that
we
explained
in
the
past
and
we
need
to
be
unblocked
but
yeah.
Basically,
the
thing
is
which
one
from
this
imperfect
solution
is
the
one
that
would
make
things
work.
More
is
fine.
I
also
don't
see
lalit
here,
who
is
a
C
plus
plus
maintainer.
Let's
support
him
for
his
opinion,
so
yeah.
C
Okay,
we'll
put
a
comment
in
that
case
about
this
short
discussion
we
have
moving
on
then,
let's
go
to
about
this
one.
The
clearing
Hotel
PS
table
I
blocked
that
for
now,
basically,
all
these
signals
are
stable
already.
So
basically
would
like
to
know
Mark
what
will
be
a
stable
in
general.
It
makes
sense,
but
there's
an
issue
that
in
in
the
Proto
repo
about
this
and
yeah,
basically,
it's
Duty
Grand,
just
basically
mentioning
what
are
the
conditions
to
declare
otlp
1.0
as
a
full
thing.
B
Yeah, so there are two parts here, right? One is marking the documents in the specification repo as stable, and those documents don't contain the details of the protobuf definitions; those are in a separate repository. And for the protobuf definitions, there is a separate discussion about stronger guarantees, stronger than what is necessary for wire compatibility.
B
Strictly
speaking,
the
protocol
is
all
about
wire
compatibility
right
so
that
two
participants
can
speak
and
understand
each
other.
For
the
protobots,
though,
we
want
to
give
also
source
code
level
compatibility
guarantees
so
that
whoever
is
consuming
the
product
files
can
rely
on
that.
That
is
not
settled.
What
exactly
we
want
to
guarantee?
We
we
we,
we
do
not
have
the
agreement
yet
right.
There
is
still
under
the
discussion.
B
I
don't
know
if
we
need
to
wait
for
that,
though
right
we
can
the
the
sections
that
describe
the
behavior
of
the
protocol
in
the
specification
repo
they
are
at
the
moment.
I
think
we
we
were
going
with
the
subsections
individually,
marking
them
as
a
stable
and
they
are
ready,
and
now
they
are
all
essentially
marked
as
stable.
So
what
the
the
only
thing
that
is
remaining
there
is
to
just
remove
the
labels
for
top
sections
and
just
Mark
the
entire
document
as
stable.
B
Do
we
want
to
wait
additionally
for
this
one
for
the
Proto
repo
to
also
to
to
be
marked
1.0,
essentially
the
source
code,
a
protophiles
source
code
compatibility
to
also
be
stable
before
we
do
the
spec
document
we
mark
it
stable
I
personally,
don't
think
so,
but
maybe
others
have
thoughts
that
that
maybe
we
should
be
doing
that
I
don't
know
so.
G
I have a question. I believe people are asking for this because they want to start using the generated code, mark my words or not. This is why they want to mark that repo 1.0: they generate the artifacts, they put 1.0 there, and they start spreading it in the world. Is this something we are feeling confident offering?
B
As I said, I don't know what the answer to that is, and you're right: that's probably one of the reasons why people want to mark the proto repo 1.0. But my question was different. Do we need to settle this question before we can mark the OTLP pages in the specification repo as stable, or not?
B
That's what I think as well, right. So if that is the case, then we can go ahead and do that and, independently from that, make a decision on what we guarantee for the source code of the proto files.

H
Can we set a time box for establishing that? We've been talking about this for months, on and off, and not making progress. How do we get to that point?
G
Yeah, but what I think this is about: so there are pages in the spec that are not considered stable, and what I'm hearing from Tigran is to mark these pages in the specification as stable. Is that correct, Tigran, or not? Yes.
B
So, just to be clear, right, what does marking the OTLP specification as stable mean? It means that the observable behavior of the protocol (it's a network protocol) is going to be stable; we're not going to break anything. So the wire encoding, how the client and the server behave, those can't change. Independently from that, I think (that's my opinion) that's independent.
B
There
is
a
definition
of
the
of
the
wire
of
the
wire
format
in
the
form
of
protophiles
right
expressed
in
in
a
product,
buff
language,
and
that
is
still
in
zero
point
x
right
that
is
not
declared
stable
and
we're
debating
about
what
how
that
should
be
declared
stable.
What
what
even
stable
means
for
that?
What
are
the
guarantees
for
that
I?
B
I think that's an independent discussion, but I may be wrong. So I'm hearing, I guess, Anthony, you're saying those two are coupled, so we can't say that the protocol is stable unless we also agree on the definitions being stable source-code-wise. Is that what you're saying, or are you just asking for more concrete plans for making the proto definitions stable as well?
B
I am not saying we shouldn't be declaring the proto files stable; I think we should. It's just that I'm saying we don't have to block marking the spec document stable until we have a decision on this. But if the broader thinking is that they are interdependent, then I don't mind.
B
I agree with you, yeah; we shouldn't be doing that. At the very least we have to, I guess, fully conclude that open issue; it is open, and we can't close it until we've concluded it. We have to come to an agreement there on the proto issue, on 1.0 for the proto repository. I agree with you: this can't be declared a victory without that. I agree.
B
I think we keep it there and we revive that issue. I guess it was blocked by a couple of issues that are now resolved, and we can continue working on it. I don't know what the ETA is, but I agree with Anthony that we need a conclusion on that one as well; soon, quickly, not in many more months, right.
C
I
guess
that
okay,
so
in
that
case
only
somebody
has
a
strong
opposition
to
Mark
in
these
documents,
specifically
because
each
subsection
is
stable,
a
stable,
please
because
we
don't
have
enough
time,
but
just
for
this
specific
document,
please
add
a
comment
in
that
issue:
sorry
in
the
pr
or
even
block
it.
If
you
feel
strongly
against
that,
otherwise
I
will
unblock
it
myself
and
let's
see
what
happens
and
yeah,
let's
definitely
keep
this
issue
around.
C
Actually, it's fixed, no problem; yeah, no problem. As many of you have heard, there's an issue with instrumentation. In such instrumentation we have resource detectors, which, as you know, detect the resources based on your environment, like Kubernetes, for example: if you are in Kubernetes, you get Kubernetes attributes in the resource. And one of the questions is why we wouldn't do something like that here, among other situations. The situation here is basically service.name when there's no value specified by the user.
C
Quite
often
the
unknown
value
label
or
the
strain
is
very
bad.
So
there's
a
few
discussions
here
happening
about
what
to
do
there
and,
as
I
said
before,
there
was
one
suggestion
about,
for
example,
trying
to
use
this
resource
detectors
idea,
but
instead
of
you
know
going
directly
to
set,
you
know,
service
name,
I,
don't
I,
wonder
whether
NEC
has
done
this
before,
as
I
said
before
it
could
be
resource
detectors,
but
specifically
for
service.name
Java.
J
Java has a couple of resource detectors out there that try to, you know, provide a service name if none is available. It uses this resource detector concept, but the priority is lower than if the user were to specify it. So, in Spring applications and other situations where a library or web framework is present and may provide a useful service name, we try to inspect it and obtain it from there.
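The precedence scheme being described can be sketched minimally in Python: detectors are tried in priority order, an explicitly configured value always wins, and a framework-derived name beats the unknown default. The function name and the `unknown_service` fallback string here are illustrative, not the actual SDK API.

```python
def detect_service_name(user_value=None, framework_value=None):
    """Pick service.name by priority: user config > framework detector > default."""
    detectors = [
        lambda: user_value,        # explicit configuration, highest priority
        lambda: framework_value,   # e.g. a name inspected from a web framework
        lambda: "unknown_service", # fallback when nothing else is known
    ]
    for detect in detectors:
        name = detect()
        if name:
            return name

print(detect_service_name(framework_value="spring-petclinic"))
print(detect_service_name())
```

The design point from the discussion is that the SDK itself stays unchanged; a detector like this is an extra layer that merely feeds a resource into the existing SDK configuration.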
K
I think it's worth probably calling that out explicitly, because otherwise it may be confusing to understand whether it's possible, or whether we need to fall back to this default process name. So I think we should just call out, with some note or something, that it's still possible for it to be supplied by the resource detector, and I think that's good enough.
C
Well, just to clarify: I think, for example, the way Java does it is that you have the SDK, as the specification says, and you provide an artifact on top of the SDK that does this, and that artifact passes the information to the SDK. So basically you keep the SDK the way it is now; it's just an extra layer on top.
H
That's how it is used, right. I think at least that's how it is in Go: the SDK takes a resource, which the user may have configured through using resource detectors or may have constructed manually.
C
Okay,
for
the
sake
of
time,
I
think
that
Jack
said
you
mentioned
that
Java
has
this.
If
you
could
just
put
a
link
in
the
chat
or
in
the
doc
or
in
the
issue
itself,
just
to
Showcase,
you
know
how
Java
is
doing
that
and
probably
just
say
that
this
is
probably
the
option.
One
in
that
case
would
be
Robert
going
forward.
C
If
that
makes
sense
and
yep,
we
can
probably
add
a
note
somewhere,
like
you
know,
like
or
I,
don't
know,
I
think
that
this
feels,
like
it
overlaps
with
the
configuration
group
not
in
what
they
need
to
solve.
It's
just
like
you
know
what
kind
of
area
they're
covering
what,
of
course,
not
related
to
more
work
for
them,
but
anyway
we
can
probably
add
a
note
somewhere.
C
Okay,
so
that
okay,
so
basically
that
hopefully
unblocks
people-
and
you
know
the.net
information
group-
can
move
forward
with
the
resource
detectors
and
bigger
than
authoritical
and
at
least
and
let's
move
on,
we
sorry,
let's
move
to
have
a
discussion
here.
Let's
see
whether
the
concept
of
ritual
detectors
helped
you
or
not.
Hopefully
it
does.
C
If
not,
let's
discuss
that
next
quicker
again:
okay,
oops
I,
don't
know
what
to
do
here.
Can
we
split
that
remaining.
J
Do
you
think
we
can
time
access
to
10
minutes
instead
of
15
to
20.
yeah.
B
Can you guys see it? Okay. So we at the logging SIG have arrived at a point where we'd like to present to you what we have. It's about what we call the Logs Bridge API and SDK. We are preparing to mark it stable, and before we do that, we wanted to share what we have and get your thoughts, with the community.
B
So
let
me
I,
guess
briefly
tell
you
what
we
I
guess,
we'll
clearly
I'll
tell
you
what
we
we
have
so
far.
What
is
already
stable,
then
what
that
new,
logs,
Bridge,
API,
msdk
SDK,
looks
like
and
then
Jack
will
show
you
the
Prototype
one
of
the
prototypes
that
we
have
in
Java
and
and
then
we'll
tell
you
what
we
plan
to
do
next
after
that,
so
what
what
is
already
stable
very
quickly?
B
We
have
a
data
model
for
logs
logic,
Digital
Data
model,
which
explains
what
a
log
record
is.
We
have
the
the
representation
of
that
log
record
in
otlp
in
the
protocol,
and
this
both
of
these
are
already
declared
stable
right.
We
are
not
going
to
be
making
breaking
changes
to
to
any
of
this
part.
What
we're
working
on
right
now
is
is
an
API
and
SDK
specification
for
open
Telemetry,
and
that's
the
part
that
would
like
to
talk
about
today.
So
we
call
this
a
logs,
Bridge,
API
and
SDK.
B
B
B
The purpose of the API is to serve as a bridge between existing logging libraries (things like, for example, Log4j in Java) and OpenTelemetry. So if you use, for example, Log4j in your application, as you normally would to output logs, you can continue using it as it is and just configure it to output all those logs through OpenTelemetry, in an OpenTelemetry-compliant manner. Essentially, we're bridging your existing code to start working in an OpenTelemetry way.
B
The
API
itself
is
quite
similar
to
what
we
have
for
tracing,
so
we
have
the
logger
provider
corresponding
to
Tracer
provider.
We
have
logger
similar
to
Tracer.
We
have
a
processor
log
record
processor.
We
have
the
log
record
exporter,
we
intentionally
try
to
keep
this
consistent
and-
and
what
is
this
not
right?
What's
important,
this
library
is
not
is
not
a
general
purpose.
Logging
Library,
which,
which
is
a
replacement
or
similar
to
what
many
languages
have
so
you.
You
are
not
expected
to
be
using
this.
B
This
API
calling
the
API
directly
instead
of,
for
example,
using
Block
14
in
your
Java
applications.
This
is
also
intentional
in
the
future
we
may
introduce
an
API
that
is,
that
is
intended
by
to
be
called
directly.
But
that
is
not
what
we're
doing
right
now
and
and
and
that's
intentional
right.
We
we
want
to
make
sure
that
the
currently
existing
libraries
continue
to
be
used
by
users
and
we
make
it
easier
for
them
to
use
it
with
open
Telemetry.
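The bridge pattern described above can be sketched with Python's standard `logging` module, since a handler is exactly where such a bridge hooks in. The `OtelBridgeHandler` name and the log-record dictionary shape below are illustrative only; this is not the real OpenTelemetry Python appender.

```python
import logging

class OtelBridgeHandler(logging.Handler):
    """Illustrative bridge: converts stdlib records into an OTel-like shape.

    A real bridge would hand each record to the Logs Bridge API; here we
    just collect dictionaries so the flow is visible.
    """
    def __init__(self):
        super().__init__()
        self.emitted = []

    def emit(self, record: logging.LogRecord) -> None:
        self.emitted.append({
            "body": record.getMessage(),
            "severity_text": record.levelname,
            # Logger name maps to the instrumentation scope name, as noted
            # later in the demo.
            "scope": record.name,
        })

# The application keeps using the logging library it already has; only the
# handler configuration changes.
bridge = OtelBridgeHandler()
logger = logging.getLogger("checkout")
logger.addHandler(bridge)
logger.setLevel(logging.INFO)
logger.info("order placed")

print(bridge.emitted[0]["body"])   # order placed
print(bridge.emitted[0]["scope"])  # checkout
```

This mirrors the stated design goal: application code never calls the bridge API itself; only the appender/handler does.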
B
So
essentially,
this
is
what
quality
is
going
to
look
like
right.
You
have
an
application.
You
have
a
long,
a
logic
Library
there
that
your
application
calls
or
4G.
For
example,
there
is
a
bridge
there
right
that
is
created
and
it's
part
of
open
climate
instrumentation
and
there's
a
logs
Bridge
API,
that
is
part
of
the
API
package
and
the
SDK
and
part
of
our
regular
SDK
package.
B
This
is
how
we
feed-
and
this
is
the
prototype
implementations-
that
we
have
that
correspond
to
this
picture,
so
we
have
prototypes
in
five
languages
and
products
include
of
the
API
and
the
SDK
implementations,
and
also
implementations
of
those
those
bridges
that
I
was
talking
about
right
so
that
you
can
use
with
log4j
or
log
back
in
Java,
with
a
logging
Library
python
the
standard
one
and
would
like
to
show
you
an
example
of
what
it
looks
like,
particularly
in
Java
Jack.
Do
you
want
to
take
it
away?
J
Yeah, I'll take over the share... all right, briefly. So you can follow along, you can click on the link in that presentation, which is included in the meeting notes, to look at the source for this. In particular, this is a simple Java application which has been configured to showcase the various appenders that we've implemented, and how you can use those appenders to bridge logs from Log4j, Java util logging, SLF4J, and Logback into OpenTelemetry, and ultimately out to a network location over OTLP.
J
In
this
case,
The
Collector,
you
know
what
this
quote
is
going
to
do
is
we're
going
to.
You
know
we're
going
to
initialize
open
Telemetry,
and
you
know
what
that
what
that
looks
like
is
we
configure
the
open,
Telemetry
SDK
with
the
logger
provider?
We
configure
a
resource
for
that
logger
provider.
We
configure
a
log
record
processor,
which
is
the
batch
log
record
processor,
and
that's
going
to
be
configured
to
you,
know
batch
up
log
records
and
send
them
over
otlp
to
a
collector
running
on
4317.
J
I,
have
you
know,
I
have
the
collector
configured
via
Docker
and
the
The
Collector
is
going
to
just
it's
going
to
accept
the
log
records
via
otlp
and
it's
going
to
log
them
out
to
the
console
using
the
logging
exporter
just
to
kind
of
show
what
this
looks
like
you
know
going
back
to
what
the
application
is
going
to
do.
There's
a
bunch
of
code
in
here
that
shows
a
variety
of
different
scenarios.
J
So
you
know
this
code
shows
using
the
log4j
API
and
it
logs
records
ins
with
the
contacts
of
a
span
and
outside
of
the
context
of
a
span.
So
we
can
see
Trace
contacts
propagation.
It
shows
structured,
logs
and
and
regular
old
logs.
You.
J
Here
we
have
slf4j,
which
is
another
popular
logging
API,
probably
the
most
popular
logging
API
in
the
law
in
the
Java
ecosystem.
Slf4J
is
configured
with
log
back
and
log
back
is
configured
within
a
Pender
that
routes
the
data
to
open
Telemetry.
You
know,
similarly
for
Joule
Java
utility
logging,
that's
what
Joule
is
an
acronym
for
and
then
finally
down
here.
You
know
this
shows
kind
of
how
you
can
use
the
API,
the
log
Bridge
API
directly.
This
is
the
type
of
thing
that
an
appender
would
be
expected
to
use.
J
You
know
normally
I
would
you
know
elaborate
on
some
of
this
a
bit
but
we're
kind
of
crunched
for
time.
So
I'm
going
to
show
things
more
briefly,
and
you
can
go
look
at
the
source
code
in
asynchronously,
so
just
to
kind
of
show
this
kind
of
an
end-to-end
example.
I
guess
I
want
to
show
one
more
thing,
so
we
have
in
this
application.
We
have
some
configuration.
J
This
is
the
log
for
J
configuration
and
this
tells
log4j
to
say:
hey,
take
all
your
logs
print
them
to
the
console,
also
print
them.
You
know,
send
them
to
the
open,
Telemetry
appender
and
that
appender
is
going
to
actually
Bridge
them
into
the
open
Telemetry.
You
know
Bridge,
API
and
SDK
there's
a
similar
configuration
for
log
back.
This
is
unusual
to
have
both
log4j
and
log
back
configured.
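A Log4j 2 configuration of roughly that shape might look like the following. Treat the `OpenTelemetry` appender element and the package name as assumptions based on the OpenTelemetry Java instrumentation appender; check the instrumentation repository for the current names.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN"
               packages="io.opentelemetry.instrumentation.log4j.appender.v2_17">
  <Appenders>
    <!-- Keep printing to the console as before -->
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{HH:mm:ss.SSS} %-5level %logger - %msg%n"/>
    </Console>
    <!-- Also hand every record to the OpenTelemetry bridge -->
    <OpenTelemetry name="OpenTelemetryAppender"/>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Console"/>
      <AppenderRef ref="OpenTelemetryAppender"/>
    </Root>
  </Loggers>
</Configuration>
```

The same dual-output idea (console plus OpenTelemetry appender) is what the demo's Logback configuration does as well.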
J
But
you
know
what
this
application
tries
to
do
is
show
a
lot
of
different
use
cases
all
at
once,
but
so
there's
a
similar
thing
going
on
here.
We
configure
log
back
to
send
to
the
open,
telemetry
appender
briefly
Let's
Run,
The,
Collector,
so
I'm
going
to
run
the
collector
and
then
I'm
going
to
run
this
application
in
a
separate
shell
and
this
application.
J
You
know
it
prints
some
log
messages
to
the
console,
and
you
know,
as
I
mentioned
it
exports
it
it's
configured
with
the
open,
Telemetry
penders
to
bridge
them
over
into
the
SDK
those
get
exported
to
The
Collector
and
over
in
The
Collector.
We
see
you
know
all
these
logs
coming
in.
J
You
know
it's
it's
a
lot
to
kind
of
parse,
but
you
see
things
just
to
call
it
a
couple
of
things.
You
see
things
like
you
know.
If
there's
if
the
logs
are
structured,
there's
attributes
that
are
extracted
and
included
on
those
log
records.
If
there's,
if
the
logs
are,
you
know
recorded
in
the
context
of
a
of
a
span,
then
the
the
trace
context
is
propagated
onto
the
log.
J
You
know,
another
thing
to
call
out
is
the
name
of
the
logger
is
included
as
the
instrumentation
scope
name
and
there's
there's
other
things
to
call
out,
but
we're
coming
up
to
our
time
box
and
so
I'm
gonna
end
there
really
quickly.
So
we
can
make
some
concluding
remarks
and
let
Trask
have
the
floor.
Work.
B
Yeah, thank you, Jack. We are planning to mark this API, the Bridge API and SDK specification, as stable, and we're planning to work on the prototypes and turn them into production implementations. Steps after this could be adding the end-user-facing API, for languages like C++, maybe (this is not what we're doing right now), and adding an events API, maybe on top of the log record; again, we're not doing this right now.
B
What
we
want
to
do
is
to
Mark
the
work
that
we
did
so
far
as
stable
and
we're
calling
for
comments
here.
If
you,
if
you
want
to
share
any
thoughts
about
how
this
possibly
could
be
changed
now
is
the
time
to
stick
and
if
we
don't
see
any
any
new
suggestions
or
objections,
that's
what
we'll
do
right,
we'll
mark
the
spec
as
stable
there.
The
links
are
there.
Please
take
a
look
at
the
the
slides
and
correct
issues.
Comments
you're,
welcome
to
come
to
log
Sig
as
well
to
discuss
this
matters
so.
G
Okay, can I make just one comment? The comment that I have is: I know that before, there was a logging API, but it was actually more or less an API to export logs, not to record log records, correct? I mean, there was, at one point. What I'm trying to say is...
G
I want us to make sure we can add a proper logging API later. I'm not suggesting we do that again, but I'm saying: just make sure we are designing this in a way that, if in two years we revisit this decision and we want to add an API and SDK that are very similar to metrics and traces, we can do that. That's the only thought that I have; I want to make sure we've carefully thought about that, and that it is there.
C
Thank you so much. I suggest we take the rest of the discussion offline; sorry for that, there's only one hour for these meetings. Thank you so much again, lovely work. Trask?
L
Thanks for your patience, no worries, and thank you all for putting up with semantic convention stability. So this week is actually the last: it ends the six-week HTTP semantic convention stability working group sprint.
L
So
our
goal
is
to
get
all
PR's
into
review,
not
to
get
them
merged
by
the
end
of
this
week,
but
to
get
all
our
recommendations
in
except
for
setting
aside
ECS
on
that
decision
will
play
out
separately
so
yeah
so
LED
Miller.
Do
you
want
to
cover
this
one.
M
Yeah. So, as you probably know, we have the HTTP semantic conventions, but we are also working with the general instrumentation working group on the general stability definition; Josh is working on defining what stability means.
M
So
what
currently
spark
says
that
we
rely
on
the
schema
transformation
process,
but
we
had
an
interesting
discussion
regarding
ECS
so
like
assuming
ship
stable
now,
and
then
we
switch
to
ACS
according
to
the
current
definition
of
stability.
This
won't
be
breaking,
even
though
we
will
change
basically
everything,
because
it
would
be
attribute
your
names
and
there
was
a
consensus
in
the
instrumentation
work
group.
This
is
not
the
change
we
would
do
after
stability.
So
what
time
we
want
from
this
audience
is
are.
M
Are
we
ready
to
say
that
schema
transformation,
given
it's
used,
maybe
not
by
everyone
in
up
in
Telemetry
world,
to
say
the
least?
But
are
we
saying
that
we
are
ready
to
rely
on
this
after
stability
and
if
not,
then
maybe
we
can
keep
that
part
experimental
for
now
stabilizing
the
current
set
of
conventions
was
assumption
that
attributory
names
are
breaking
so
I
wanted
to
hear
opinions
about
it
and
try
to
come
up
with
some
suggestions.
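The schema transformation being debated is, at its core, a per-version attribute rename map applied by the consumer. A minimal Python sketch follows; the example rename (`http.method` to `http.request.method`) is illustrative of the kind of change under discussion, not a committed decision, and the real mechanism is defined by OpenTelemetry telemetry schema files rather than an in-process dict.

```python
# Per-version attribute rename maps, of the kind a telemetry schema
# file would carry for its "versions" section (illustrative content).
RENAMES = {
    "1.21.0": {"http.method": "http.request.method"},
}

def upgrade_attributes(attributes: dict, to_version: str) -> dict:
    """Apply schema rename maps so consumers see a single set of keys."""
    renamed = dict(attributes)
    for old, new in RENAMES.get(to_version, {}).items():
        if old in renamed:
            renamed[new] = renamed.pop(old)
    return renamed

print(upgrade_attributes({"http.method": "GET"}, "1.21.0"))
# {'http.request.method': 'GET'}
```

The question raised in the meeting is whether enough of the ecosystem actually runs such transformations for a mass rename to count as non-breaking in practice.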
L
Great, yes, please; it helps a lot to get the broader community feedback on these issues. There are a lot of nuances around stability. I'll jump into this one soon, and I think, without Josh here, maybe we will skip over this one and go to here, because Christian and Tigran had some good comments here.
L
So
what
we're
trying
to
what
we've
struggled
with
in
the
stability
working
group
is
what
does
Telemetry
stability
mean
on
the
consumer
side
and
there's
a
couple
reasons
for
that?
One
is
all
the
things
that
can
happen
schema
in
the
pipeline
that
people
can
do
and
we
don't
necessarily
have
a
definition
of
a
clear
definition
of
what
people
can
or
can't
do
or
any
enforcement
and
the
other
is
what
does
stability
mean
from
version
to
version
are
currently
in
our
the
Josh's
PR,
for
what
does
semicon
actually
enforce?
L
It
applies
to
the
consumer
side.
It
says
these
are
the
keys
that
you
pass
into
these
API
calls,
and
that
is
what
is
enforced
from
schema
version
to
schema
version
which
there's
a
natural
output.
I.
Guess
if
we
say
that
there's
just
nothing
in
the
pipeline
so
definitely
open
to
I,
mean
I
I'm
happy
to
loosen
the
language
here,
because
I
think
that
was
part
of
the
concern.
L
And, let's see, this here was a follow-up from last week. I think there were a couple of folks last week, Tigran included, who were concerned about the HTTP semconv depending on net attributes that are not marked stable. So what this does is split those out and add a warning at the top of the attributes that are used by the HTTP semantic conventions.
L
So
would
really
appreciate
some
of
reviews,
comments
block
it,
whatever
just
need
feedback
on
this,
if
if
this
is
an
acceptable
route
to
go
forward
or
if
we
need
to
do
something
stronger,
we're
really
trying
not
to
have
to
mark
net
attributes
as
stable
as
part
of
HTTP,
we
know
that
we
we
do
need
stable
net
attributes.
We
also
need
stable
exception
attributes.
We
need
a
lot
of
things,
we're
just
trying
to
those
limit
the
scope
of
this
initial,
very
initial
stability
effort.
L
And that is, I think, it. I'll just open the floor: any questions about any of the semantic convention stability issues?
L
All
right,
then,
please
comment
as
I
said:
block
approve,
I,
don't
care,
we
just
need
feedback
from
the
broader
Community
so
that
you
know
we're
not
just
in
our
own
Echo
chamber,.
C
Sounds good; thanks so much for that effort. We only have three minutes left, and I don't think that's enough time for any discussion, so, since we were discussing a lot of things in this call, let's just take a break here. Hope that makes sense.