From YouTube: 2022-12-07 meeting
Description
cncf-opentelemetry meeting-2's Personal Meeting Room
H
Yeah, so this proposal that I'm putting out here is a straw man, but I want to just validate it, establish that this is not the way we want to do this, or identify concretely what the alternative is.
H
So, you know, I think, as I recall, the SIG decided that structured log data should be placed in attributes as opposed to body, and we basically codified that in a sort of weak way by saying that the body should be a string, and so therefore there's really not anywhere else to put structured log data except attributes. But since then we've made many, many attempts to sort of sell that to the community.
H
I know some of those are still ongoing, but I've seen, I don't know, it seems like there's not been much progress on that. And so I was just taking a step back and thinking about whether we can technically still make any kind of change here that would give us a place to put this structured log data besides attributes and besides body.
H
I think, as Tegan pointed out on this issue, we did leave the door open for the body to contain structured data, but we're effectively saying that it should not, at least for first-party applications. So if we have any first-party applications, let's say the Collector, that are going to ingest legacy log formats in a sort of official way where we want to represent the structured log data, then we're...
H
We're sort of violating that requirement, or that suggestion, if we're just going to put it in the body. Frankly, I feel like if we can't put it in attributes, I would like to just put it in body. But given that we've declared the data model stable, I mean, is this a problem? Can we really walk that back, or can we make a very broad exception that just says...
H
...first-party applications should use a string for the body, except when it's the Collector and it's doing structured logs? I don't know, I'm just kind of curious about other people's thoughts here.
E
I can go first. I apologize, there's noise in the background; there are no meteors in this office at all. So from the Google perspective, we started looking at this a little bit more when we saw an issue where the default behavior of the Collector receivers, just sending straight to Google Cloud Logging without any other considerations, leads to a problem: the way that it translates to the Google...
E
...Cloud Logging structure doesn't map quite as well as what we'd want, because we sort of have the payload, which is either a text payload with a string message or a JSON body with a bunch of data that was parsed out of the log, and then a labels section, which is generally users deliberately specifying custom labels for their logs. And the logs that are coming out of the Collector by default...
E
...put both the special custom user attributes that users might want to add, which are specific metadata about the log, in the same place as data that was just parsed out of the log. I'm wondering why those are going in the same place when the parsed data could just as easily go in the body, since it is, sort of, technically the log. It's just that it's been parsed into a sort of JSON kind of structure.
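The two placements being contrasted here can be sketched as plain dictionaries shaped loosely like an OTel log record; the log line and the `env` label are hypothetical examples, not real Collector output:

```python
import json

# A hypothetical structured log line ingested by a Collector receiver.
raw_line = '{"method": "GET", "status": 200, "path": "/health"}'
parsed = json.loads(raw_line)

# Placement A: parsed fields pushed into attributes; the body stays a string.
# Machine-parsed data and deliberate user labels end up in the same bucket.
record_a = {
    "body": raw_line,
    "attributes": {**parsed, "env": "prod"},
}

# Placement B: the parsed fields stay together as a structured body,
# since they are the log itself; attributes hold only deliberate metadata.
record_b = {
    "body": parsed,
    "attributes": {"env": "prod"},
}
```

In placement A, a backend with a payload/labels split (as described for Google Cloud Logging) cannot tell the parsed payload apart from custom labels; in placement B, the payload maps naturally to a JSON body and the labels to labels.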
G
I'm not aware of a general recommendation like that for the Collector. We have one for the API, or for first-party applications, so it doesn't really apply to the Collector. I may be wrong, but...
H
The data model's sort-of recommendation, perhaps.
G
I guess, but one could say that it is that way because that's how it's implemented, not because there is some sort of policy about doing it that way, right? And if you believe that a different approach, with receivers behaving differently and putting the data in a different place, would result in a better structured log record, then I think it's open for discussion. I think that discussion needs to happen specifically in the context of a particular receiver, right? The particular data source, the filelog receiver, for example: you think that it needs to behave differently.
G
That means a discussion specifically for filelog, or for the Windows event log receiver, right? They may make their own choices. The people who implement the receivers in the Collector typically make their own choices. I don't think we have strict policies around what choices they should be making. Although, yes, you're right, there is some wording around that in the data model, but it doesn't really prohibit or prescribe a particular behavior. Maybe it hints at that, I'm guessing.
E
Well, it's supported, yeah. A processor could decide to parse things into the body; we could have a processor that parses things into the body, and that's allowed. It's just that other bit in the spec that says the body should be a string for first-party applications. And maybe I should clarify: does "first-party application" also include logs coming from receivers in the Collector, like from OTel contrib?
G
The issue that you see, then, if I understand correctly, is that we, I guess, strongly recommend not putting structured data in the body, and you see use cases where it is necessary to do so. And to be able to do so, you're suggesting to introduce a new field, essentially, which doesn't have this restriction and instead actually recommends putting structured data into it. And the reason is because the body is already in a stable part of the spec, and we recommend against doing that there. I think it's not really necessary to do so.
G
That's how I see it. That's the definition, the difference of the "should" versus "must". And you listed some use cases where you would like to record structured data, and to me that's good enough. You list the use case, you explain why, and you're good to go: put it in the body. I don't see why that's not good enough.
G
We already list one exception there in the spec, I think; we say in this case you can ignore this rule. And we can list ten, twenty, as many exceptions to the rule as we want. And we don't even have to list them: the spec says it's open for interpretation for anyone who populates the body field. You can ignore it if you have good reasons. That's my understanding of the definition of the semantics of what "should" really means. I may be wrong; I'd like to hear from others.
H
In my mind this effectively means that we should recommend to users that they put structured logs in the body and disregard that clause in the data model, because, as Braden was saying, in order for their exporter to work correctly, structured logs need to be in the body. And we don't want that to be different for every...
H
...you know, where if you're using a Google exporter you do it this way, whatever, because then you can't export to multiple places. So we basically just have to say you really should put structured logs in the body, and I'm fine with that. It just feels kind of contradictory to me that we're also saying you shouldn't.
G
I think that was the intent, right? We didn't mean to prohibit in any way putting structured logs there. So maybe we just need a clarification there: if your data source really gives you a structured log, it's exactly the right thing to do to put it in the body, right? There is no need to try to flatten it, to convert it into some sort of string and put that in the body. That was not the intent. The idea was that you are...
G
...you have a greenfield development, you're thinking about producing some new logs, and you don't know whether you want the body to be a string and put some additional information in the attributes, or you want to put both the string, the human-readable description, and the attributes in the body, because the data model allows both. This is the reason we put the recommendation there. As a developer, as an end user who uses OpenTelemetry and wants to produce OpenTelemetry log records, what do you do? What do we recommend?
G
We recommend that if you have a human-readable string, you put it in the body and put the additional structured information in the attributes field, right? But this doesn't apply to the cases when you already know you have structured data in a particular form which fits perfectly fine into the body itself. Great, just do that, right? There's no reason not to. So really, that was the intent of that particular phrase in the spec. I think we just maybe need to make it clearer, right? Yeah.
G
Yeah, so that hopefully should make it clearer also for the Collector receivers that they are free to do that. And in particular, Braden, if you think that the behavior of a particular receiver is incorrect, that it should be changed, then sure. I think the best thing to do in that case is to open an issue in the Collector repository and discuss it there, right? Maybe we need to change it there, and in the spec let's definitely clarify. Mike, you have your hand up.
J
Yeah, quick question. So in .NET we have structured logging. Today you basically get two pieces of data: you get a message template, which is a string that sort of has holes for the key-value pairs, and then you have those values. So today we put the message template in the body, and we put the data, the tags, in attributes.
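That split, a message template in the body with the captured values in attributes, can be sketched like this; the template and values are made up for illustration:

```python
# What a structured-logging call such as
#   logger.LogInformation("User {UserId} bought {Count} items", 42, 3)
# captures: a template with named holes, plus the values for those holes.
template = "User {UserId} bought {Count} items"
values = {"UserId": 42, "Count": 3}

# The mapping described above: template in the body, values in attributes.
record = {"body": template, "attributes": dict(values)}

# A backend can still render the human-readable message on demand
# by filling the holes from the attributes.
rendered = record["body"].format(**record["attributes"])
```

This keeps the template searchable as a stable string while the values remain queryable individually.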
G
I think what you do is a very nice approach, really. I would do it the same way, right? It fits very nicely. So what I was saying is not to swing all the way in the opposite direction, make a 180-degree turn, and say no, put everything in the body. All I'm saying is that there are valid cases when you would want to put the data in the body only, and that's totally fine, right? We recognize that. So what you described, I think, sounds like a perfect fit.
C
I had a question that was, I guess, kind of tangential to this, about the vendor-specific sections of the log data model. That's part of what brought up this question. From our Google end, we define in that data model, and I can link to it, that attributes will be going to labels, and that's part of tracing back the confusion.
G
Yeah, I think you should control that as a vendor. You should control your mapping to OpenTelemetry. I think it was wrong for us to mark this as stable as well. It's not really our decision to make, as OpenTelemetry, what the mapping of a particular vendor's format to OpenTelemetry should be. It just happened to be in the same document, and those are really examples, right? They were not intended to be normative, binding rules that you have to follow.
C
Thanks for the clarification. I think that would be a good idea too, because it's just kind of confusing if this is stable and binding, yeah.
G
Yeah, let's do that. I don't think we ever intended that. We really wanted to make the data model stable, but I personally never thought about this; it just occurred to me that the examples are in the same document and somehow we managed to label them stable as well. But I think that's wrong. We shouldn't be doing that.
G
It's wrong also if you think about it this way: this essentially, in some cases, introduces or refers to semantic conventions. We say that the net.peer.ip attribute should be populated, and those semantic conventions are all unstable today. So there is really no way we can say that these mappings are stable when the underlying semantic conventions themselves are not yet stable.
H
Cool, thanks for talking this through. I know this was sort of an old problem that seemed settled, but it's been kind of bubbling up multiple times because of this confusion, so I think just a little clarification will help, and I'll open a PR for that.
K
Hey folks, actually, if you don't mind, I'm going to share my screen just so that we can look at the issue that I have linked.
K
So I opened this issue a few weeks ago, and we've made some progress on it in the meantime. The issue was opened basically to start the discussion of what it might take for us to mark the log API and SDK stable, the primary motivation being that there are a number of components that have been developed against the API and SDK spec, and we'd like to move towards getting those components stable as well.
K
As part of opening this issue, I've edited it in the last day. I originally just had an outline of things off the top of my head that would be required for us to move forward, and I've since modified it to be kind of a checklist, because we've actually completed some work, and also there are a couple of open PRs that still require some eyes, some review. But before diving into any of those details, I just wanted to take a step back now that we've made some progress.
K
I wanted some guidance from this group in terms of: can we feasibly continue to move forward towards this goal? Are there other items that I haven't captured here? Do we need a more sophisticated process like we've done with other signals, like an actual project board or whatever, to move forward in this effort? Anyway, I just wanted to open it up initially to the thoughts of folks. How can we go forward here?
G
There's one thing that I would do additionally: I would want us to present what we have so far, the logging API, to people outside this SIG before we declare it stable. Present it and ask for feedback, and see if people agree that what we have is reasonable. Present it to the spec SIG, maybe, and I guess to broader groups. I want to make sure that there is awareness of what we have, so that people know what we're doing here.
G
So there are no surprises when we declare it stable and then, after that, people start looking at it and objecting to what we have. Let's make sure we have that as a line item, and we should probably do that more than one time, to a few different audiences, to make sure that the community at large, and also others who should have a say in this, are aware and know what is happening here.
G
I think the good time to do that would be when we as a group feel comfortable with what we have, when we're okay with the API ourselves, we no longer see the need for more changes, and we all agree. Then we can go back to the larger community and say: this is what we produced so far, and we want to declare it a stable API, what do you think? So, if we're there?
K
Okay, that's fair. I'll add that as a line item to this. But then, yeah, my follow-up ask to folks would be: of the things that have not been checked off here, two of them are PRs that are open. Is getting those reviewed and merged the path people see to go forward with?
B
I think I saw some comment from you somewhere suggesting that, in case we decide to take out the events API, do we still need the log API? Are you still thinking that?
L
Alan, one quick thought on this issue: I don't think its title is accurate anymore.
K
That is... yes, I kind of had that realization as well.
K
If there aren't any other thoughts, I did want to just open up this issue really quick. I think it's a minor one; I opened it late last week. Basically, my whole thought process in this thing was that I've just been reviewing the log SDK and API, kind of identifying things, and there are a number of spots where there are some to-dos, I figure.
K
You know, prior to stabilization we probably want to address those somehow. Well, there's this one section about built-in exporters. It's kind of a unique section that the metrics and the trace spec don't really have.
K
There is some prior art here: with the trace SDK we do have an SDK exporter folder, but it does not have anything about the OTLP exporter, which is what this section is all about.
K
It seems to me that, for the most part, the OTLP exporter has its own spec, where it speaks to all the signals, so I think most of the OTLP-specific concerns are already kind of covered by that spec. The OTLP exporter specification doesn't need a signal-specific specification document. So I was curious to get people's thoughts on just removing this section wholesale for now, and revisiting it at a future date.
G
The need is there. We have the definition of how to do OTLP exporting, as you said, for traces and metrics as well, right? So maybe just split this into a separate file, especially because for logs we want to support exporting not just to network destinations but also to files. I think it warrants its own place, right?
G
Yeah, the file exporter. There's really one sentence there, which says that we can export to a JSON or binary protobuf, right? And I think it's important; I don't think we should just delete it. Probably just move it to a separate file where we can refine it further and where it doesn't need to block the stability of this document, if we want to stabilize it.
K
So if we want to keep this template, what do you think about moving it, as like a to-do, into the exporter spec itself, the OTLP exporter spec, instead of a subfolder of logs? Because I actually see a file exporter potentially being useful for both traces and metrics as well, albeit, I do agree with you, logs is probably the one where it matters most.
L
That was me. So, the file exporter: I think conceptually it's important. In Java we don't have an implementation of this; I'm not sure if there's an implementation of it in .NET. We have something that's close to it: we allow you to log out traces, metrics, and logs to standard out in OTLP JSON format.
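Emitting each record as an OTLP-style JSON line to standard out can be sketched as follows; the field names only loosely mimic the OTLP JSON encoding and the record contents are hypothetical, so treat this as an illustration rather than the normative serialization:

```python
import json
import sys

def to_json_lines(records):
    """Render simple log records as one OTLP-flavored JSON object per line."""
    lines = []
    for rec in records:
        lines.append(json.dumps({
            "timeUnixNano": str(rec["time_unix_nano"]),
            "severityText": rec.get("severity", "INFO"),
            "body": {"stringValue": rec["body"]},
            "attributes": [
                {"key": k, "value": {"stringValue": str(v)}}
                for k, v in rec.get("attributes", {}).items()
            ],
        }))
    return lines

lines = to_json_lines([
    {"time_unix_nano": 1, "body": "service started", "attributes": {"env": "prod"}},
])
for line in lines:
    sys.stdout.write(line + "\n")  # a file exporter would write to a file instead
```

The same line-oriented encoding works whether the destination is standard out or a rotated file, which is why the two exporters are discussed together below.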
L
That's available on all signals in Java, but we don't send those to files; we just send those to standard out. And so I think what's kind of missing in this specification are the parameters about, you know, which file you want to log your messages to, and what you want to do from a file-rotation standpoint, and so on.
L
So, you know, in turn, I like what Alan said. I think that this concept of logging in an OTLP format to a file is relevant for all the signals, not just logs, although it is most important for logs. What if we were to create a new file under this protocol directory that is in here? Maybe we call it file exporter, and...
L
The file exporter would just be, it would be experimental, and it would describe, it would kind of be a placeholder for what we're talking about, yeah.
G
It makes sense, and I think the standard-out exporter should be very similar to the file exporter, I'm guessing, at least when you're doing JSON output. It's not going to be completely different, right? So maybe that includes the standard out as well. But yeah, I think it's not a bad idea. We have a definition of what the OTLP JSON should look like if we write to files, right? Somewhere in the spec. I can't find it right now, but I remember we did that work.
L
I think it's in the serialization directory, so experimental serialization at the top.
L
Yeah, so in that world, okay: we have a general OTLP exporter description in protocol/exporter.md that describes how the OTLP exporters work for all the signals, so we can delete the OTLP exporter from the log SDK document. And then we can add a new document called protocol/file-exporter.md that is the combination of what's talked about in the log SDK and this JSON serialization document, and then we can delete the file exporter section from the log SDK as well.
L
The OTLP metric exporter has some special behavior not shared by the other signals: it has the ability to configure, you know, default aggregations and...
G
Yeah, okay, okay, yeah. Let's have one shared document. I'm not sure what goes into that. If it's only about exporting to network destinations, we're going to refer to the OTLP protocol and say you just implement that, but maybe there is more we can put there. I think it's fine; let's try that.
K
Okay, I'll update the issue with some of the decisions. If you don't mind accepting that issue, or triaging it, I don't mind putting out the PR to make the changes we just discussed.
G
So the next couple of items I added there, these are discussions that we had in the past, like open issues. I don't think we have a lot of comments on those, or I see some have started coming in. So I don't know if we want to discuss them again here in this meeting. We can.
G
We
don't
have
to
up
to
you
guys,
so
the
first
one
is
about
adding
really
a
bridge
for
cloud
events
API,
and
maybe,
if
we
do
that,
maybe
deleting
the
events
API
and
those
probably
need
to
be
dependent
decisions
really.
B
We should wait until we get more clarity on how CloudEvents fits into all this. So I actually reached out to someone named Doug, who's active in the CloudEvents Slack channel. He said he will have someone join this SIG, and asked me to create an issue in their repo. I haven't done that yet. So I think if somebody from their side joins our meeting, and then we start understanding CloudEvents further, I think eventually we can settle on this. But my initial thoughts are that I think cloud...
B
...events, like, what's more important for this SIG is the event specification.
F
Yeah, and that pretty much sums up, in a different form, what I've got in my comment there, where I think we should have it so that we're not precluding the option of someone building a bridge, but we just say the CloudEvents fields map into a log event, a log record, like this. And that way people are free to either drag in the current events...
F
...API for, you know, server-side code where they can deal with the heavyweight bridge, versus needing a lightweight simple wrapper just to generate the object and pass it into the logging API.
L
So I think that would require us to resolve this issue of event domain versus event name. We have two separate fields for those, which doesn't allow a mapping from CloudEvents, as far as we understand. I think that could be a nice approach.
L
So, you know, last week when we were talking about potentially adopting the CloudEvents API as a replacement, one of the benefits that we discussed was that it allows us to defer decisions about certain fields and mechanics until we learn more about the use cases.
L
If we want to have our own event API and we're conscious of a bridge between the CloudEvents API and OpenTelemetry events, then we kind of get that same benefit, because if we can ensure that CloudEvents can map to OpenTelemetry events, then we can start out with an event API that is really narrow and small in scope and add...
L
...additional functionality as needed, potentially taking parts of the CloudEvents API that we deem important. So essentially, we can use that API, and the bridge between CloudEvents and OpenTelemetry events, as a sort of guide for what's important, and be less prone to making bad decisions.
B
So I asked in the CloudEvents Slack channel about the domain, you know, why it's part of the same type, the same field, and I think their thinking currently is that they expect the consumers to be aware, you know, or have an out-of-band decision on what domains they'll be receiving, and so they handle it accordingly.
B
Now, I think they are equivalent. It's just that they expect the consumers to be aware of what domains to expect, and therefore look for them explicitly in the type field.
L
Yeah, if a consumer has this out-of-band process to become aware of the domains, of the prefixes, then it can essentially build an allow list of expected prefixes and look for those. Whereas if you have an event domain, you don't have to have that allow list; you can search for the unique set of event domains and essentially discover, without that out-of-band process, what your domains are.
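The discovery difference can be shown concretely; the event names and domains below are invented for the example:

```python
# With a separate domain field, consumers can discover domains directly
# by collecting the unique values they observe.
with_domain = [
    {"domain": "k8s", "name": "pod_evicted"},
    {"domain": "browser", "name": "page_view"},
]
discovered = {e["domain"] for e in with_domain}

# With a single prefixed name, consumers need an out-of-band allow list
# to know which leading segment of the name is the domain.
prefixed = [{"name": "k8s.pod_evicted"}, {"name": "browser.page_view"}]
allow_list = {"k8s", "browser"}
matched = {
    e["name"].split(".", 1)[0]
    for e in prefixed
    if e["name"].split(".", 1)[0] in allow_list
}
```

Both approaches yield the same domains here, but only the first does so without prior knowledge of the prefix set.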
B
I think it also has an implication of, you know, the producers not specifying any prefix. I think one example I gave last time: they don't have any prefix; they are just publishing without any prefix. So it all depends on how the consumers are expecting the data.
B
I think one of the reasons we introduced domain was to be able to not worry about any conflicts across domains. The individual teams, you know, do not need the domain, but I think as a spec, if you were to approve the semantic conventions for the event names, then it becomes a lot of work to avoid conflicts, and that was the reason why we introduced domain.
M
I don't see how that makes any more work for us; it just moves where that lives, right? We just then need to ensure that the names, the event names, are unique. Otherwise we would still need to make sure that people aren't producing events with an empty domain, which would lead to the same issue.
L
So, Santosh, I think what you're saying is that if we go with the prefix approach, whenever we're describing event semantic conventions we have to go and do a thorough search to confirm that the prefix hasn't been used anywhere already, and thus that the event names are sufficiently unique. But that same problem exists even with event domain. So even if you come up with a semantic convention and say the event domain is going to be...
L
...you know, client, otel.client or something, you still have to go through and confirm that that hasn't been used anywhere else. So what we could do is reserve certain prefix namespaces for OpenTelemetry semantic conventions to make sure that that's an easy process.
B
Sorry, I'm not fully following. So you're suggesting that we eliminate the domain attribute and do what?
L
All we can do at that point is make recommendations about how you form prefixes, and we can use normative language like "must". We can say that, as was suggested, if you're creating custom domains outside of OpenTelemetry, you need to start them with "x." or something like that, or, as I suggested...
L
...if you're using OpenTelemetry event semantic conventions, that you always need to start with "otel.". So you can create recommendations, but at the end of the day they're just recommendations. Somebody could omit the prefix altogether and just put in an event domain at the top level, or an event name at the top level.
N
So it sounds almost like we're starting to create classes of domain-name event prefixes. Would it make any sense to change that domain to be kind of the class, like: these are OpenTelemetry events, these are, you know, CloudEvents, these are user events, rather than asking for a prefix on the name? So it's changing the semantics of the domain to be a larger group rather than a smaller group. It's not a specific convention; it's a class of conventions.
L
That's an idea that crossed my mind about how to keep event domain and map CloudEvents while retaining event domain. So, you know, let's say you're trying to come up with a mapping convention for CloudEvents to OpenTelemetry events.
L
You say that you always use the event domain "cloudevent"; that's always your event domain. And then the event names are these kind of fully qualified class-name-style names that are used in CloudEvents today, so you're guaranteed not to have any conflicts. I'm not sure whether I see any merit in that or not, but when we were talking about whether or not it's possible to map CloudEvents to OpenTelemetry events, that crossed my mind.
B
So can you give an example? Let's say, you know, just hypothetically, we want to represent a Kubernetes event named X, and it's a Kubernetes event. So, let's say, with domain we would say the domain is "kubernetes" and the event name is "X". What would that be with a single attribute?
B
And that "kubernetes" prefix is added to the specification somewhere.
L
Yeah, so there'd be a semantic convention that defines the different Kubernetes events and describes the structure of the event names. So, you know, in this case, "kubernetes." and then the event name.
L
You
know
if
you
want
to
have
more
assurance
that
you're
not
going
to
collide
with
prefixes
else
from
elsewhere.
You
could
have
the
event
name,
be
hotel.kubernetes,
dot,
X
and
then
you,
you
can
have
more
assurance
that
you
won't
have
collisions.
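The Kubernetes example in both shapes; the attribute keys and the event name here are illustrative, not actual semantic-convention entries:

```python
# Two-field shape: separate event domain and event name attributes.
two_field = {"event.domain": "kubernetes", "event.name": "X"}

# Single-field shape: the domain folded into a dotted prefix, optionally
# under a reserved "otel." namespace to reduce the chance of collisions.
single_field = {"event.name": "otel.kubernetes.X"}

# The two shapes are mechanically interconvertible: strip the reserved
# namespace, then split the first dotted segment off as the domain.
unprefixed = single_field["event.name"].removeprefix("otel.")
domain, name = unprefixed.split(".", 1)
```

The conversion only stays unambiguous if the prefix set is registered or namespaced, which is the crux of the discussion that follows.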
F
Yeah, I think we've got prefixes defined elsewhere, like AWS and Azure, right? I don't remember, it's for spans somewhere: we say "aws." is owned for all AWS-specific stuff and "azure." is owned for all Microsoft-specific stuff, and I think the k8s one is also defined in that list somewhere as well. I don't recall where it is.
B
Okay, yeah, I mean, as long as the prefixes are also registered in the specification, I think it should be fine.
M
But I think that would only cover the things that we specify, right? So if Kubernetes defines their own events and event structure and they want to have an event name that is, you know, io.k8s.eventname, great, they can do that and they define that. Everything that we define would start with "otel." or "io.otel", or, you know, however we decide to ensure uniqueness.
F
So having otel.browser.name is a bit excessive.
G
I'm
just
possibly
lead
to
another
standard
from
the
open
site
which
security
schema
framework
which
defines
schemas
for
Events.
Maybe
there
you
have
a
slight
position
for
the
use,
a
combination
of
they
have
three
fields
or
two
Fields,
depending
on
how
you
turn
which
to
find
them
the
schema,
and
then
they
also
compose
the
fields
into
a
single
which
they
call
fun
highlighting
so
they
kind
of
people
they
have
both
the
separate
fields
and
composite
view.
So,
let's
start.
F
I think it's actually more that they have different domains; they're trying to cater for all of them, exactly like we are.