From YouTube: 2021-11-03 meeting
A
Okay, let's start. Then I think you have the first couple of items.
C
Yeah, so a couple of spec-related PRs that are, I think, just about ready to merge. I just wanted to call them out in case anyone has last-minute feedback, and also I just wanted to ask in general: I'm not clear on how we get specification PRs merged in the end. Is this something you can do, Tigran?
A
I can merge. There's usually a minimum number of approvals from the code owners, from the spec approvers, necessary; I think that's two or three, I don't remember. But we also require a minimum cooldown period of 24 hours or one business day, I think. So let's wait a bit, and I think it would be very useful to have a couple more eyes on this, because this was kind of a debated topic.
A
So I'd like to see more approvals, but it looks promising. At least we have, I think, three approvals and no comments, so I guess people are now willing to accept this as a solution. We have two other spec approvers in this call, so please, Christy and David, please have a look at the PR.
C
Cool, and yeah, that basically just goes for both of those first two. So yeah, thanks, appreciate you guys now looking at those.
C
Right, but in general we would seem to have some consensus on calling this concept "media", you know, the medium through which the log was transmitted.
A
No, good, okay. So the next one is also around log data model fields: it's the trace flags. It wasn't clear to me what the purpose of the flags is, so we discussed that in the spec meeting yesterday, and I think the explanation I heard makes sense to me. I posted what I understood as a comment on that issue.
A
I
would
like
maybe
someone
else,
some
other
people
to
also
read
the
explanation,
to
see
if
it
does
make
sense
and
if
it
does,
we
just
close
the
issue.
We
keep
the
the
field
there.
A
The sampled flag, that's the only flag that is defined today for trace flags, a single bit essentially, and the idea...
A
...then the log record that can be associated with that trace and that span may still be sent to the back end. Let's say it's a critical error, a high-severity error: you still want that to be sent. But when it arrives in the back end, this flag indicates that the trace id and span id fields, although they are recorded, are not going to be found anywhere in the back end.
A
You can create a connection in the UI, essentially, where you click on the log record and it shows the trace id that was associated with that log record. Essentially, it's a hint to the UI on how to treat the trace id and span id: are they expected to be found in the back end or not? So this...
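A minimal sketch of the interpretation described above, assuming the W3C trace flags layout where "sampled" is bit 0 (the only flag defined today); the helper and its return strings are hypothetical, not SDK API:

```python
# Bit 0 of the W3C trace flags: "sampled".
SAMPLED_FLAG = 0x01

def trace_context_hint(flags: int, trace_id: bytes) -> str:
    """Hint for a UI on how to treat the trace_id/span_id fields."""
    if not any(trace_id):
        return "no trace context recorded"
    if flags & SAMPLED_FLAG:
        return "trace sampled: expect to find it in the backend"
    # The log record itself may still have been exported (e.g. a
    # high-severity error), but the surrounding trace was dropped,
    # so the recorded trace_id/span_id will not resolve to a trace.
    return "trace not sampled: ids recorded but won't be found"
```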
D
So thanks, sometimes it helps just having somebody actually explain it, and yeah, I know the words are pretty much exactly the same, but sometimes it helps to hear it live. So this means "this is sampled" as per, you know, the state of the surrounding trace, if there is one, but it doesn't mean that the log itself is sampled, right? So that's the thing, correct?
E
Yeah, so I wrote up some semantic conventions for using log records as events, so for each of the fields I have described what they mean in the context of an event.
A
Can you maybe clarify what your understanding of the word "events" is? The reason I'm asking is because the OpenTelemetry specification explicitly says that log records and events are the same thing; they are just different names for the same concept. So if you see a difference, maybe it's worth calling out what the differences are that you see, and whether that's essentially a special kind of an event or a special kind of a log record.
E
Yeah,
I
think
the
most
important
difference
is
the
the
name
field
right.
I
think
the
all
the
events
are
named
events
so
that
multiple
events
with
the
same
name
have
the
same
structure
or
have
the
same
way
to
interpret
have
the
same
set
of
attributes.
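A minimal sketch of that point: events sharing a name share an attribute schema. The event name and attribute keys below are hypothetical, not an agreed convention:

```python
# Two events with the same name carry the same set of attribute keys,
# so consumers can interpret any "ui.button.click" the same way.
click_1 = {
    "name": "ui.button.click",
    "attributes": {"ui.element.id": "checkout", "ui.screen": "cart"},
}
click_2 = {
    "name": "ui.button.click",  # same name => same attribute schema
    "attributes": {"ui.element.id": "search", "ui.screen": "home"},
}
```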
A
Initially, we had something called "component", which was supposed to record the type of the resource, so it's kind of very similar to what you're describing, the type of the event recorded in the name. That was the initial design in the specification. We moved away from that because we found that there may actually be multiple types present at the same time, with the type of the resource being, let's say, a process: a process is a resource, a service is a resource, or maybe a pod is a resource.
A
You
may
want
to
record
this
all
at
the
same
time
and
they
they
they
are
all
come.
They
all
come
with
a
set
of
their
own
attributes
right,
so
there
is
no
single
type
to
refer
to.
Similarly,
I
think
we
may
have
this
exact
same
problem.
We
try
to
say
that
the
name
defines
what
attributes
or
what
sort
of
body
you
expect
to
see
in
the
log
record.
I
think
it's
we're
going
to
hit
the
exact
same
problem.
A
What
kind
of
information
in
this
single
event
is
recorded,
which
you
can
extract
and
use
like,
like
you,
can
have
several
attributes
which
describe
a
file
operation
and
several
others
which
describe
an
http
outgoing
request
right
in
the
same
event,
for
whatever
reason
right?
Maybe
you
don't
want
that
to
be
in
a
single
event,
but
maybe
you
do
maybe
there
are
use
cases
for
that
having
a
single
field
play
that
role
of
an
arbiter,
if
you
will
of
what
is
recorded
in
the
event,
maybe
to
restrict
it.
So
we
need
to
be
careful
with
that.
E
You
don't
make
this
mandatory.
Is
that
all.
E
No, no. So in terms of use cases, actually, even eBPF has events. There are use cases where people will use this.
E
Well,
well,
I'm
okay
with
this,
but
think
about
you
know
what
is
commonly
you
know
used
for
events
right,
you
know
named
a
type
is
common
in
events.
You
know
when
other
people
look
at
it,
you
know
going
forward
and
there
was
an
example
john
had
where,
when
you
record
an
event,
you
record
the
the
api
takes,
you
know
the
name
or
the
type,
and
then
it's
contents
right.
So
it's
common
to
have
a
name
type
for
an
event
when.
A
I'm not aware of common event APIs, to be honest. I'm aware of logging APIs; logs are very common. But if there are some, then it would be very useful to have a look at those and understand what is shared across all of those APIs, right, how we can generalize that and reflect it in the log data model that we have.
E
So actually, this is mandatory only from a semantics perspective. When we build an API for events, the API would take the name, but in the data model this need not be marked mandatory. I didn't know how to represent this in the text.
E
Well, it's tricky, right. It looks like this is mainly coming from logs as the primary consideration; even severity number and severity text, you know, they could have been attributes too.
D
Since eBPF was brought up: I think there's also some effort to get Flowmill kind of into the OpenTelemetry umbrella, right. Do we know how they are looking at that? Is there sort of additional input?
A
Exactly, it is right now. It was literally, I think, a couple of weeks ago, if I'm not wrong, that the donation was accepted, and there is a working group, which I'm not part of, so I don't know what exactly is happening there. But there is an eBPF working group that is now responsible for moving forward with that, and we have a Splunker, I believe, Jonathan, who is actively engaged there.
A
If
you
want,
I
can
connect
you
with
with
him
and
then
you
can
ask
questions
if
you
have
anywhere
or
maybe
maybe
attend
the
day.
I
think
they
have
a
significant
regular
signature
as
well.
D
Santosh,
are
you
aware
that
this
is
going
on?
It
seems
like
this
may
be
overlap
with
you
know.
Some
of
the
things
that
you
need
to
get
done
might
make
sense
to
kind
of
sync
with
those
folks
as
well,
and
then
maybe
bring
that
back
here
or
drag
somebody
from
from
here
over
there,
because
they
need
to
kind
of.
E
Figure
out
you
mean
with
the
collaborate
with
the
ebpf
members.
E
Yeah, yeah. I posted a message in that channel, but nobody said anything. But I plan to join their meeting tomorrow and bring this up.
A
Oh
yes,
please
do
I,
I
kind
of
assumed
you
are
part
of
the
working
group,
so
so
here
not
definitely
do
do
join
the
they
have.
I
think
they
also
have
an
implementation
of
the
ebpf
collector,
which
uses
log
records
open,
telemetry
law
breakers
to
represent
ebpf
events
yeah,
so
they
they
may
have
kind
of
went
through
this
exercise
already
themselves.
So
it
may
be
very
useful
for
you,
for
you
guys
to
to
sink
in.
E
Yeah
they
actually
have
represented
their
records
a
bit
differently.
They
have
the
the
payload
in
the
body
right.
So
let
me
show
you
quickly
so
in
this
video
hi.
A
That is a regular structured key-value list that you can put in the body. It's JSON-equivalent, but it is in structured form. As far as I can see, it's not protobuf; it is encoded in protobuf in the serialized form, but in memory it's structured data, yeah. So I would highly recommend Santosh speak with the current eBPF guys specifically on this topic.
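A minimal sketch of what is being described: the log record body is an AnyValue, so a structured key-value list (JSON-equivalent, held in structured form in memory and only encoded as protobuf on the wire) can carry an event payload. Plain dicts stand in for the OTLP types here; the event name and payload fields are hypothetical:

```python
# An eBPF-style event carried in the body as a structured kvlist,
# rather than as a flat string. Field names are illustrative only.
ebpf_event = {
    "time_unix_nano": 1635952230000000000,
    "body": {
        "event": "tcp.connect",
        "saddr": "10.0.0.12",
        "daddr": "10.0.0.34",
        "dport": 443,
        "bytes_sent": 1832,
    },
}
```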
A
Maybe
there
are
other
use
cases
for
the
events
as
well
likely
there
are,
but
it
can
be
useful
to
understand
why
they
did
it
this
way
or
and
if
you
want
to
suggest
the
changes
they
are
the
best
people
to
maybe
talk
to
with
regards
to
how
ebpf
is
represented
in
open,
telemetry
law,
breaker
and-
and
I
I
guess
we
were
also
happy
to
help
as
a
log
seek,
but
we
don't
know
the
details
of
the
bps.
I
guess
most
of
us
hearing
this
seek
to.
C
So the body does exist as a structure. It can be structured, but it is recommended not to be, basically. The reason it needs to be structurable, though, is to support things that we don't have control over, that need that in order to unambiguously map to our data model.
A
So,
essentially
in
case
of
ebpf,
the
question
is:
do
we
treat
it
as
first
party
application,
which
we
are
just
developing
or
as
something
that
exists,
possibly
outside,
of
open,
telemetry's
control?
In
this
particular
case,
there
was
a
history
of
ebf
collector
and
they
chose
to
use
the
body.
They
mapped
the
data
that
the
ebpf
collector
the
existing
one
produced
to
the
body.
D
Well,
maybe
some
of
us
can
join
the
call
tomorrow
there
and
just
get
a
sense
of
where
these
guys
are
at
you
know
and
and
sort
of
show
up
also-
and
you
know,
represent
the
fact
that,
like
if
it
is
tomorrow,
indeed,.
D
So yeah, I probably have time tomorrow, if that fits. I might just dial in there together with you, Santosh, and just listen, obviously, to see where their heads are at. Tigran, my hunch is that you're probably more than busy on a lot of different working groups, so yeah.
E
Okay,
could
you
also
comment
on
this
attributes,
so
basically,
your
recommendation
is
that
we
should
use
attributes
for
the
payload.
A
Yeah, that's a recommendation, right. Maybe there are good reasons not to follow it.
E
I
think
the
only
one
reason
I
could
think
of
is:
if
you
know
people
have
complex
objects.
You
know
deeply
nested
objects.
You
know
today
the
attribute
values
don't
support
anything
other
than
the
primitive
types
right,
but
the
attribute
names
could
have
that
dotted
notation
namespaced
notation,
so
that
could
be
an
alternative
to
represent
nested
structure.
But
I
I
don't
know.
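A minimal sketch of that alternative: representing a deeply nested object with dotted (namespaced) attribute names, since attribute values are limited to primitive types and arrays today:

```python
def flatten(obj: dict, prefix: str = "") -> dict:
    """Flatten nested dicts into dotted-key, primitive-valued attributes."""
    out = {}
    for key, value in obj.items():
        dotted = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            out.update(flatten(value, dotted))
        else:
            out[dotted] = value
    return out

# flatten({"http": {"request": {"method": "GET", "size": 42}}})
# -> {"http.request.method": "GET", "http.request.size": 42}
```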
A
We
limited
the
api,
the
tracing
api
and
the
metrics
api
to
only
allow
primitive
attributes
and
arrays,
but
the
reasoning
for
limiting
was
oh,
let's
not
introduce
complexity
because
we
don't
need
it.
So
if
there
is
a
new
evidence
that
we
do
actually
need
it,
then
then
maybe
we
do
need
it
right.
So
we
shouldn't
be
limiting
that
and
maybe
that's
maybe
we
don't
need
it
still,
don't
need
it,
maybe
in
tracing
api
or
metrics
api,
but
we
need
it
in
the
logging
sdks
right,
because
the
logging
sdks
need
to
handle
a
variety
of
use.
A
I don't know if it will be accepted, absolutely not certain. It probably should also be discussed in the specification SIG, because maybe, if we do that for logs, then the question is: why is it not uniform? Why don't we do that for the rest, for traces and metrics, as well? So it is something that will take a while to come to an agreement on from everybody.
C
Would
point
out
that
I
did
open
a
pr
on
this
on
the
specification
repo
a
while
back
and
it
seems
to
be
heading
towards
consensus
that
logs
at
least
could
have
arbitrary
data
in
the
attributes
it
kind
of
timed
out,
because
there
were
some
open
questions
that
I
think
we're
only
now
resolving.
So
once
these
prs,
like
the
first
pr
that
I
brought
up
today,
once
that's
merged,
I
think
it
would
be
appropriate
to
reopen
the
pr
and
continue
the
conversation
there.
E
I have a question on the severity text. Is it an enum type? As in, there were about 20 to 30 different strings that were mentioned. Are those the only ones allowed there? Because I was thinking...
A
No, that's not the case. The severity text can be anything you want it to be; that's the purpose of the field. If it's one of the predefined ones, you just omit the severity text, and the severity number tells you what text is equivalent to that particular number. So the severity number is one from the enum, right, however many numbers we have there.
A
But
if
you
have
a
custom
severity
text,
that's
what
you're
recording
in
there
right
and
the
severity
number
you
specify
the
most
appropriate
corresponding
number
to
to
what
you're
trying
to
record
right.
If
it's
a
high
severity,
you
probably
use
one
of
the
errors
or
fatal
ones
with
a
low
severity.
So
maybe
it's
a
meaningful
or
warning.
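A minimal sketch of that mapping, assuming the data model's severity number ranges (TRACE=1-4, DEBUG=5-8, INFO=9-12, WARN=13-16, ERROR=17-20, FATAL=21-24); the custom-level examples are illustrative, not normative:

```python
PREDEFINED = {"TRACE": 1, "DEBUG": 5, "INFO": 9, "WARN": 13,
              "ERROR": 17, "FATAL": 21}

def map_severity(source_level: str) -> tuple:
    """Return (severity_text, severity_number) for a source's level."""
    name = source_level.upper()
    if name in PREDEFINED:
        # Predefined level: the number alone identifies it, so the
        # severity text can be omitted.
        return None, PREDEFINED[name]
    # Custom text (e.g. syslog "NOTICE"): keep the original string
    # and pick the most appropriate corresponding number.
    closest = {"NOTICE": 10, "CRIT": 21, "ALERT": 22, "EMERG": 23}
    return source_level, closest.get(name, 9)
```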
E
God
all
right
so
one
so
the
last
thing
on
this
topic
is
so
in
in
case
we
decide
to
introduce
an
api
for
events,
then,
on
the
same
data
model,
there
are
apis
meant
for
events,
and
no
api
is
meant
for
logs.
So
would
that
cause
any
confusion?
E
So I think the main reason for an events API is really to ensure that a name is provided and that bodies should not be used.
E
So far, there are two use cases, two places where this is needed. One is eBPF, and the second is all the client side, like mobile, browsers and IoT.
A
It
would
be,
I
think
we
need
to
see
the
use
cases
right
so
for
ebpf.
I
think
it
is
going
to
be
a
standalone,
collector
or
a
process
which
which
doesn't
necessarily
need
an
api
in
open
telemetry,
because
it's
going
to
be
implemented
in
a
single
language.
You
don't
need
an
api
for
10
different
languages.
That
open
telemetry
supports
to
implement
your
ebpf
collector,
so
you
could
argue
that
it
is
tailored
to
ebpf.
Ebpf
can
do
whatever
it
wants
to
do.
There
is
no
need
for
an
api.
It's
an
internal
ebpf
api.
A
Do
whatever
you
want
to
do
there
for
the
the
other
use
case
for
the
client-side
telemetry,
perhaps
that
that
that
that
is
a
real
use
case
that
open
planetary
needs
to
handle
well
and
have
an
api
for
that
that
I
don't
know
to
be
honest
right,
it
would
be
very
useful
to
have
this
use
cases
somewhere
described
and
then
explain.
Maybe
I
guess
if
there
is
a
there
is
a
way
to
plug
an
existing
logging
library,
for
example,
on
javascript
right.
E
It's unlikely, though. I think we could explore, but it's unlikely, just because the logging libraries don't take the event type or name.
E
Okay, so I think what I could do is come back with two things: maybe reference APIs in other situations for events, what people use as APIs for events, and also a lot of examples of events, where we could see what kind of attributes they carry, whether we need complex attributes or not, and...
C
Under there, and you know, please join that discussion if you want, sure. And then, also relating to the discussion about whether or not name, or really anything else, should be a top-level field or not: I only recently learned of this, so I thought I'd share it real quick. Some of you guys who've been around this for longer...
C
I'm
sure
already
knew
about
all
this,
but
in
the
spec
there's
a
pretty
clear
description
of
exactly
what
is
expected
of
a
field
that
should
be
a
top
level
field
versus
just
being
part
of
basically
part
of
attributes
and
there's
a
couple
of
qualifying
conditions,
so
probably
probably
good
to
keep
those
in
mind
when
we're
having
these
discussions
too,
but
basically
well,
I
I've
linked
it
there.
So
you
know,
please
take
a
look
sure.
E
Link,
it
sorry.
C
Sorry,
under
your
issue,
there
data
model
requirements
for
top-level
fields
on
the
sorry
on
the
meeting
notes.
A
Okay,
the
next
one
I
posted
it's
a
it's
a
fix.
Essentially
it's
a
mapping
bug.
Please
have
a
look
approve
for
syslog
mapping,
nothing
complicated
there
fairly
obvious,
then
next
steps.
Then
I
think
you
posted
this
right.
C
Yeah,
I
just
I
went
through
the
backlog
basically
and
just
tried
to
identify
items
that
seemed
to
be
just
outstanding
things
we're
going
to
have
to
resolve
us.
You
know
in
the
next
few
months.
These
are,
I
think,
all
fairly
unrelated
and
fairly
small
questions,
but
things
I
thought
we
maybe
should
try
to
discuss
and
just
see
if
there's
any
consensus
on
or
what
the
next
steps
are.
C
So
the
first
one
was
someone
had
raised
a
question
of
whether
or
not
there
should
be
a
maximum
size
for
the
log
body,
and
I
think
the
main
point
that
we're
making
is
that
if
we
don't
establish
one
now,
you
know
before
we
mark
this
as
stable.
Then
we
really
can't.
We
can't
go
the
other
way
with
this
later
right.
A
So, the maximum size: for traces, for example, for span attributes, I think the sizes are defined for SDKs, but not for data models. The data model does not put any limitations itself; the limitations are defined inside the SDK. So if you're using the OpenTelemetry SDK, it will enforce those limitations.
D
I have long and painful experience on this topic: whatever number you pick is gonna be too small, and it's very painful. This is like a "640 kilobytes is enough for everything" kind of situation, where you build your system and you define a max log length because you got previously bitten by putting it at, you know, one kilobyte, so you set it to 64 kilobytes, and next thing...
D
...as soon as you set a limit, then you have to go into also describing what happens if that limit is hit: you know, split over multiple events, in which case you then need to discuss whether you need to add correlation ids to piece them back together. It's a ridiculous show. The reality, though, is that at some point, whether we put it in the data model or not, somebody's gonna have to put a number there.
D
I think from the server-side implementation you don't want it to be an infinite stream, right. It's just, my experience has been that whatever number you pick, you're gonna suffer. So I don't really have a great answer there, which is probably not a very constructive kind of contribution, but yeah, it's tough.
A
Yeah,
it's
it's
a,
I
guess
good
arguments.
One
other
argument
is
that
that
limit
depends
on
your
possibly
your
server
implementation
and
that
number
probably
also
changes
over
time.
As
the
servers
become
they,
they
have
more
memory,
more
performance.
You
can
handle
more.
So
if
you
make
it
a
hard
part
kind
of
limiting
in
part
of
your
specification-
and
you
can
no
longer
change
that,
although
possibly
in
five
years
from
now,
you
could
handle
significantly
more
right.
A
So
you
likely
do
have
that
limitation
on
your
server.
You
enforce
that
when
you
receive
data,
but
maybe
that's
configurable
on
your
server.
Maybe
you
change
that
as
you
move
to
some
other
hardware
right
which
can
handle
more
so
yeah,
I
think,
and
in
open
telemetry
sdk,
it's
configurable
anyway.
It's
not
a
single
number.
You
can
change
that
in
the
sdk.
B
Yep. Perhaps what we should do, instead of setting the number, is just provide means for taking action on some number. By default this check can be disabled, but we can maybe have some capability in the collector, on the receiver or exporter, so that when there is a number configured and the size exceeds it, then maybe we just cut off the remaining part. Yeah.
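A minimal sketch of that suggestion: no limit in the data model itself, just an optional, disabled-by-default check (imagined here as a collector receiver/exporter capability; the helper is hypothetical):

```python
def maybe_truncate_body(body: str, max_bytes: int = 0) -> str:
    """Cut off the remaining part when a configured limit is exceeded."""
    if max_bytes <= 0:  # default: the check is disabled
        return body
    encoded = body.encode("utf-8")
    if len(encoded) <= max_bytes:
        return body
    # Cut at the limit, dropping any partial trailing character.
    return encoded[:max_bytes].decode("utf-8", errors="ignore")
```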
F
That would kind of follow along with what HTTP headers do, which is: don't specify a limit, but specify something that happens if you exceed a server's limit. So HTTP has no limit on how many headers you can provide, or how big the headers can be, on a single request, but it also specifies the error code 413 for "too big".
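A sketch of that analogy: define no fixed limit, but define what happens when a server's own limit is exceeded, as HTTP does with status 413. The response shape here is hypothetical, not part of any OTLP spec:

```python
SERVER_LIMIT = 4 * 1024 * 1024  # this server's own choice, not the spec's

def handle_export(payload: bytes) -> dict:
    if len(payload) > SERVER_LIMIT:
        # Tell the client the payload was too big (and how big is OK),
        # so it can react, e.g. by splitting and retrying.
        return {"status": 413, "max_bytes": SERVER_LIMIT}
    return {"status": 200}
```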
D
In
addition
to
this,
what
if
we
allow,
what,
if
we
have
in
the
data
model
itself,
a
way
to
express
that
the
event
has
a
had
a
precursor,
so
in
other
words,
if
somebody
does
actually
have
to
split
that
they
can
at
least
point
her
back
to
the
original
one,
and
and
now
I
just
pulled
it
up,
it
turns
out
that
we
don't
actually
have
an
event
id
definition
in
there.
I
believe
so.
If
we
want
to
actually.
D
Yeah,
so
that
is
actually
something
that
we
might
be
cons
that
we
might
want
to
consider
as
a
as
a
you
know,
as
at
least
as
a
suggestion,
or
some
some
way
around
that
so
so
the
implementer
at
least
knows
what
to
do.
If
they
get
that
error
right,
they
can
retry.
But
you
know,
maybe
maybe
try
and
chop
it
up
into
1k,
or
maybe
the
error
includes
the
limit
from
the
server
or
like
there's
something
along
those
lines
that
could
potentially
fit
into
the
data
model.
A
If we decide that we want it, it doesn't have to be a top-level field, and that's okay; we can defer that to later, because semantic conventions can be added. That's not a problem and doesn't prevent us from declaring the data model stable. But yeah, I guess one of the dangers of going in that direction is...
A
We
are
maybe
slowly
reconstructing
distributed
tracing
there,
which
allows
fans
to
refer
to
each
other,
the
parent
span
and
all
that
stuff,
which
kind
of
is
logical,
maybe
you're,
arriving
to
similar
solutions
given
somewhat
maybe
similar
problems
like
the
the
trace
idea
is
exactly
that
right
for
the
specs
yeah.
D
I'm
okay
with
I'm
not
necessarily
saying
it
needs
to
be
pardoned
of
the
top
level.
I'm
just
like.
Let
me
start
by
saying
like
maybe
I
should
rephrase
it
and
say
that,
like
if
I'm
an
implementer,
I
would
look
at
you
know
some
sort
of
recommendation
of
what
to
do
and
like
if
the
recommendation
is
encapsulated
in
a
semantic
convention.
D
Yeah.
Okay,
that's
cool!
All
right!
I
don't
know
we
can
argue
whether
it
has
to
be
top
level
or
not.
I'm
not,
I
don't
know,
might
might
as
well
put
it.
There
might
make
it
easier
because
then
we
might
not
have
to
further
define
it
right,
because
the
argument
would
then
be
that
how
am
I
supposed
to
refer
to
the
precursor
if
you're
not
like
in
a
top
level
data
model?
D
If
you
don't
actually
give
me
away
in
that
same
data
model,
to
describe
the
actual
identity
of
that
precursor
itself,
which
is
the
event
id
thing?
If
you
make
it
semantic
conventions,
then
at
least
you
can,
you
know,
suggest
to
people
that
they
handle
it.
You
know
using
their
own
80
mechanisms,
including
putting
that
id
into
the
into
the
attributes,
but
at
least
we
have
a
stance
on
it
right
like
a
recommendation.
That
would
probably
be
better
than
nothing.
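A hypothetical illustration of that recommendation: no such semantic convention exists, and "log.record.id" and "log.record.precursor" are made-up attribute names showing how a split record could point back to its precursor via attributes rather than top-level fields:

```python
first_part = {
    "body": "start of a very large payload...",
    "attributes": {"log.record.id": "rec-0001"},
}
second_part = {
    "body": "...rest of the payload",
    "attributes": {
        "log.record.id": "rec-0002",
        "log.record.precursor": "rec-0001",  # reassembly hint
    },
}
```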
A
Yeah,
okay,
I
don't
know
my
my
initial.
Maybe
reaction
to
this
is
this
is
going
to
be
an
uncommon
case
if
we
want
to
handle
it,
that's
fine,
let's,
let's
handle
that,
but
probably
it
belongs
in
semantic
conventions
rather
than
at
the
top
level.
I
don't
know
that's
that's
that's
my
gut
feeling
at
the
moment
at
least.
C
And
it
seems
like
you
know
what
christian
what
you
just
said
there
like
the
question
of
whether
or
not
to
have
an
event
id
in
the
data
model.
It
could
be
partially
justified
with
this
as
a
use
case
for
that,
but
then
it
also
allows
us
to
not
take
a
stance
on
what
a
max
size.
Just
it's
like
you
know
we
have
this
receipt
or
this
record
id.
If
you
need
it,
you
can
use
this
to
reassemble
law
records.
C
You
can
define
your
own
limits.
This
can
be.
This
can
be
left
to
implementations,
basically,
whether
it's
our
sdks
or
or
elsewhere.
But
in
terms
of
the
question
of
whether
it's
a
spec,
I
think
specifically,
what
I'm
hearing
is
that
the
log
body
size
is
not
directly
a
spec
concern.
I
agree
with
that.
C
Okay, I will try to summarize what I heard here on that issue, and we'll see if we can close it.
C
Do
we
want
to
talk
more
about
the
concept
of
the
record
id
there's,
there's
an
issue
open
for
it,
because
we
at
least
ought
to
try
and
decide
what
we'd
like
to
do
with
this
issue.
A
Yeah, I think the main decision we need to make is, even if it's necessary, whether it needs to be a top-level field. If it's not, then we're good right now; we can defer it to later. That's the decision we need to make, and it seems like for that sort of decision we need to understand how common it is to see these use cases. It seems like it's not very common to me.
A
The
the
splitting
thing
right,
but
the
log
record
id
has
another
use
case,
which
is
duplication,
duplicate,
detection
right.
If
you
somehow
send
the
same
low
credit
card
twice,
the
id
allows
you
to
detect
the
duplicates,
that's
a
different
use
case
and
probably
also
not
very
common.
I
don't
know
I
don't
know
so
I
it's
hard
to
tell
how
important
that
particular
use
case
is.
Do
we
require
everybody
who
generates
any
log
records
to
put
an
id
there?
A
They
have
to
generate
something
sequentially
I
mean
it's,
it's
fairly
easy
to
do.
If
you're,
if
you
have
a
logging
library,
it's
not
a
problem
to
actually
put
a
number
there,
but
if
you
don't
do
that
for
everything
they,
if
some
some
rare
source,
which
actually
does
that,
then
then
it
becomes
an
edge
case.
Like
you
know,
why
do
we
need
a
data
model,
make
it
a
semantic
convention.
D
There are different opinions on this. I mean, they will all have to have something, but some folks might want to assign it on the collector; someone might want to decide on the back end, right. And an id is one way for deduplication; the other way would be to have some sort of hash in there. Again, that opens up a lot of additional decisions, and I'm not sure how independent of the actual back end they are, to some degree.
D
If
you
have
the
collector
acting
as
an
intermediary,
especially
if
you
train
the
collectors,
then
the
collectors
have
a
back
end.
You
know,
but
I'm
wondering
whether
we
not
rather
leave
that
open,
as,
as
you
said,
as
a
some
way
of
using
your
own
semantic
conventions.
You
know
like
in
the
exporter,
basically
right.
C
It seems like, in the interest of moving forward, we might want to just push towards that semantic convention instead, or recommend it via semantic convention. What type would we use then?
A
Yeah, I don't know. My gut feeling is we should defer this. It can be added in future versions of the data model in a backwards-compatible manner: if the field is missing, there's no id, that's fine, you deal with it how you dealt with everything in the past, when there was no log record id. If there is one, then maybe you can use it if you want to. This is one of the cases where I think adding it later should be fine.
C
I
was
just
trying
to
clarify
to
understand
better.
Here.
Was
your
point
about
the
data
type
that
if
we
don't
establi
like
lock,
that
in
as
part
of
the
data
model
that
we
might
have
trouble
later
or
could
that
as
well
just.
D
They're,
not
gonna,
wanna
touch
that
the
inner
up
case
is
interesting,
but
like.
A
Time
so
maybe
some
sort
of
a
metal
recommendation.
I
think
we
should
try
to
focus
on
things
which
can
be
breaking
changes
for
the
data
model
like
the
name
field.
Maybe
trace
plugs
right
stuff
that
we
probably
maybe
want
to
remove
or
maybe
change.
Somehow
that's
the
critical
thing
right,
that's
the
more
things
that
we
want
to
make
sure
we
get
right
so
that
we
can
declare
the
data
model
stable
things
that
can
be
additions
like
the
record
id
or
was
the
other
thing
like
the
long
receipt,
timestamp
right
that
can
be
also
added.
A
We
should
think
about
what
that
addition
would
mean
in
the
future
whether
it
can
be
handled
nicely
in
a
backwards
compatible
manner
and
if
yes,
probably
defer
that
until
after
we
declare
1.0
data
model
as
stable
and
then
then
only
after
that,
because
these
are,
I
guess,
all
of
those
are
non-trivial
like
if
we
start
we
try
to
solve
all
of
this,
then
maybe
we're
not
going
to
be
able
to
to
do
what
we
wanted
to
do
with
the
data
model.
Stability.
A
Or whatever, like it's done for the rest of the things, for the traces, yeah. That's a good idea, so maybe let's do that, because we have a bunch of issues labeled for logs in the spec, but only a small portion of those is actually necessary for us to declare the data model stable. Let's do that, yeah. That's, I like that.