From YouTube: 2022-11-30 meeting
Description
cncf-opentelemetry meeting-2's Personal Meeting Room
A: ...where to send the event, without the need to deserialize the data. And they say that the context can be recorded somewhere in the envelope, depending on the protocol, or on where the event is recorded. I think this is very similar to what we wanted to do with the event domain on the scope, right? We said that we could look at the domain and make a decision about the batch of events. It's very, very similar. It seems like they had a similar need and made a similar design there. So I understand why they have this separate event data, separated essentially from the context. The context in their case is the type of the event, which is our domain plus name, the timestamp, stuff like that, right? And some of these things are in our log record, like the timestamp as a dedicated field, but some others, for now, are just regular attributes, although we give them specific names in the semantic conventions. So it was an interesting read, and it made me wonder whether... I guess we lost this ability, essentially, by moving the event domain to the log record; we can no longer have a similar capability. So I guess what they have was kind of confirming that we were maybe on the right track. I don't know, any thoughts on this? Anyone?
B: I think CloudEvents also supports batching of messages. I don't know exactly, but a batch would correspond to a scope, right? So if they have the context attributes at a batch level, then it would be equivalent.
A: If your goal is to make some sort of a routing decision, you look at, maybe, the type of the event and decide where to send it. This was very similar to what we were thinking about how the event domain could be used, and I guess it's no longer possible to do that as efficiently as they can with CloudEvents.
B: Well, I haven't seen any APIs... when you say efficiency, what did you have in mind?
A: With CloudEvents, this is literally what they say when you read the definition of what the context is. Here's what they say: these attributes (and this refers to the context attributes), while descriptive of the event, are designed such that they can be serialized independent of the event data. This allows for them to be inspected at the destination without having to deserialize the event data, right? So they are essentially independently inspectable, independently serializable.
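[Editor's note: a minimal sketch of the independence being quoted here, using the CloudEvents 1.0 JSON "structured" mode, where the context attributes sit beside the `data` payload. The field names follow the CloudEvents 1.0 JSON format; the queue names and the routing rule are invented for illustration.]

```python
import json

# A CloudEvent in JSON "structured" mode: the context attributes
# (specversion, type, source, id, time) sit next to the "data" payload.
event = json.dumps({
    "specversion": "1.0",
    "type": "com.github.pull_request.opened",
    "source": "https://github.com/cloudevents/spec/pull",
    "id": "A234-1234-1234",
    "time": "2022-11-30T17:31:00Z",
    "data": "<large opaque payload that the router never interprets>",
})

def route(raw: str) -> str:
    # A real router could read only the context attributes (or, in the
    # HTTP binary mode, only headers); json.loads keeps the sketch short.
    context_type = json.loads(raw)["type"]
    return "github-queue" if context_type.startswith("com.github.") else "default-queue"

print(route(event))  # -> github-queue
```

The routing decision only ever touches the context attribute `type`; the `data` value is never interpreted.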
C: I'm coming into this a bit late: what types of fields are in their context?
A: Yeah: the type (the type, which contains our combined domain and name), the timestamp, the ID (they have a notion of a unique ID), the source, which is our resource, which is already separate in our case, and I think that's probably it. They have a version as well. So really, the thing that we're missing here is essentially the type, or the domain, which are equivalent in this case.
C: Yeah, so that to me isn't really equivalent to our scope, because one scope contains many records, and there are several bits of information there that seem to be identifiers or descriptors about individual records, not about a collection of records.
C: Just at first hearing that, I wonder if we could do something similar with top-level fields versus nesting everything in attributes. For example, if our event domain and name were top-level fields, then you could say that you can inspect some of these top-level fields without having to deserialize the attributes in order to make routing decisions. Yeah.
A: I mean, yeah, I guess it would, but that's more difficult to do, because it requires changes to the data model, to the protocol. It can probably be done in a backwards-compatible way, especially if we consider that the domain and name are experimental in the semantic conventions and we are free to break them. But you still don't get the same efficiency, where you could, I guess, have a batch of those records with the same domain under the same scope, right? So I think with CloudEvents, they are...
A: They are capable of doing this. I didn't check specifically which transports allow that (they have bindings for multiple protocols), but the way they speak about it, they say the context is on the envelope of the transport body, and they also say they support batching. So I'm assuming that means the same context for the entire batch. Maybe, I don't know; we should read more. I only had time to read...
A: ...I guess, the front page of the specification, in a way. They have more detailed bindings and all that stuff in separate documents.
A: Yeah, totally, you're absolutely right: the scope and the context are not equivalent at all. I'm saying the individual fields, the event domain and the type... again, they are not totally equivalent, but the usage, and the idea to put them in a separate place on the wire, is very similar to why we thought we wanted to have the domain in a separate place in the scope, and not in the log record.
A: In theory. I guess I don't know how you do that in practice, because in the protobuf definition these (the attributes, I mean) are represented as repeated fields, and you have to decode... I guess we could partially decode, without decoding the values. Possibly, but it's going to be hard to gain significant efficiency, I guess, from trying to do so. Whereas with the way that they frame the context being recorded on the envelope, I believe... my understanding is, this could be just an HTTP header, right?
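[Editor's note: that matches the CloudEvents HTTP protocol binding. In its "binary" content mode, each context attribute becomes a `ce-` prefixed header (`ce-specversion`, `ce-type`, `ce-source`, `ce-id`) while the event data travels as the HTTP body. A sketch of header-only routing; the event type `k8s.pod.oom_killed` and the backend URLs are invented for illustration.]

```python
# CloudEvents HTTP binding, "binary" content mode: context attributes are
# carried as ce-* headers, so a proxy can route without reading the body.
ROUTES = {
    "k8s": "http://k8s-events.internal",  # hypothetical backend per "domain"
}

def destination(headers: dict) -> str:
    event_type = headers.get("ce-type", "")
    for domain, url in ROUTES.items():
        # Treat the leading label(s) of the reverse-DNS type as the domain.
        if event_type.startswith(domain + "."):
            return url
    return "http://default.internal"

headers = {
    "ce-specversion": "1.0",
    "ce-type": "k8s.pod.oom_killed",  # invented type for this sketch
    "ce-source": "/clusters/prod-1",
    "ce-id": "42",
    "content-type": "application/json",
}
print(destination(headers))  # -> http://k8s-events.internal
```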
C: Hopefully my connection's better now. I was saying that partial decoding hasn't been super compelling to me, because I think it's fairly tricky to do in practice.
C: You know, I do recognize that you can do it, but I question how many receivers of this will actually do that in practice. If that is a motivating factor, one alternative to, you know, putting everything in event.data would be to have a top-level field that is your event context: separating that out from the attributes and the event.data, and putting it in a top-level field that is more easily inspected.
A: And you're right, and the log record itself contains some of those fields which they put in the context, like the timestamp, for example. They put the timestamp in the context, and we put it in the log record, which is separate from the attributes, so you don't really have to decode any of the attributes at all to look at the timestamp or any other field in the log record, right? But I agree with you, like...
A: If you want to partially decode some attributes, not all of them... you have to decode at least some, until you find the event domain and event name, while skipping the event data and not decoding its value. My guess would be that virtually no one is going to bother doing such a complicated thing; it's not easy to do correctly.
A: Yeah, so anyway, we can move forward, I guess. Maybe let's think about it after the meeting and see if it means anything for us, or we just say: okay, we lost that, and that's okay, we're okay with that. So the next one I had is essentially another thing that we mentioned: combining the domain and name into a single attribute, again, similarly to CloudEvents.
A: They have this one attribute called type, and they say the value of that attribute should be prefixed by the reverse DNS of whoever is responsible for the definition of the attribute, followed by some sort of value. So they give an example of com.github.pull_request.opened; that's a value of a type of an event.
A: So if we want to have a similar definition, then the domain and name that we have today would need to be combined into one attribute. I opened an issue for this; I think it's worth discussing. So, I don't know if the fact that the domain is now in the log record enables this, essentially. I don't know if we have to do this, but it's now possible to do, and it's hard to tell if it's beneficial; it depends on what you want to do with the two fields separately.
C: Yeah, I don't have any strong feelings on this. A couple of things that come to mind: going to one field with a fully qualified, or reverse fully qualified, domain name has the benefit of better mapping with CloudEvents.
C: So that's nice. And then the other thing that's nice is you can tell that something is an event by the presence of one attribute instead of two, so there's less opportunity to have, you know, data that is confused as events but really isn't, because it only has one of the two attributes. That's probably a minor thing. Yeah, no strong feelings, though; I think I'd be happy enough if it was one field or if it was two fields.
B: When we introduced the domain, I think initially we had the domain field as optional, and then we later changed it to mandatory. So I guess, if we have one field, then we need to define the semantics if the prefix is missing: what does that mean?
A: Also, if we use dot as a separator for the domain and name values, then there's no good way to actually split the value back into the domain and name, because the domain itself also uses dot as a separator, right? So if you want to, let's say, for example, build a UI where you choose to show... let's say you want to filter by the domain, right?
A: It needs to be a different separator in that case; that is, disallow dot both as a separator within a domain name and as a separator within an event name, essentially, right? Which then also makes it different from CloudEvents, because for some reason they didn't think it would be a problem, and they just use dot as the separator everywhere. So it becomes essentially impossible to unmerge after you merge these two things; to do this unambiguously is essentially impossible.
C: So they were probably more focused on being able to avoid ambiguity and collisions in event names, and less focused on being able to define separate domains.
A: Yeah, you can still make routing decisions if you know specifically what domain you're looking for, right? So you can do prefix matching, essentially, if you know the value you're looking for. But you can't do things like, to me, the issue is a group-by query, right? Group by domain: that's no longer possible if you don't know what values you're looking for.
E: You know, I think there's also maybe a little bit larger question, in terms of how closely we want to align with CloudEvents, because I think that choice kind of follows from that. If we're like, okay, well, CloudEvents is one thing and we're something else, then it doesn't matter. But if we want to be compatible, that changes the calculus quite a bit.
B: Well, at least with respect to the domain and name, there is a one-to-one mapping, so there is no incompatibility.
B: Yeah, and I'd say that, with respect to domain and name, both specifications have both the parameters; it's just that in one case there are two separate fields, and in the other case it's one field. So as long as, you know, the CloudEvents spec clarifies how the domain and the name need to be separated...
A: I don't think it's a problem for the receivers, right? The receivers... I mean, what's the problem? It becomes a problem if you want to do a specific operation on that value; if you don't need to do that, fine. The way that they define the usage of the field, everywhere I see, they say routing decisions, policy enforcement, these other things that I mentioned, and, well, "observability", which is very vague, so I don't know what that means. Routing: you can do that.
A: It's when you want to do things like group-by, right, or say: show me all known values of the domains that I'm receiving or observing. When you start defining queries like that, that's when it becomes a problem, and they clearly didn't think that it's necessary. And I'm not sure it is necessary, to be honest.
A: But even with the very first example they have, that heuristic that you just described doesn't work. Look at this: com.github, well, that's the DNS, the domain name, and then pull_request.opened, which I'm guessing is the event name in our understanding, right? So what you just described didn't work.
A: I don't know, maybe we say fine, that's also fine, I don't know, we don't care. I mean, I don't know if we have this... like, what's the use case for that? What's the data source which produces CloudEvents, and which we want to collect and somehow put inside an OpenTelemetry event, in a way that looks like an OpenTelemetry event, right, and is processable later as an OpenTelemetry event? Do we care? I have no idea.
B: So, if we were to export these logs into, you know, our systems, the OpenTelemetry-based telemetry vendors' systems, then maybe, you know, you will need a mapping. And in fact they do have an extension to map to OpenTelemetry; I think that's related to traces, but eventually one could have a mapping to OpenTelemetry logs.
E: Okay, so I've not really worked with events very much, so I'm kind of stepping in... I'm not sure what to say about a lot of this; that's for the people that are working with events. But it seems like one of the questions I would have is: what if we just used the CloudEvents model and had, you know, an OTLP transport, probably logs, that just carried the CloudEvents model?
E: Would that meet your needs, and is that something that would be useful, that basically we're delegating the event model to them? And then the next question is: is that something we'd even want to consider? Because two semi-compatible event models seems worse than either one unified model or two entirely separate ones, but it seems we definitely need to be able to translate these CloudEvents one way or the other.
A: So think of it this way, David: you're building a backend which is OpenTelemetry-compatible. It is capable of receiving OTLP data, which can come from multiple sources: from the OpenTelemetry Collector, from the SDKs we build. And you're building your backend UI so that it can do the things that I was describing, like filter by the type of the... not type, by the domain of the event, right? So: I only want to see Kubernetes events.
A: There is a variety of Kubernetes events, but they all have domain equals k8s, or whatever we choose. That seems like a reasonable feature to have in the backend, right? Now, you also want to receive CloudEvents, why not, right, in the same system. And somewhere, maybe when you receive the CloudEvents, you have to do this translation, because internally you want to have this single data model for storing the events.
A: And, ideally, you want this translation to happen in a way that the other capabilities of your system continue to work, right, like the filtering by the domain of the event. This is where it is valuable to be able to define this mapping in a way that makes sense conceptually, right, where the semantics are preserved somehow. Otherwise, yeah, sure, you can somehow envelope the cloud event inside the log record.
A: Or, you know, in a different way: I'll dump everything into a single attribute, call it cloud event value or whatever. Sure, it will be stored somewhere, but you lose the ability to apply the same capabilities you have in your system to the cloud event, right? Now you have to build similar capabilities for a different data source, right? This is what you lose if you don't try to force them to be compatible and aligned on the concepts and the values of those attributes.
E: Absolutely agree, and I think I asked my question poorly, I guess. What I'm trying to say is: is the event model that we have been building giving us anything? How much is it adding in capabilities over if we just standardized on CloudEvents, so that we didn't have two incompatible things that we were trying to reconcile? Yeah.
A: So, essentially, with regards to this specific topic, you're saying: if we give up the two separate attributes, and have a single attribute whose value is exactly what the CloudEvents spec expects (so we define it in the same way, the semantics are the same), if we give up that, then what do we lose, right? Is it a significant loss? Are we okay with that loss, or no?
E: Like, you know, just to kind of give a slightly different vision on this, right: we're really looking at the OTLP transport side here. But if we look at the receiver side, you know, being able to have a CloudEvents receiver that then pulls in, like, all of their different transports could be really useful.
C: And to push on that thread a bit further, maybe just slightly different, or maybe it's the same thing that you're talking about, David: what if there was no OpenTelemetry event API at all? What if there was only a CloudEvents API for producing CloudEvents, and we wrote a standard appender to bridge CloudEvents into the OpenTelemetry log SDK or API?
B: We are actually doing the same thing, effectively, because the next topic in the list I have is essentially that, in the log record attributes, we are going to mirror everything that's in the context. So the context attributes are not in the event.data; the context attributes are all the attributes, and the data is in another attribute called event.data.
A: There's what?

C: It's in the chat. Okay, sorry, I should have linked it.
C: Yeah, correct, and there are SDKs for a number of other languages too; it looks like eight or nine.
A: That's interesting. What do you guys think? If this exists, and if we can have appenders that simply bridge this to our logging API, do we even need an events API, our own events API, or could you just use this? Santosh, I guess I'm asking this question to you: do you think that for client-side events you could just use the CloudEvents SDK, obviously with an appender that bridges it through to OpenTelemetry?
B: Well, I think the API itself, you know, is not a big requirement, but the second part, how it maps into the log record, is.
B: Yeah... no, I... you were asking, so let me answer that. I think I mentioned in one of the comments, too, that if you look at who all are asking for the API, it's not just RUM, right? It is also the different vendors, who today have APIs for customers to create events. So are we going to ask them also to use the CloudEvents API?
E: You know, back to what Tigran was just saying: there are a lot of different event APIs, or event formats, but what we're kind of talking about is that central format, right? So if, you know, AWS isn't on board the CloudEvents train, and they've got their own event format, then you've got an AWS receiver, and internally we format it into, you know, our internal representation, which, if we went down this way, would be the CloudEvents representation. And then we go in and we process it in a uniform way.
B: Yeah, I don't see a big concern with that, if it's just the API that would be taken out, because I think the meat is in the second part:
B: How do we map the data into the log record structure? Yeah.
C: Well, you know, I think that's an interesting conversation to have, but I think we'd have to assume that all of the bits of data that are on the cloud event would make it onto our log record. And so, you know, you could safely assume that if you can use the CloudEvents API to represent your data, all that data would make it onto your OpenTelemetry log record, and, you know, make it to your back end. So...
C: No, I'm just saying, like, you know, we could argue about the semantics of that, but I don't think it's actually that important; like, you know, all the data is going to end up there. The more important thing is, I guess, the discussion of whether this is a direction that we would want to go. Yeah.
D: I think that doing so would reinforce the kind of thing that I keep coming back to, which is that we're building a log record transport protocol, and other things can be layered on top of that. But the thing that we should care about is: do we get that log record transport format right? And worrying about building an event...
D: ...API is something that should be done on top of that, whether that's CloudEvents or we build our own. And then, when you get to domain-specific event APIs, those are built yet one level removed from that, even, I think. So, from our perspective, it should matter little whether that event API is one we build or one that CloudEvents already has.
C: Yeah, and, you know, just kind of thinking out loud a bit here: we could, in an early version, punt on having an OpenTelemetry event API and say, hey, if you want to use events, or something like events, with OpenTelemetry, we recommend you use the CloudEvents API and these bridges. And we could define a compatibility document that describes how to author those bridges.
C: But, you know, as time goes on, if we find that CloudEvents is kind of diverging from how we're thinking about events in OpenTelemetry, or if the CloudEvents API is insufficient in any way, then that could be the motivation to develop a dedicated OpenTelemetry event API.
E: And I think it would also be worthwhile, if we're looking at kind of blessing the CloudEvents API, that we should probably talk to them, you know, fairly soon, and see if that's something where, you know, we can build an ongoing relationship, where it's not just...
A: Okay, let's do that then. Please do that. And I think it's definitely worth thinking about this as a possible direction. I clearly see the benefit of not having our own API just yet, right, and then maybe later in the future. But for now we say: this is the CloudEvents API, you can use it.
A: There's a logging API in Log4j, there's an events API in CloudEvents; they all work fine with OpenTelemetry through bridges, or appenders. We define how they are mapped, how the mapping of the fields happens; it's non-ambiguous. You can use it. I like it, I don't know, for some reason. Yeah.
C: And one other benefit of this is... well, we've been kind of circling around what exactly the design constraints of an event API are, like, what do we want to incorporate into an event API. So by punting on that decision for now, we can, you know, aggregate a list of the things that we actually care about in an event API, and not jump to conclusions. So I agree, and this gives us more time to collect data and make better decisions. Yeah.
B: I mean, we don't have much time. Shall we discuss it?
B: We talked about it in the last meeting, and I created this issue to explain what we want, specifically. And that is: we want to be able to describe a schema in such a way that, you know, we can validate that each type of an event, each type of, you know, span, has the specific attributes that we are expecting. And I noticed that in the semantic conventions today there are some separate documents, like for HTTP.
B: You know, these are the expected attributes, and it has attributes from multiple namespaces, like http and net. Similarly, for DB there are db. as well as net. namespaces. So, is there a formal way to describe what each object contains? Is this even possible, or, like, intentionally, explicitly available in the semantic conventions in OpenTelemetry? Or is that something that could be added, if there is interest?
A: So, I mean, we can extend the generator, the YAML file, if it's not expressive enough. It depends on what you want to define, right, specifically, because I think, for now, what is possible is basically a flat list of attributes, right, just a set of key-value pairs. The most you can do...
A: ...is say what the type of the value is, and possibly, like, an enumeration of the possible values there, right. But if you want to be more expressive there, like nested values, or kind of something more complicated, which I think is what we expect to happen with the events, then what exists is maybe not good enough, right. In that case, no.
B: Not just the nesting; I'm initially more interested in the contents aspect, like: this event type has these...
A: So conditional, essentially. You're saying: somehow, the presence of some attributes based on the presence of another attribute, or, actually, on the value of another attribute, something like that. I think that's definitely not possible with the current generator; it doesn't allow expressing ideas like conditional presence based on the values of other attributes at all. But you're looking for that, right? That's what you want to do, essentially: have some sort of a table which says, if the value of the event name equals...
A: ...some value, then the following attributes should be present, with a list of attribute definitions, yeah, right. So, I don't know, maybe you could express this in prose, because you can write, like, actual human-readable sentences there. It's not supposed to be, like, a machine-readable definition anyway, at least in the semantic conventions. So maybe, yeah, you could do that: in the description, include that these N attributes should be present if, like, the event type equals some value. Something like that.
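[Editor's note: even if the rule lives only in prose, a consumer can still check it mechanically. A sketch of such a check; the event name `app.lifecycle` and the required attribute `app.state` are invented, not semantic conventions.]

```python
# Prose rule, encoded by hand: "if the event name equals X, these
# attributes must be present". Both names below are hypothetical.
CONDITIONALLY_REQUIRED = {
    "app.lifecycle": {"app.state"},
}

def missing_attributes(attributes: dict) -> list:
    event_name = attributes.get("event.name")
    required = CONDITIONALLY_REQUIRED.get(event_name, set())
    return sorted(required - attributes.keys())

print(missing_attributes({"event.name": "app.lifecycle"}))  # ['app.state']
print(missing_attributes({"event.name": "app.lifecycle",
                          "app.state": "created"}))         # []
```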
B: Okay, okay, yeah, I think, surely, we can start with that. But as a follow-on, you know: is there any interest from other teams in this?
A: When you say other teams, you mean outside logging, like for spans or metrics, or what do you mean? Yes.
C: Conditional requirements, so that's come up. So, if you look at the semantic conventions for traces and HTTP, some of the fields are conditionally required, and they say things like "this attribute is required if and only if this other condition is true", and then they express that condition in prose. And so there are some elements of what you're talking about there. But in terms of the nesting, I think... was it Tigran?
B: I think it's there in the next agenda item.
B: Yeah, but that PR from Tigran, I thought its scope is only enabling map values, a map for attribute values.
C: It is, it is, but, like, the examples that he gives in that PR list other use cases for complex types, and so that list of use cases would be kind of the set of things that might be interested in the same problem you were talking about. Got it, okay, okay.
B: The idea here is also that having schemas defined allows us to take a schema-first approach. There are security teams that are looking for an understanding, for different types of objects, of, you know, what exactly is being received. So it's going to be helpful in the long term, having, you know, schemas defined. Yeah.
B: Yeah, okay. So, for now, I'll start with just listing it in the MD file, and not worry about the YAML.
A: Which one, it's the event data you mean?
B: Yeah, yeah, because event.data, I think, depends on basically the semantic conventions tools in the build-tools repo. I think it doesn't allow the attribute values to be a map.
C: A quick note on that: so, I was doing a little prototyping in Java to see if Java's attributes representation could be extended to support complex types without, you know, any sort of breaking changes, and it seems like it can be. I got pretty far in that prototype, and it seemed promising. I have gotten some pushback from the Java community about whether or not we actually want to do that, though, so, yeah.
B: Yeah, I think that's expected. So, can you have them comment on Tigran's PR?
C: I can. How about this: how about I clean up my prototyping a bit and comment on Tigran's PR, to say, hey, I proved out that this is possible in Java, and maybe I can CC some of the folks and ask for their feedback directly. Yeah.
A: Yeah, I mean, yeah. If we lose this PR, then we just need to add the machinery just for logs, right, in the generator or wherever, whatever else we need. I still believe this is useful for other signals, so, really, I don't want to give up immediately, right. Let's try to make this happen; if it doesn't, then we just do it for logs only.
A: Okay. So, and I think event.data also probably depends on that, and also on the other discussion we had about adopting CloudEvents more, in which case, maybe, I think, it impacts what we do with event data as well.
A: So, you know what, I think yes: if we have a cloud event, which has a separate definition of the data (it's very separate in their case), I think it's natural that we map it to a separate attribute in the log record, and a good name, maybe, is event.data.
A
So
maybe
we're
forced
to
do
that
if,
if
we
go
the
if
we
choose
to
to
adopt
Cloud
events
as
our
initial
understanding
of
the
events,
then
somehow
it
also
nudges
us
into
the
direction
of
using
the
event.data
as
an
object
name,
something
like
that.
But
yeah.
Maybe
that's
a
supporting
argument,
not
not
an
argument
against
or
an
argument
to
do
something
differently.
There
yeah.
A: Okay, I think I'll just open a PR... sorry, an issue, to have offline discussions about better adoption of CloudEvents, and then we will also seek to find the right people from their community, and see if we can work together, see what comes out of that. Please do that, Santosh, if you know anyone, and I will try to find other people myself as well. We probably need to find more than one person anyway, so, yeah, you can do that and I can do that in parallel.
F: ...different than what we've been discussing, yeah. I've had this PR open for a little while; I think Jack, most recently... So, this PR introduces environment variables for controlling attribute limits, limits on the attributes of log records.
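[Editor's note: for context, the trace side already has a span attribute count limit variable; the log-record counterpart under discussion would presumably look like the sketch below. The variable name `OTEL_LOGRECORD_ATTRIBUTE_COUNT_LIMIT`, the default, and the truncation rule are assumptions for illustration, not the settled outcome of the PR.]

```python
import os

def log_record_attribute_limit(default: int = 128) -> int:
    # Hypothetical name mirroring the span-side variable; the PR under
    # discussion is what would actually settle it.
    return int(os.environ.get("OTEL_LOGRECORD_ATTRIBUTE_COUNT_LIMIT", default))

def apply_limit(attributes: dict) -> dict:
    # Keep the first N attributes in insertion order, drop the rest.
    return dict(list(attributes.items())[:log_record_attribute_limit()])

os.environ["OTEL_LOGRECORD_ATTRIBUTE_COUNT_LIMIT"] = "2"
print(apply_limit({"a": 1, "b": 2, "c": 3}))  # {'a': 1, 'b': 2}
```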
F
The
name
of
the
attribute
or
sorry
environment
variable
is
open
for
discussion.
Question
I
think
Jack
lays
out
some
pretty
good
reasoning.
He
refers
to
the
naming
that
we
have
for
attributes
for
traces
for
for
tracing,
so
we've
got
some
environment
variables
with
traces
in
the
name
and
spans
in
the
name
and
it's
kind
of
based
off
of
the
context
in
which
the
environment
variable
resides.
F
So
if
others
feel
differently,
I
don't
know
I'd
be
open
to
changing.
I
personally
do
not
have
a
strong
opinion
in.
A
C
C: I think there's going to be inconsistency somewhere, and it's just a matter of where.
F: To play devil's advocate with your argument, I'd be like, oh my gosh, you know: at least, I think, in the mind of an end user, traces and spans are kind of a thing that they understand, whereas the subtle difference here between logs and log records feels more like an OpenTelemetry-ism than a standard industry kind of term. So I could argue the other way as well.
D: I do think, though, that the distinction that Jack outlined in that comment makes a lot of sense: like, if you're talking about things that deal with logs as a signal, you're dealing with more than just a log record there. But when you're talking about setting specific attributes on a log record, then you are dealing with a particular record.
E
Exactly
I'm
sorry
I
was
trying
to
amplify
what
that
I
agree
with
with
Jack's
proposal.
I
think
that's
exactly
right,
but
it's
very
specific.
A: Yeah, great, I think. So I guess the question is going to be whether it's going to be accepted, because of the moratorium on new variables. It probably should be, because it fits the constraints that were set out, as far as possible.