From YouTube: 2022-05-18 meeting
Description: cncf-opentelemetry meeting-2's Personal Meeting Room
A: Yeah, now I just want to get some initial thoughts on this. So for the RUM events: the RUM agents typically can't do any authentication with the back end, so they are going to send data unauthenticated, and for the back ends to trust that, some of us in the Client Telemetry SIG are thinking of adding some payload validation mechanisms. And I noticed we have something called schemas in OpenTelemetry. They don't serve the purpose of validation, but I wanted to see if there are any reactions or initial thoughts on whether we should consider extending the OpenTelemetry schemas to include data validation. Not full-blown validation, but something simple to do some basic validation.
B: So I think it's a possibility. The initial design for schemas was to serve a specific purpose, which was the ability to evolve the shape of the emitted telemetry, to allow it to change in a controlled way, so that the schema describes the changes. Essentially, it tells you what the deltas from one version to another are. It doesn't describe the full shape of the data, so right now it's not useful for saying whether a particular piece of telemetry conforms to a schema or not.
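The delta idea described above can be sketched in a few lines. This is a minimal illustration of applying version-to-version attribute renames, not the real OpenTelemetry schema-file syntax; the version numbers, attribute names, and the `SCHEMA_DELTAS` structure are all assumptions made for the example.

```python
# Hypothetical delta table: each version lists attribute renames
# relative to the previous version (illustrative names only).
SCHEMA_DELTAS = {
    "1.1.0": {"rename_attributes": {"http.status": "http.status_code"}},
    "1.2.0": {"rename_attributes": {"net.peer.ip": "net.sock.peer.addr"}},
}

def upgrade(attributes: dict, versions: list) -> dict:
    """Apply each version's renames in order, converting telemetry
    recorded under an older schema version to a newer one."""
    result = dict(attributes)
    for version in versions:
        renames = SCHEMA_DELTAS.get(version, {}).get("rename_attributes", {})
        for old, new in renames.items():
            if old in result:
                result[new] = result.pop(old)
    return result

# Telemetry recorded under the oldest version, upgraded through two deltas.
upgraded = upgrade({"http.status": 200, "net.peer.ip": "10.0.0.1"},
                   ["1.1.0", "1.2.0"])
```

Note that nothing here says whether `{"http.status": 200}` was a *valid* record to begin with; the deltas only transform, which is exactly the limitation being discussed.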
B: It only allows you to take telemetry and convert it to another version within that schema family. But this topic, I think we touched on it a few times in the past: the ability to describe the entire schema, not the changes to the schema but all of it, the current shape of the data, the expected names of attributes, the expected values.
B
What
is
allow
it?
What
is
not
allow
it?
It
is
more
complicated,
I
believe,
as
a
capability
right,
so
you
will
need
to
make
the
schema
file
expressive
enough,
so
that
what
is
permitted
but
by
that
file
is
all
of
the
possible
things
that
we
want
to
express
in
telemetry
yeah.
So
it
will.
I
mean
it
will
make
the
schema
file
much
larger.
B
Its
maintenance
will
become
significantly
more
complicated,
and
I
think
if
we
do
that,
it
needs
to
be
tied
to
the
semantic
conventions
generator
in
a
sense,
because
the
the
yaml
files
that
we
have
that
are
the
source
of
the
semantic
conventions.
We
generate
them
from
that
in
a
way
that
is
already
what
you
would
call
a
schema
expressed
in
yaml
form,
except
it's
not
really
fully
suited
for
using
by
a
validator.
B
It's
it's.
It's
really
more
like
something
that
we
use
to
generate
human
readable
semantic
conventions.
It's
not
formalized
enough
to
become
an
input
to
a
validator
right
of
the
telemetry,
so
your
validator
needs
to
take
as
an
input
the
the
telemetry
like
the
rom
events
right,
the
log
record
and
and
the
the
schema
file,
and
then
we'll
do
this
validation
right.
Do
this
matching
see
this
data
matches
this
part
in
the
schema,
etc?
The
semantic
conventions
as
they
are
now
are
not
good
enough
for
the
threat
for
that
job.
B
They
need
to
be
a
lot
more
formal
like
it
needs
to
become
a
formal
description
language
for
for
data
right,
and
I
don't
know
if
we
want
to
invent
our
own
or
we
use
something
that
exists
out
there.
Maybe
there
is
something
that
we
can
reuse
so,
but
again
I
wouldn't
want
to
make
it
completely
detached
and
separated
from
the
semantic
conventions
generator.
Whatever
is
the,
however,
in
whatever
means
we
describe
the
the
shape
of
data,
the
schema,
I
think
it's
highly
highly
desirable
that
that
same
data
is
used
for
generating
the
semantic
conventions.
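To make the validator idea concrete, here is a minimal sketch of the "full shape" description B is asking for and the matching step a back end would run. The schema structure, the event name, and all attribute names are hypothetical; this is not the semantic-conventions YAML format or any existing OpenTelemetry API.

```python
# Hypothetical formal description of one event type: which attributes
# are required, which are optional, and what types they must have.
EVENT_SCHEMA = {
    "browser.page_view": {
        "required": {"event.name": str, "url.full": str},
        "optional": {"http.response.status_code": int},
    },
}

def validate(record: dict) -> list:
    """Return a list of problems; an empty list means the record conforms."""
    errors = []
    name = record.get("event.name")
    schema = EVENT_SCHEMA.get(name)
    if schema is None:
        return ["unknown event name: %r" % name]
    for attr, typ in schema["required"].items():
        if attr not in record:
            errors.append("missing required attribute: " + attr)
        elif not isinstance(record[attr], typ):
            errors.append("wrong type for " + attr)
    for attr, typ in schema["optional"].items():
        if attr in record and not isinstance(record[attr], typ):
            errors.append("wrong type for " + attr)
    return errors
```

A receiver could reject (or down-rank) any unauthenticated payload whose records fail `validate`, which is the "basic validation" use case from earlier in the discussion.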
A: ...visible to the public, right. So any OTLP data that the agents and the mobile apps and the browser apps send will have to be unauthenticated, and that's how a lot of products like Google Analytics work. They don't do any authentication, but they have measures in place at the receiving side to identify...
A: Anything going wrong, right. So one of the measures we were thinking of is to at least do some payload validation. What if someone is sending half-baked data, just for the sake of hogging our CPU, where the payload itself is OTLP-compliant but the content is all garbage? So we want to do some basic validation of the content.
C: Yeah, okay, now I understand the problem. I was participating in developing real user monitoring at Sumo, and this is essentially the same thing, client-side monitoring, and the problem is that you cannot make any assumptions about the data. So I think that, unless this is something used only within your organization, you still need to provide some headers. So if we're talking about vendors, then clients will need to provide some authorization.
C
So
you
can
tell
like
each
organization
even
owns
the
data,
so
that
still
something
needs
to
be
included,
that
bart
it's
pretty
much
public
and-
and
this
is
this-
is
like
a
huge
problem.
So
we've
been
considering
several
solutions
there
and
and
for
reference
one
one
class
of
solutions
was
adding
calculation
of
some
some
payload
like
by
by
the
client.
So
the
client
could
not
abuse
this
very
much,
because
there
is
some
operation
that
takes
some
cpu
cycles.
C
That
needs
to
be
calculated
each
time
and
it
generates
some
some
token,
which
then
is
validated
by
the
server
and
the
validation
is
much
quicker
than
the
generation.
So
it's
it's
that
there
are
also
some
algorithms
that
that
allow
for
that.
But
the
problem
we've
been
facing
is
that
many
of
these
algorithms
are
based
on
time.
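The asymmetric-cost scheme C describes (expensive for the client to generate, cheap for the server to check) can be sketched with a hashcash-style proof of work. This is only an illustration of the idea, not what any vendor actually shipped; the difficulty parameter and payload are assumptions.

```python
import hashlib
import itertools

DIFFICULTY = 12  # leading zero bits required in the digest (assumed parameter)

def _leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def client_mint_token(payload: bytes) -> int:
    """Expensive on the client: search nonces until the hash is hard enough."""
    for nonce in itertools.count():
        digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
        if _leading_zero_bits(digest) >= DIFFICULTY:
            return nonce

def server_check_token(payload: bytes, nonce: int) -> bool:
    """Cheap on the server: a single hash per submitted payload."""
    digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
    return _leading_zero_bits(digest) >= DIFFICULTY
```

Generation costs on the order of 2^DIFFICULTY hashes while verification costs one, which is the asymmetry that makes bulk abuse expensive. Real deployments often bind the challenge to a time window, which is exactly where the clock-skew problem mentioned next comes in.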
C
So,
let's
say,
there's
like
a
certain
time
time
period,
when
a
given
kind
of
token
is
being
generated
and,
and
the
clients
frequently
have
their
time,
not
synchronized,
for
whatever
reasons
like
this
is
not
common,
but
it
might
happen
and
for
many
customers
this
is
actually
a
bigger
problem
to
lose
some
data
than
let's
say,
have
this
hypothetical
issue
with
abusing
the
end
point
and
sending
some
fake
data.
Another
class
of
of
the
solution
here-
and
this
is
something
that
was
maybe
more
interesting-
is
having
tokens
that
come
from
two
sources.
C
So
what
one
source
is
coming
from,
the
let's
say
the
user.
Essentially,
they
may
provide
some
sort
of
source
of
that
token.
They
can
replace
it
quickly
if
needed
like
if
there's
some
leak
or
maybe
have
it
separately
for
for
each
session,
even
by
the
user,
and
then
the
then
sim
distance
like
a
much
more
interesting
and
sophisticated
solution.
However,
it
will
probably
need
to
be
extended
in
the
protocol.
C
Like
so
the
the
protocol
that
the
client
uses
would
need
to
to
provide
support
for
that
kind
of
thing,
and
I
I
found
this
interesting.
I
recall
that
some
time
ago
we've
been
also
discussing
the
issues
with
time.
Let's
say
differences
like
you
can
actually
measure
the
difference
of
the
time,
as
reported
by
the
client
and
and
by
the
server,
and
this
can
be
also
used
for
normalization,
and
this
is
like
a
similar
problem
because
it's
also
touching
the
protocol.
B: Short-lived tokens, you mean. Essentially you're saying something that needs to be rotated. You don't hard-code it once, so that anybody who has access to it can generate whatever and send any garbage to the back end; instead it's something short-lived. Essentially, that requires coordination with the back end that the clients of the application are talking to.
B
So
if
it's
a
javascript
that
is
loaded
from
somewhere
at
the
time
of
the
loading,
you
probably
can
generate
a
token
with
very
short-lived
that
you
cannot
re,
you
can
only
use
for
a
short
period
of
time
but
which
can
be
renewed
if
you
connect
through
otlp
before
tlp
is
aware
of
that
on
every
response.
It
can
give
you
another
token
to
use
next
time,
but
it
always
rotates.
So
there
is
no
way
like
if
you
copy
paste,
it's
not
going
to
work
right.
One
of
those
is
going
to
lose
access,
something
like
that.
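The rotation scheme B sketches, where every accepted export response carries the replacement token, can be illustrated server-side in a few lines. This is a toy sketch under assumed names (`RotatingTokenIssuer`, `handle_export`); real OTLP has no such mechanism today, which is the point of the discussion.

```python
import secrets

class RotatingTokenIssuer:
    """Back-end sketch: each accepted request consumes the presented token,
    and the response carries its single-use replacement."""

    def __init__(self):
        # Bootstrap token, conceptually handed out when the page/script loads.
        self._valid = {secrets.token_hex(16)}

    def bootstrap_token(self) -> str:
        return next(iter(self._valid))

    def handle_export(self, token: str):
        """Return the next token on success, or None if the token was stale."""
        if token not in self._valid:
            return None
        self._valid.discard(token)      # an old token can never be replayed
        fresh = secrets.token_hex(16)
        self._valid.add(fresh)
        return fresh
```

Because the presented token is invalidated on use, a copy-pasted token fails as soon as either holder exports once: exactly the "one of those copies is going to lose access" behavior described above.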
C: Exactly. Because, with this kind of schema validation, I think it's useful, but you can still send data that is perfectly valid against the schema and yet fake. So, for example, you can send data about, I don't know, some error logs which are not really true, or some event durations that are not true, and this can mess up, let's say, the dashboard that someone has built based on this data, but it's perfectly valid data. So it's...
B
Yeah,
I
don't
think
schema
validation
is
going
to
prevent
abuse.
It's
more
like
catch
mistakes.
To
me
right,
the
clients
may
be
unintentionally
producing
wrong
data,
and
you
do
want
to
know
whether
what
you
receive
on
the
back
end
is
actually
valid.
Are
you
going
to
be
able
to
process
it,
or
if
this
is
chunk
right,
you
want
to
reject
it,
maybe
as
soon
as
possible,
but
I
agree
with
you.
I
don't
think
it's
going
to
prevent
intentional
abuse
right.
B
It's
it's
only
raising
the
bar
slightly
to
those
who
just
randomly
generate
garbage
and
make
it
a
bit
more
difficult.
You
have
to
randomly
regenerate
something
that
is
not
really
garbage
but
looks
like
it
confirms
to
the
schema.
It's
probably
not
a
very
high
bar
to
clean,
like
if
people
want
to
do
that,
they
can
do
that
right.
A
Yeah,
especially
sorry
go
ahead.
Yeah,
especially
with
otlp.
The
payload
format
is,
is
public
right
with
so
far.
You
know
some
of
the
you
know
the
payload
for
each
of
our
products,
you
know
has
been
proprietary,
so
it
takes
more
effort
for
people
to
you
know,
reverse
engineer
what
would
be
a
valid
payload.
It's
not
hard,
but
it's
more
work,
whereas
with
the
open
telemetry,
the
format
is
public.
So.
D
And
I
was
just
going
to
add,
though,
like:
if
so,
if
the
goal
is
to
produce
a
high
bar
to
prevent
abuse,
I'm
not
sure
the
short-lived
token
gets
us
very
far
on
that
front,
because
anybody
that
is
looking
to
abuse
this.
We
just
need
to
understand
whatever
the
whatever
the
protocol
is,
the
like
clients
use
to
connect
and
obtain
new
tokens
from
the
collector,
presumably
that
they're
sending
to
and
that's
going
to
be
open
source
code.
A: We'll need more time for that. Okay, so we'll go through these topics, two, three, four, five, these four topics, quickly. So I think the first one is: yeah, I saw your note in the reply to my comment about using the resource to identify a domain, and what I really meant was that domain by itself is probably not a well-defined concept.
A
So
what
I
mean
by
domain
is
an
event
name,
you
know,
could
be
unique
within
a
given
domain.
Like
let's
say,
browser
is
a
domain.
Mobile
is
another
domain.
Kubernetes
is
another
domain,
so
things
you
know
very,
you
know
high
level.
Apm
could
be
another
domain.
A
So
within
that
domain,
your
all
your
event,
names
are
unique,
and
if
that
domain
can
be
identified
at
the
resource
level,
then
each
attribute
again
doesn't
need
to
have
like
you
know,
browser.prefix
right
so.
A
So
the
domain
as
a
scope
is
is,
we
are
thinking
of
using
some
resource
attributes
to
identify,
and
I
was
thinking
that,
maybe
that
is
sufficient.
B
So
I
I
guess,
let
me
maybe
step
back
a
bit
and
you're
right.
Let's
try
to
define
what
the
domain
is
right.
What's
the
purpose
of
that
domain,
essentially,
it
serves
as
a
namespace
for
the
event
names
and
within
one
domain
you
should
guarantee
that
there
are
no
clashes
in
the
in
the
names
of
the
events
yeah.
It
essentially
forces
you
to
coordinate
creation
of
new
event
names
within
that
particular
domain.
B
Some
other
domain
can
make
independent
decisions
about
what
the
event
names
they
want
to
do,
but
whatever
you
call
a
domain,
you
force
everybody
who
works
in
this
domain
and
who
wants
to
invent
an
event
name
right.
They
want
to
come
up
with
a
name
for
a
new
event.
They
have
to
coordinate
together.
It
has
to
be
some
central
repository
which
lists
all
of
the
event
names
or
describes
the
rules
about
the
event
names
which
guarantee
that
all
those
event
names
within
that
particular
domain
are
unique.
There
are
no
clashes.
B
That's
the
purpose
right!
That's
how
you
would
do
that
yeah
now.
If
we,
if,
if
that's
a
valid
definition,
then
I
think
if
we
say
that
the
browser
is
the
domain
to
me,
that
is
too
large
a
domain.
I
mean
it's
too
big,
in
my
opinion,
to
try
to
enforce
the
uniqueness
for
everything
that
comes
out
of
the
browsers,
because
and
again
the
same
example.
If
I'm
writing,
let's
say
profiling
for
javascript
that
can
run
inside
the
the
browser
and
I'm
writing.
Let's
say:
client-side
instrumentation
that
generates
user-initiated
events.
B
Those
are
probably
different.
Work
groups
right,
different
people.
We
are
by
saying
that
that's
the
same
domain,
we're
forcing
these
people
first
of
all
to
work
together.
They
must
coordinate
together,
but
we're
also
saying
that
the
profiling
in
the
browser
may
have
different
names
of
the
events
than
the
profiling
in
the
mobile
or
the
profiling
or
the
back
end
right
that
I
don't
think.
B
That's
that's
a
very
correct
approach,
because
profiling,
like
they,
they
self-identify
themselves
as
a
work
group
that
has
an
interest
in
doing
the
profiling
and
and
probably
they
would
want
to
make
a
decision
about
that.
So
you
see
how
that
is
orthogonal
to
what
the
browser
is
right.
Profiling
is
kind
of
it
can
be
a
cross-cutting
concern
for
browser
mobile
backend
everything
there
right,
but
profiling
events
likely
they
need
names
right
so
and
those
names
need
to
to
be
to
be
guaranteed
not
to
conflict
with
other
events.
B
So
to
me
that
is
a
domain
like
a
domain
essentially
well
from
my
perspective,
it's
something
that
in
a
way
mirrors
people
organization
here
at
open
climate
change.
People
say
that
they
are
a
work
group.
They
name
themselves
as
a
work
group
to
me.
That's
a
domain
because
when
you're
a
work
group,
yes
you're,
then
responsible
for
coordinating
within
that
work
group
and
making
sure
that
the
names
of
the
events
that
you
come
up
with,
they
are
unique.
They
don't
conflict
within
that
domain.
A
Well,
browser
is
also
another
group,
but
it
is
a
larger
vertical.
There
are
teams
that
work
on
the
entire
browser
application
monitoring
right
so,
but
I
understand
that
profiling
is,
is
a
specialty
that
cuts
across
you
know
multiple
verticals
and
that
can
be
a
common
team.
So.
B
Again
with
that
example
of
a
mobile
application,
if
I
have
a
mobile
app
that
maybe
it
generates,
let's
say:
user
events
like
client-side
events,
ram
events
right,
that's
one
type
of
the
events.
Another
type
is
profiling.
Events
from
the
same
application
right
and
the
third
is
like,
like
application
log
records
right
three
different,
those
are
three
different
domains,
really
right
or
maybe
two
domains
and
another
one
is
not
really
a
domain.
B
But
it's
everything
else
right,
something
like
that,
but
those
implementations
of
the
profiling
and
of
the
ram
events
implementations
themselves
and
the
proposals
for
what
do
you
name?
The
events
may
be
done
by
different
people,
different
work
groups.
Really
we
want
to
enable
this
possibility
to
for
them
to
work
independently
for
these
people.
B
So
the
round
folks
say
that
we
want
to
name
our
domain
rom
right
and
the
profiling
team
says
they
want
to
name
them
profiling
as
soon
as
you
you
and
that's
the
only
thing
that
we
need
to
coordinate
throughout
or
inflammatory
make
sure
that
rum
is
not
equal
to
profiling
right
and
then
they
are
free
to
go
and
do
whatever
they
want
to
do
within
that
particular
domain.
I
think
this
is
important.
We
need
to
make
it
possible.
Okay,.
A
So
then,
with
that
understanding,
what
that
means
is
that
the
domain
cannot
be
identified
at
the
resource,
so
it
has
to
be.
You
know,
at
a
deeper
level.
So
if
we
go
with,
you
know
the
domain
as
an
attribute
at
the
instrumentation
scope
level.
I
have
a
question
on
that.
So
anytime,
I
hear
the
term
instrumentation
scope.
A
If
I,
what
comes
to
my
mind,
is
you
know
it's
something
that
will
be
you
know
generated
by
the
are
are
created
by
the
instrument,
the
library,
authors
right,
the
third
party
library,
authors,
yeah,
now
the
same
library
it's
possible
to
use
in
in
different
domains.
B: Yeah, so that's an instrumented library, and there's instrumentation for that library, and in that instrumentation they obtain two different scopes with the same library name, same version number, but different scope attributes, and one of the scope attributes would be the domain. So you will have two different scopes, and when you write the instrumentation in your code, you use one or the other meter or tracer or logger, depending on what exactly it is that you're doing right now.
B
Are
you
emitting
profiling
events
use
the
logger
that
is
obtained
with
the
domain
equals
profiling?
Are
you
doing?
Are
you
emitting
right
now?
Ram
events
in
your
code
in
your
instrumenting
code
then
use
the
other
logger
which
is
associated
with
another
scope.
So
authors
of
instrumentation
libraries
will
obtain
as
many
loggers
as
they
need
to
as
as
many
domains
they
they
intend
to
produce
data
about
for
that
single
particular
instrumented
library.
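The "two scopes, same library, different domain attribute" arrangement above can be sketched as follows. This is a hypothetical provider, not the real OpenTelemetry SDK API; the identity rule (a logger is keyed by name, version, and scope attributes) is the point being illustrated.

```python
class Logger:
    """Toy logger identified by (name, version, scope attributes)."""
    def __init__(self, name, version, scope_attributes):
        self.name = name
        self.version = version
        self.scope_attributes = dict(scope_attributes)
        self.records = []

    def emit(self, body, attributes=None):
        self.records.append({"body": body, "attributes": attributes or {}})

class LoggerProvider:
    """Returns the same Logger instance for the same scope identity."""
    def __init__(self):
        self._loggers = {}

    def get_logger(self, name, version=None, scope_attributes=None):
        attrs = scope_attributes or {}
        key = (name, version, tuple(sorted(attrs.items())))
        if key not in self._loggers:
            self._loggers[key] = Logger(name, version, attrs)
        return self._loggers[key]

provider = LoggerProvider()
# Same instrumented library and version, two scopes split by domain.
rum_logger = provider.get_logger("http-client-instr", "1.0",
                                 {"event.domain": "rum"})
prof_logger = provider.get_logger("http-client-instr", "1.0",
                                  {"event.domain": "profiling"})
```

The instrumentation then picks `rum_logger` or `prof_logger` at each emit site depending on what kind of event it is producing.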
A
But
how
do
they
know
about
the
domain
in
context
as
in
they
know
what
so
the
so
the
library
authors
you
know
they
will
rely
on
the
domain
parameter
being
available
at
runtime.
Then.
B: What is in the attributes, right. But you know that your intent is to start emitting RUM events: I want to install a listener, add an event listener to my JavaScript, to my browser document, and then all of the events that are produced by that callback I'm going to emit as log records with domain equals rum.
A
Where
does
that
value
run?
Come
from
the
library
author
doesn't
know
it
ahead
of
time
or
during
the.
B: It may be a set of one or more domains. So if I bring in instrumentation for an HTTP client library, that may include profiling, and that may include RUM events as well, and maybe that's a configurable option. Or maybe they are separate: there is instrumentation for the same HTTP client library, but as two different instrumenting libraries, one producing profiling events and the other producing RUM events. If you need both, you take both as dependencies and you set up both.
B
If
you
need
only
one
you
install
only
one
but
the
authors
when
they,
then
they
develop
these
libraries.
They
know
right.
How
do
you
do
this
instrumenting
in
the
javascript
world,
you
probably
are
going
to
do
something
like
window
dot,
app
event,
listener
right
for
all
of
the
possible
events
and
when
the
the
click
event
happens,
your
callback
is
executed
your
javascript
callback
and
in
that
callback
you
need
to
produce
a
log
record
right.
So
when
you're
producing
that
block
record,
you
know
that
you're
actually
producing
ram
events.
It's
not
like
you,
don't
know.
A
Okay,
okay,
so
so
we're
essentially
saying
that
the
library
author
should
know
ahead
of
time,
because.
D
If
they
want
to,
if
they
want
to
emit
events
associated
with
the
dome
with
a
domain,
that
is,
I
guess,
defined
as
a
part
of
the
open,
telemetry
semantic
convention.
So
I
I
think
that
you
know
there's
also
this
situation
where
somebody
wants
to
emit
events
that
are
domain
specific
to
their
application.
They
can
specify
a
domain
that
they
don't
know
ahead
of
time.
Well,
they
know
ahead
of
time,
but
it's
specific
to
them.
A: Yeah, and most likely the domain attribute may be missing as well, in which case it will be in a common global namespace, a public namespace. Let's...
B: We could enforce it if we provide another way to obtain a logger: if you obtain it for the purpose of emitting events, then you have to supply the domain name, something like that. So we can have getLogger and getEventLogger, which are almost the same thing, but the latter also requires you to supply the domain name, and it will populate it as a scope attribute in that case. Yeah, I think.
B: Yeah, so get could be a way to obtain a generic logger; I don't know what the right name for the other one is, but it would be another getter that makes it mandatory to supply the domain name in addition to the library name. The version is optional; the library name and the domain name would be the required attributes.
B
That
will
give
us
a
scope
attribute
in
this
correct
word:
yeah,
okay,
okay,
so
that's
if
we
want
to
kind
of
yeah
make
it
make
it
difficult
to
make
a
mistake
right
to
make
it
difficult
to
forget
to
supply
the
domain.
Okay,
okay,
I
mean
it's
very
similar
to
what
we
have
with
the
log
event
right.
You
could
that
that
that's
that
it
forces
you
to
supply
the
event
name,
so
we
could
similarly
force
to
supply
the
domain
name.
Okay,.
A
Okay,
I'll
go
with
that,
so
okay,
so
with
that,
then
at
a
signal
level,
any
event
name
that
we
pass
there
is
no
or
we
don't
have
any
say
on
whether
that
should
have
you
know
a
specific
prefix
or
not.
B
Yeah,
I
think
in
that
case
it's
no
longer
necessary
right.
So
the
the
the
idea
is
that
if
there
is
any,
I
guess,
concern
about
event,
names
being
non-unique,
you,
then
you
must
use
a
domain
and
the
domains
is
domain
is
what
what
ensures
that
all
the
names
within
that
domain
are
unique
between
the
domains.
There
is
no
longer
such
guarantee,
and
that's
that's
fine
right.
A
Okay
and
lastly,
on
this
topic,
you
suggest.
A
Adding
these
two
attributes
event.name
and
even
that
domain
in
the
semantic
conventions-
and
you
mentioned,
reserving
the
event
namespace.
What
does
that
mean
that
reserving
a
namespace
well.
A: Okay, all right. Okay, thanks. So the next topic is this diagram, and I think we can go with what you're saying. I think my question is the reason I added that event builder in the logger interface. I should open...
B
So,
with
regards
to
the
diagram,
can
you
can
you
go
go
back
to
the
diagram,
because
for
me
it's
still
a
question
that
I
don't
I'm
not
sure
about.
We
have
so
from
the
perspective
of
the
of
what
are
the
packages,
our
our,
I
guess,
normal
approach
with
tracers
and
meters
is
that
there
is
an
api
package
and
there
is
an
sdk
package
right
here.
B
I
and
there
is
an
application
package
right.
So
there
is
probably
three,
but
here
I
have.
I
have
actually
a
separate
logging
logging
library
sdk,
and
this
is
what
I'm
not
sure
about.
Does
it
need
to
be
separate
from
log
sdk,
so
it's
essentially
a
dependency
of
log
sdk
or
this
we
just
put
it
on
all
in
on
one
sdk
yeah.
So
I
think.
A
I
think
you
wanted
to
limit
what
can
be
created
from
the
logger
right,
whereas
for
any
anyone
that
wants
to
create
you
know
log
records
with
the
ability
to
set
anything
and
everything
you
know
you
would
use
the
the
bottom
sdk.
A
My
concern
is
that
today
there
are
implementations
for
events.
People
are
creating
events,
but
not
following
you
know
the
conventions
that
we
are
defining
here
like,
for
example,
one
would
be
that
you
know
we
are
saying
that
you
know
all
events
will
have
an
attribute
with
even
dot
name
right
and
you
know-
take,
for
example,
the
kubernetes
events
or
the
ebpf
events.
So
they
are
not
following
that
convention.
That
is
number
one
number
two.
They
I
think
the
kubernetes
events
receiver.
A: Yeah, so they can't use the low-level one, because that's implementation-only, so you need them to use an API.
B
Still
can
be
different
from
the
logger
that
is
used
for,
for
the,
for
the
event
purposes
can
be
different.
I
guess
my
question
was
more
about.
Are
they
literally
different?
Let's
say
we're
talking
about
java
board?
Are
they
literally
different
jar
files
or
they
are
the
same
because
we
have?
We
have
trace,
trace
api
jar.
We
have
trace
as
decage
out
they
right.
There
are
two
different
jar
files
and
the
the
sdk
one.
If
you
don't
you
don't
add
it
to
the
application.
Then
the
api
is
a
node.
Essentially
that's
how
it
works
today.
B: Yeah, I mean, logically, that same jar file can still include all of those boxes. They can be different levels of APIs: one can be higher-level, more like an event-oriented API, and the second one can be lower-level, where you allow fine control over what the shape of the log record is, with full control. Then we can call them the high-level and low-level APIs, whatever names we want to give them, but it can be the same jar file.
B
Is
I
guess,
is
there
a
situation
when
you
would
want
to
take
dependency
on
one
of
this,
but
not
the
other
and
somehow
putting
them
inside
one
jar
file
creates
a
problem
like
let's
say
so
when
you,
when
you
bring
the
the
sdk
as
a
dependency
today,
what
happens?
Is
you
have
to
explicitly
install
the
sdk
right?
You
have
to
explicitly
enable
it
at
the
application
startup
time.
B
If
you
don't
do
that,
even
if
you
compile
with
the
dependency
it's,
it
doesn't
do
anything
right
that
the
api
remains
a
no.
Is
that
correct?
That's
that's
how
it
works
today
right
it
remains
unknown.
So
I
okay.
So
then
I
don't
see
a
problem.
I
think
this
can
be
inside
the
same.
We
can
call
it
sdk
logging,
sdk
log,
sdk
or
whatever
the
name
is
and
then
have
two
essentially
two
sets
of
apis
and
the
implementation
of
the
high
level.
B
A: Yeah, so now I still want to understand: do we really need two implementations, at the low level and the high level? Because if there are requirements from other applications to generate events, where they need the ability to set any field in the log record, then it's essentially a full-featured logging implementation, and then it's essentially one and the same implementation.
B: Any log records, yeah. If the answer is yes, then I guess the existing log emitter should essentially be merged into the upcoming logger; they become one thing, there's no difference between the two. I don't know what the answer is. Maybe we would just write the code and see what it looks like; maybe it's more a question of prototyping and seeing what the ergonomics of the resulting API are, what they look like.
A: Yeah, it's essentially the same thing we just discussed, where in the logger class, in the logger interface, I had this event builder in addition to these convenience methods.
B: ...is event-specific, which is fine, and the question I had is: do we want both the builder pattern and the other one, what was it, the log event methods? I mean, that's probably something, though...
A
You
mean
this
could
return
a
log,
a
log
record
itself,
and
we
don't
need
both
of
these.
You
mean
is
that
what
do
you
mind.
D: And then, by the time you configure your OpenTelemetry instance, the fact that it's the SDK implementation is abstracted away. You just get a tracer provider instead of an SDK tracer provider, and from that tracer provider you say get a tracer, but the SDK prefix is only involved when you're configuring the...
D: And actually, in the interest of being consistent: we talked about this idea of having two APIs, a high-level API and a low-level API, and then a separate implementation of that, and that is compelling, because it means that we don't have an application logs API that competes with Log4j and Logback.
B
Yeah,
so
don't
advertise
our
api
as
something
that
you
can
use
as
an
end
user
for
logging
purposes,
advertise
it
for
the
narrow
specific
case
where
that
is
the
purpose
of
it
is
to
install
appenders
and
then
use
your
login
library.
I
guess
that
may
be
enough.
I
mean
we
show
the
intent,
that's
how
it's
intended
to
be
used
and
that
that
may
be
enough
like
we
don't
have
to
actually
prevent
it.
Split
it
in
the
api,
give
it
different
names.
Maybe
that's
that's
unnecessary,
especially
because
we
lose
the
consistency
right.
D
All
we
have
really-
and
I
guess
so,
a
question
though
our
we
have
this.
We
have
this
concept
of
a
logger
and
we
we've
been
talking
about
how
events
have
domains
associated
with
them
and
we've
been
proposing
making
the
domain
a
required
field.
B
Again,
I
think
if
this
was
completely
separate
development
unrelated
to
other
parts
of
open
telemetry,
probably
the
answer
would
be
yes,
but
because
we
have
tracers
and
meters
and
the
way
that
they
work.
I
think
again,
consistency,
maybe
maybe
is
more
important
here,
so
we
just
make
it
a
single
logger
and
then
the
documentation
tells
you
how
you
use
it
for
the
two
primary
use
cases
one
is
for
building
appenders
and
the
other
is
for
meeting
events,
and
then
they
show
you
in
the
form
of
examples.
They
look
different
right.
B: ...it's just useful. You can obtain a logger which doesn't have a domain and then use it for emitting events; then they become events without domains, which I guess is fine. Maybe, why not? Like, I'm building my own application, I control everything, I don't need domains, I'm just going to produce some events from it.
D
Maybe
that
is
okay,
because
you
know
like
there's
the
http
conventions
which
have
required
fields
like
some
of
those
attributes
are
required,
but
you
can
still
emit
telemetry
without
those,
even
though
it's
required
by
the
semantic
convention.
So
in
a
similar
way
yeah,
we
can
say
that
domain
is
required
but
like
if
you
omit
it,
you
know
you're,
just
not
producing
spec
compliance.
It's.
A: So one thought is that domain is specific to events; it's not applicable to logs. So in LoggerProvider.get, for the events, maybe we could make the domain mandatory, and for the application logs case we keep it open, we don't require it.
B: It's not going to be part of that discussion, okay, and then there is going to be another section, which is about how you do events and all that stuff using this API, and that one, everywhere it is relevant, will mention event domains, event names, and things like that. So, kind of two quite separated worlds, in a sense.
A
Okay
and
in
the
api
should,
I
add,
separate,
get
methods
one
for
events
and
one
for.
B
I
think
so,
I'm
not
sure
what
the
names
are
going
to
be,
but
the
one
that
is
intended
for
for
for
events
should
make
it
required
to
to
supply
the
domain
name,
something
like
that
and
again
that
that
is
maybe
also
not
required.
I
mean
again,
we
can
follow
the
I
the
the
the
same
logic
right
we
say:
okay,
here's
the
example:
here's
the
semantic
conventions.
This
is
what
you're
supposed
to
do,
you're
supposed
to
include
the
domain.name
or
what
is
it
event.domain
if
you're
obtaining
it
for
eventing
purposes?
D
Okay
and
santosh,
so
you
know
scope,
attributes,
obviously
aren't
a
thing
yet
so
they're
proposed
and
we're
talking
about
adding
domain
as
a
scope
attribute
so
like
in
some
ways.
I
think
we
kind
of
have
to
leave
that
as
a
placeholder
until
scope
attributes
are
incorporated,
as
a
part
of
you
know,
the
standard
open,
telemetry
java
apis,
so
you
know
which
you'll
be,
which
you're
extending
effectively.
A
So
scope
attributes
are
equivalent
to
duplicating
the
same
attributes
at
each
signal
right,
so
one
way
is
to
leave
it
out.
Another
way
is
to
you
know,
take
it
in
the
you
know
in
in
this
method,
and
then
you
know
pass
it
as
a
signal
attribute
for
now.
D
The
place
that
they'll
manifest
is
so
see
the
logger
builder
down
there
on
53.,
so
we're
going
to
have
the
the
builders
have
an
api
that
allows
you
to
optionally
associate
a
set
of
attributes
with
the
logger
when
you're,
building
it
or
tracer
or
meter
or
whatever,
and
so
yeah
like.
Maybe
you
could
add
that
method
on
the
logger
builder
now,
and
you
know
that
would
reflect
what
it
looks
like
in
the
field
in
the
future.
A: Okay, so setDomain, or something like setEventDomain, yeah.
D
I
don't
know
yeah
we'll
see
how
it
turns
out
like
it
all
depends
on.
You
know
what
we
choose
we
can.
We
can
say
that
you
know
you
just
set
us
attributes
and
you
set
an
attribute
called
event
domain
equals.
You
know
the
name
of
your
domain
or
we
could
have
an
explicit
method
called
set
domain
which
effectively
does
that,
but
in
a
more
you
know,
type
safe
way
or
rigid.
Why.
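The two options D describes (a generic attributes setter versus an explicit, type-safe domain setter) differ only in surface, as this small sketch shows. The builder class and both method names are hypothetical, chosen for the example.

```python
class LoggerBuilder:
    """Builder sketch: set_event_domain is just a type-safe shorthand
    for setting the event.domain scope attribute."""

    def __init__(self, name):
        self.name = name
        self.scope_attributes = {}

    def set_attributes(self, attributes):
        # Option 1: generic setter, caller must spell the key correctly.
        self.scope_attributes.update(attributes)
        return self

    def set_event_domain(self, domain):
        # Option 2: explicit setter, the key is fixed by the API.
        return self.set_attributes({"event.domain": domain})

    def build(self):
        return {"name": self.name,
                "scope_attributes": dict(self.scope_attributes)}
```

Both paths produce an identical scope; the explicit setter just removes the chance of misspelling the attribute key.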
A
Yeah,
the
difference
would
be
that
let's
say
you
obtain
the
logger
with
a
certain
domain.
Then
at
the
log
record,
at
the
logger
builder
or
the
rock
record
level,
you
could
overwrite
the
domain.
A: I'll omit that setter at the log record level for now; I'll only keep it at the LoggerProvider.get.
A: I still have two more topics. I don't know how much time you have; do you have to leave?
A
Okay,
I'll
I'll
go
with
the
the
last
one
first
simpler
one.
So
I
was
thinking
that
for
the
span
events
we'll
we'll
just
say
that
you
know
there
can
be
an
option
at
the
trace
provider,
configuration
to
generate
log
records
for
span
events
and
that
if
the
tracing
is
turned
off
where
the
trace
provider
is
using
a
no
op
implementation,
then
even
the
log
records
will
not
be
generated
just
to
keep
it
simple.
So.
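The proposed behavior, a tracer-provider option that redirects addEvent to log records, and a no-op provider that emits nothing at all, can be sketched like this. Every class and flag name here is hypothetical; this is the proposal's semantics, not an existing OpenTelemetry option.

```python
class LogSink:
    """Stand-in for the logs pipeline."""
    def __init__(self):
        self.records = []

class Span:
    def __init__(self, events_as_logs, sink, noop):
        self.events = []
        self._events_as_logs = events_as_logs
        self._sink = sink
        self._noop = noop

    def add_event(self, name, attributes=None):
        if self._noop:
            return  # tracing off: no span event AND no log record
        if self._events_as_logs:
            self._sink.records.append({"event.name": name,
                                       **(attributes or {})})
        else:
            self.events.append((name, attributes or {}))

class TracerProvider:
    def __init__(self, events_as_logs=False, noop=False):
        self.sink = LogSink()
        self._events_as_logs = events_as_logs
        self._noop = noop

    def start_span(self):
        return Span(self._events_as_logs, self.sink, self._noop)
```

Tying the log-record emission to the tracer provider's no-op state is exactly the "keep it simple" rule A proposes, and also the source of the "surprising behavior" concern B raises next.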
B
You're
saying
as
a
substitution
for
the
span
events,
we
will
instead
generate
log
records
when
you
call
the
ad
event
method
on
the
spam.
Instead
of
actually
meeting
a
span
event,
it
will
do
a
different
thing.
It
will
do
a
metal
auger
record.
Is
that
how
we
okay
yeah?
Would
it
not
be
a
surprising
behavior?
Do
we
really
want
something
like
that.
A
So
I
think,
for
like
ted
and
for
some
others,
I
think
they
believed
that
the
span
events
were,
you
know
it
was
a.
I
don't
know
the
right
number
to
make
a
mistake,
so
they
but
it
since
it's
part
of
a
stable
spec.
It
can't
be,
you
know,
removed,
but
they
want
to
recommend
people
to
use
lock
records
for
for
events
and
therefore
provide
that
configuration
option
at
the
tracer
level.
A: All right, so then the last topic is really the association of span context with log records. How do we prevent that, and what do we put in the API?
B
This
this
shouldn't
be
the
default
behavior
right
in
the
majority
of
cases.
Hopefully
it's
it's.
It's
a
good
thing
to
do
this
association
of
the
context
with
the
log
record,
sometimes
you're
saying
maybe
we
want
to
prevent
it
well,
to
be
honest,
I'm
not
sure
what
do
we
want
to
make
it
look
like
in
the
api.
A
I
think
I
think
the
I
I
don't
have
a
proposal,
but
I
think
the
problem
is
really
that
this
is
a
decision
in
the
sdk
right.
I
think
currently,
the
the
log
emitter
sdk
the
way
it's
designed
is
it.
It
automatically
takes
the
span
context
if
it's
available
in
the
context
yeah,
so
that
as
a
default
behavior
is,
is
not
correct.
D
I
think
it
is
correct.
I
just
think
you
need
to
send
the
ability
to
potentially.
D: ...signal to the SDK to not do that. So, you know, we have this log record builder; what if there's just a method on there that says omit context? Okay, and that need...
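D's suggestion, keep attach-by-default but let the builder opt out, can be sketched as follows. The builder shape, the `omit_context` name, and the stand-in for "current span context" are assumptions for illustration only.

```python
# Stand-in for looking up the active span context at emit time.
CURRENT_SPAN_CONTEXT = {"trace_id": "4bf92f35", "span_id": "00f067aa"}

class LogRecordBuilder:
    """Sketch: span context is attached by default; omit_context opts out."""

    def __init__(self):
        self._omit = False
        self._body = None

    def set_body(self, body):
        self._body = body
        return self

    def omit_context(self):
        # Signal to the SDK not to capture the active span context.
        self._omit = True
        return self

    def emit(self):
        record = {"body": self._body}
        if not self._omit and CURRENT_SPAN_CONTEXT is not None:
            record["span_context"] = CURRENT_SPAN_CONTEXT
        return record
```

This keeps B's "good default" (association happens unless you say otherwise) while giving emitters such as event producers an explicit escape hatch.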