From YouTube: 2021-09-14 meeting
F: First of all, metrics update. Yeah, who wants to give an update on this one?
B: So the metrics SDK is almost ready for the experimental release. We have just one very small PR: it's removing the warning message, allowing all the languages to start doing the implementation. So please review and help to merge that; then we're ready to go, and after that we'll focus on the SDK spec feature freeze.
B: There are a couple of non-blocking issues we'd like to address before feature freeze, and three languages are currently doing the prototype actively. Once we've got to the beta state for all three languages, we should be able to have confidence to declare the spec as stable. And the last message is for the other languages that have not yet started: after this experimental release, it is the official signal that we encourage other languages to start working on this.
F: Nice, thank you so much. Yeah, this small PR that you mentioned, 1912: is that the last one you need before the release? Nice.
F: I think he's not around, but one thing I saw is that he has two PRs: one of them was merged; the second one is, let me look for it... that is 168. I think this is the other one that we want to merge relatively soon. Hopefully there are already some reviews. Actually, there are more reviews than yesterday. I think I'm going to put it there,
F: in the agenda. So please review that; once this is done, hopefully we have everything in place to iterate on the actual PR, on the actual specification for this. So please take a look. Yeah, let's move on from there.
F: Okay, instrumentation update. Is Johannes or Trask around? I think Trask is on holidays; maybe Johannes is around.
F: No, I think there's nobody from the instrumentation group here today. Okay, let's move to the next one then.

F: The next item: yeah, I put a few items there myself. The first one is for capturing HTTP request and response headers as attributes. This PR has been standing for some time now. There are two approvals so far; it needs more reviews. There's one interesting part, a suggestion about capturing all of these headers in a nested map.
F
However,
we
don't
support
nested
maps
at
this
time,
so
we
have
to
decide
what
we
want
to
do,
whether
we
want
to
hold
this
or
we
want.
We
want
to
go
ahead
with
what
we
have
now
or
what's
the
plan.
So
please
review
that
one,
I
think,
even
though
it
doesn't
seem
so
critical.
F
I
think
it's
a
bad
idea
to
leave
a
pr
stale
go
stale.
The
second
item
is
a
little
bit
more
important.
It's
about
making.
You
know
what
value
to
have
for
the
default
compression
for
otlp
no
c
that
is
stable
has
support
for
this.
However,
this
will
be
changing
soon
because
both
go
and
javascript
support
these,
and
they,
if
I
remember
correctly,
they
are
planning
to
do
a
1.0
release
this
week.
F: So we have to decide what to do there, honestly: whether we should do compression by default, or no compression by default.
C: We talked about this yesterday, if I remember correctly, and there was a problem related to the fact that this may affect the collector, and we may need to do something about that. That being said, I haven't tested anything, and I don't know if, for both gRPC and HTTP, the client can initiate the compression; because if so, that means we don't have any problem, but if that's not true, we may have a problem.
F: Yeah, that makes sense, that would make sense to me. Daniel, for JavaScript, or anyone else: do you have any opinion about that?
A: I mean, right now (I'm looking right now) we only default to gzip on the HTTP exporter. Our focus for that is on the web browser. So, you know, we haven't seen any problem with it; it works with an unmodified collector.
D: And on the Go side, we default to no compression for both gRPC and HTTP; we've got a single configuration for both. Looking at it, it looks like a thing where we can change the defaults without changing our API. It would be a behavior change, but as long as both gRPC and HTTP will fall back to no compression if the other end signals that it can't deal with that, I think that's a change that we can make after a 1.0 release.
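The fallback being described could look roughly like this. It is a minimal sketch, not any SDK's real code: the `post` callable, the status-code handling, and the choice of HTTP 415 as the rejection signal are all assumptions for illustration.

```python
import gzip

# Hypothetical sketch of gzip-by-default with fallback: the exporter sends a
# gzip-compressed OTLP payload first; if the receiving end signals it cannot
# handle the encoding (assumed here to be HTTP 415), the same payload is
# retried uncompressed. `post(body, headers)` stands in for a real HTTP client
# call and is expected to return a status code.

def send_with_fallback(post, payload):
    status = post(gzip.compress(payload), {"Content-Encoding": "gzip"})
    if status == 415:  # receiver rejected the compressed body
        status = post(payload, {})  # fall back to identity encoding
    return status
```

A client like this works against receivers that do or do not accept gzip, which is why D argues the default can change after 1.0 without an API break.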
G: Do we need all languages to have the same behavior here, or may the defaults be different for the different languages, because that's what makes sense for the particular language? Like, for JavaScript, if it's in the browser, I guess it does make sense to make the default compressed; but for languages where you typically expect the SDK to send to the local machine, where you have the collector running, it is less so, I guess, right? And then there is the collector itself, which is different from the SDKs.
C: It is very dependent. If I know correctly, on HTTP the compression can be initiated by the client, and the server responds with yes or no, or... I don't remember exactly how it is. But what I'm trying to say is: what happens if a client gzips the request, and then the server fails to parse the request?
G: The server, I think, can signal that it does not support gzip, but we can actually mandate it in the specification, like: the protocol requires that the server supports compression, so that that is no longer a problem, and it's only a decision that the senders need to make, whether they want to send it compressed or not, knowing that the receiver certainly supports both.
G: For the receivers, yeah. How is it a breaking change for the receivers? Like, do you accept... because, because it's...
C: I'm fine, I'm fine to make it mandatory for the receivers to support it. That's, I think, the right start before we can make a decision on the client.
G: Yeah, I think that slightly simplifies things, right? Receivers support both, it's mandatory, and then the clients can make their decisions. And then the question there is: is it an individual decision, let's say per language or per exporter, or is it a universal decision that we make for all? I'm not so certain that it needs to be a universal decision; that's what I'm not sure about.
C: Let's do the correct thing: let's start by doing the receiver-mandatory part; somebody should take that action item. And the second action item will be (and maybe this is something that we can discuss even here) whether the client should, by default, be gzip or not. I think in the SDK it's fine to not be gzip in the majority of cases; only the browser, and maybe mobile in the case of Java, needs to be gzip. Otherwise, I'm fine without it being gzip.
G: Sorry, you're saying that you want it to be on by default for all languages?

C: No, I said: for things that are remote. Java for Android, web for JS, maybe Swift for Apple; and I think that's all that I can think of, yeah.
A: So the collector right now does support gzip on the receiver, so I suppose at least in JS we can document that we're gzipping in the browser by default: if you're using the official collector, it should work, but if not, you should check your compression settings, or something like that.
C: Yeah, that's true, but also, as Tigran pointed out, we can make it part of the protocol to say that an official receiver should support this, to avoid that problem for you.
A: Yeah, okay, makes sense. Yeah, I'm not aware right now of any receivers that don't support it. If anyone does know of one, I would appreciate it if you'd let me know, so that I can document that on the browser exporter. This is only...
C: For OTLP? Sorry, correct: yes, it's for OTLP. But it depends: if you are using a load balancer... so, for example, if you are a vendor supporting OTLP and you are using a load balancer for gRPC, I don't know if that does support this, if I'm not mistaken. With HTTP, I don't know how it is and what the handshake protocol is there. But yeah, anyway, somebody with more knowledge about the HTTP RFCs and stuff should tell us if we are completely wrong or not.
C: Yeah, so... but that's independent. I mean, if Daniel is doing the fallback, he's safe; but on our side, we don't want to have to do all the fallbacks for everyone. So probably it's good to have this mandatory in the protocol. So, I don't know: who wants to own that action item?
C: Sure; ping me on Slack or anywhere, you know.
F: Sweet. Yeah, let's try to do it fast, so the Go and the JavaScript releases are not compromised. Okay, perfect, that's a plan then. Okay, thank you so much for that. Let's move to the next item: Josh Suereth.
H: Yeah, so there was an OTEP from a long while ago around doing resource migrations in the SDK. The idea here is: you should be able to leverage our telemetry schema, and, if I'm an exporter, I should be able to code against a specific version of the resource schema for semantic conventions, and then have a consistent interface of what attributes I have available in the SDK.
H: There are two open questions on the OTEP. The first one is: how should we have access to the schema URL? What does that look like, and how do I leverage it? And the other thing is: does this make sense to get baked into Resource, as proposed, or should this be a separate utility that's added to the SDKs? I'm not aware of any other contentious points, but this OTEP got a lot of attention and not a lot of approval: it has one approval, and it has a bunch of questions.
H: So I guess what I'm asking is: besides those two open questions, which I think can actually be resolved in implementation, and rather easily, is there anything else that's missing here that we need to discuss in person?
H: I mean, that's a possibility. I still think you have that problem then, where you can't write an SDK exporter against a stable API, or you...
H: Yeah, for context: the Google exporters and the Google GCP resource detectors actually have to be released in lockstep now, and so we do not have them in the OpenTelemetry code base. We have them released together in a Google code base, so that they work together, because we can't rely on the stable API.
H
So
this
is.
This
is
one
of
our
attempts
to
try
to
fix
that
by
providing
a
consistent
view
of
what
resources
are
meant
to
have
and
meant
to
be
and
leveraging
the
telemetry
schema
to
like
know
when
changes
have
happened
and
to
code
against
a
specific
version
of
that
in
the
exporter.
C: How is the fact that you have your own repository helping you? Because, for example, if they were in contrib, as other detectors, they would be updated with everything all the time.
H: Yeah, I guess if we put both in contrib, that could help, except... well, anyway, there are a couple of reasons; I can step into all of them if you want, but no, no.
H: So, but... this affects anyone who wants to provide a resource detector and have a stable API for it, or wants to leverage the upstream open source. If someone would provide a resource detector for Google Cloud, right, I would love it if there was a consistent one that existed, that you could rely on, that's stable, and that doesn't have to come from this Google-specific private repo to have stable semantic conventions. If I want to code against semantic conventions and expect, you know, a particular thing... do we want to force people to go through a collector?
G: Yeah, well, I agree. I believe this is necessary; I approved it already. I have only one comment here with regards to the implementation.
G: I think it would be the right thing to do to implement the conversion logic, the schema conversion logic, as an independent library, and use it in the SDKs, so that that logic is also available to other implementers of schema-aware components, like backends, maybe, right? They need to do the schema conversion.
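The standalone-library idea G describes could be sketched like this. Everything here is a simplified stand-in: the schema structure is only a toy version of a telemetry schema file (per version, a map of old attribute name to new name), and the function works on plain attribute dicts rather than protos or SDK internals, which is exactly why such logic could live outside any one SDK.

```python
# Toy stand-in for a parsed telemetry schema file: for each target version,
# a set of attribute renames. Real schema files are richer than this; the
# host.hostname -> host.name rename is used here only as an example.
EXAMPLE_SCHEMA = {
    "1.1.0": {"rename_attributes": {"host.hostname": "host.name"}},
}

def upgrade_attributes(attrs, schema=EXAMPLE_SCHEMA, versions=("1.1.0",)):
    """Apply each version's attribute renames, in order, to a copy of attrs."""
    out = dict(attrs)
    for version in versions:
        renames = schema.get(version, {}).get("rename_attributes", {})
        out = {renames.get(key, key): value for key, value in out.items()}
    return out
```

Because it only touches generic maps, the same helper could serve an SDK, a collector processor, or a backend, which is the reuse argument being made above.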
H: I'm absolutely a fan of that; I agree with that. That was... I guess I didn't say it. This was one of the open questions that I haven't resolved in the specification, but that was the plan for implementation: actually to pull this functionality out as its own independent, consumable piece that we can leverage in SDKs. I think it just needs to exist. The other question, then, is around access to schema URLs and bundling.
H: Yeah, that sounds good. I also agree that any implementation will have to be super flexible about where a good schema comes from. But so those were the two open questions; I think they can be resolved, and I can add comments to the OTEP. I guess the real question is: Bogdan suggested that this should only exist for the collector. Is that still an open concern with this functionality?
C: Will you be able to do a library that will do the conversion? For two reasons: the first reason being, you don't depend on protos, and most likely the library that you're going to write in the SDK will work against the internal data formats, the trace data or metric data (for example, in Java), and will not work against the protos. So it's going to be less useful, as suggested by Tigran, in my personal opinion, unless you do it to work against the protos; but that's not useful for, for example, other exporters that are exporting.
C: But actually, a quick question about that, Anthony: why do you not use the pdata right now?
D: Yeah, yeah. The question of whether our own OTLP exporter uses pdata or not is something we can address later on. I think we've discussed using that to get OTLP/JSON support, which would be problematic with our current proto implementation, but that should be something that we can handle internally, post-1.0. Okay.
G: So that's... that's very valid, you're right, Bogdan. But at the minimum, I would make the part that is about loading the schema, parsing it, validating it, and all that; that definitely can be a separate library, right? At least that portion, and maybe the transformations, if you could define them in terms of some sort of abstract interfaces where the data is not defined concretely.
C: Yeah, so, Josh Suereth, to answer your question: it may be very useful, as you pointed out, to have it in all these cases, but at a minimum, what we can try...
C: My suggestion is: you should try to have it in the collector, because even though it may not help you in 100% of the cases, it's going to help you in 80% of the cases; and then learn from it, understand better, because I'm pretty confident that we haven't fully defined the mutations and everything in the schema URL, and we'll learn during this process. And...
H: Yeah, it's not in the spec, so obviously you shouldn't implement it for Java, because it's not specified yet; it's only an OTEP. Which is, in my opinion... so this is how I've been running this, and this could be wrong: the OTEP is a design doc. Once we agree that this design is useful, then we can go implement it and provide prototypes. Once we have enough prototypes, we can define a spec based on them.
H: Once we have that spec, then it gets implemented everywhere, right? So yeah, as long as that's the process we're following, then my question is: is there something fundamentally flawed in this design, in what's in that OTEP, where we shouldn't even start the prototype? And if not, you know, then I'd like to continue to make progress here.
C: Oh yeah, definitely. I think the blocking issue is probably lack of attention to the OTEP. So I would say it is most likely not something that you have in the OTEP; it is the lack of attention to it.
D: I may need to read this again, but it seems like there might be two use cases here that need to be treated separately. One is the one that can be handled by the collector, which is: I, as an operator of a system, want to ensure that what I export always conforms to schema X. You know, maybe my downstream data destination only supports up to schema X, and I need to make sure that everything gets translated to that.

D: The other, though, which as an SDK maintainer is concerning to me, is resources, and the creation of resources from disparate detectors. The way the resource specification currently stands, if two resources attempt to be merged and they have different schema versions, that's an error: a new, empty resource is returned, and neither of them can be used.
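The merge behavior D is describing can be sketched as follows, with a resource reduced to a (schema URL, attributes) pair. This is an illustration of the rule as stated in the discussion, not the actual SDK code; the exact error signaling differs per language.

```python
# Sketch of the resource merge rule described above: merging two resources
# that carry different non-empty schema URLs is an error, and an empty
# resource comes back, so neither detector's attributes survive.

def merge_resources(old, updating):
    old_url, old_attrs = old
    new_url, new_attrs = updating
    if old_url and new_url and old_url != new_url:
        return ("", {})  # schema conflict: empty resource, both inputs lost
    merged = dict(old_attrs)
    merged.update(new_attrs)  # the updating resource wins on duplicate keys
    return (new_url or old_url, merged)
```

This is why disparate detectors pinned to different schema versions are a real hazard: one mismatched detector can wipe out everything the others discovered.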
H: That's fine, except I also have other OTEPs to try to resolve this problem; this is just part one. I think that this is actually going to be a rather serious problem with resource detection going forward. We already have questions about whether or not you merge resources, for example with Kubernetes.
H: There was a whole thing about whether or not the Kubernetes pod IP address is different than the net IP address, and how we merge if I have a resource detector that detects the host IP address and one that detects the Kubernetes pod IP address. How do I merge those two resources together in resource detection? Do I get one? Do I get the other?
H: A lot of people consider these things to just aggregate together, where all the labels go together, which has fun implications for metrics, especially if you're using things like an IP address that can change, right, if it's a dynamic IP. So, in any case, the point being is: I think we do need to start diving into this, and I think users will start running into problems.
H: I'm happy to start driving these OTEPs and pushing prototypes and things. It's more that what I'm asking now is: are there blockers to this resource OTEP? If you're saying we need to be careful with our attention and time, that's fine; that's not necessarily blocking progress. It's more: is this direction bad?
H: Okay, cool. If folks have a chance to review the OTEP... I'm not going to make any prototypes until there's at least consensus and approvals on the OTEP, just to confirm that we are on board with this. So please, please, if you haven't reviewed it, take a look at it. Bogdan, I think you might want to go read the resource merge section over again, because there's a bunch of information on merging resources there.
H: I might need to expand in the OTEP on why that matters, but I think it was called out pretty well.
C: Is there any... and I know I probably shouldn't ask, but in the case of multiple OTEPs, multiple types like this: how can we give people a high-level picture of things that will follow up, or things that are related?
H: That's a great question. For resource, I ran into the problem personally where the context was too large to fit in a document that I felt was consumable by the community, so I tried to target a specific small problem instead of the overall large one. So I have an idea of the overall problem and what I'd like to do personally, but I felt like it was way too much to read for an individual problem. I can take the time to document that; we ran into this.
H: I think with metrics, we ran into this with sampling, where sometimes the context of what's going on is hard to get down on paper, and so I erred on the side of making progress as opposed to documenting everything. Yeah, if you'd like, I can spend more time on a larger, contextual OTEP on the different things around resource that need to happen.
H: That... it effectively talks about that, but not well, and so I can expand on that. That's fine.
C: Okay, I will do that; I will read it. It may be worth considering in the future, when we have this kind of a problem, maybe spending 30 minutes during this meeting and having somebody not discuss the OTEP, but maybe discuss the overall picture of how they see things, just to give people a heads-up about the overall situation. And again, it's not going to be a discussion of "have this interface or that interface", but things like: okay, we have a problem with resource detection.
C: We have a problem with maybe multiple tracers, or multiple plugins, instrumentation plugins, coding against different schemas for span attributes; the same for metrics attributes, and so on and so forth. And all of these discussions: what are requirements for SDKs and what are nice-to-haves for SDKs, and so on and so forth. So I think, anyway, it would be good to have a...
H: Presentation? That's fair. I think what would be more useful, because, again, not everyone can attend this meeting: I'll try to throw together a scoping document that describes the general problems with resources that we're experiencing now, and things that need to get resolved, and it'll just call out problems, not solutions. The OTEP that's there right now is a specific, highly targeted solution to a very specific problem, and it doesn't cover the whole scope. But I'll try to get that together; it'll take a little bit, though.
G: Sorry, one more thing, Carlos: I have a kind of a prototype implementation in Go which does the schema parsing, loading, and validating, and also does the conversion. I don't think it does everything that the schemas can do today, but it's kind of, sort of, a reference that maybe can be used. I'll post a link.
C: A question for you, Tigran: how do you ensure... because I think the schema URL is also very dependent on the way we define the semantic conventions, the language that we use there and the structure that we use there. How can we ensure that everything works on the same data model of the semantics?
C: I see, okay. I may be wrong, but do we have a data model for that schema file?
H: And, to clarify: one of the goals behind this OTEP and the targeted problem fix is to get implementations of schema translations, so we can start actually using them. I don't think we have used a single schema translation yet, and I'd really love to have that in our back pocket so we can evolve. So anyway, we'll get there; it'll be fun.
G: We also need a lot of sources which actually emit telemetry with schema URLs. I added a couple in the collector, to some receivers, and in the detectors in the Go SDK, so I think that's also work that needs to happen, so that the data is there, there is something to convert, right? And I think we're starting to see that coming.
C: Okay. Carlos, you were talking? And... oh.
F: Yeah, actually, first of all, sorry, Jonathan: when we were talking about gzip as default compression, you mentioned that maybe we should consider other compression formats. But correct me if I'm wrong: I think that one of the reasons to use gzip is that it's well used everywhere.
F: But if you have any opinion, anybody else, against this or in favor of it, please just comment on the actual PR. That was it from the gzip part. And then, finally, the logging updates: is there somebody from the logging group?
G: Yes, I'm here, but I don't know where we are. I guess there was a discussion yesterday; John, you were discussing with David, right? John, John Watson, do you know where the Java implementation is? I know there is a draft PR that does a few things, but I don't know where we'll end up with that.
E: And, talking to David yesterday, there's a lot of lack of clarity at the moment about what we're actually trying to accomplish with logging and OpenTelemetry. At the moment... I thought I had a very firm understanding of it, based on my conversations with you, but it sounds like David had a very different understanding. There's...
G: I think that's what we agreed, right? Is there anyone who thinks that we should be building a logging API? ...But you just literally mentioned a logging API. So no, no: no logging API; an SDK API for logging libraries, right? That's what it's called in the OTEP: it says "API for logging libraries", not for end users.
C: For me, the answer should be: we ship it only in the SDK, and whoever is configuring the final application can configure our exporters, for Log4j, for example, to hit the OTLP logs collector. But I think I should come to tomorrow's meeting to catch up with whatever is happening in the logging API.
H: Yeah, I think the elephant in the room is: some people want a structured event logger, which is different than a string logging API that has structured data attached to the strings, and I think that's what came out in that thing. So maybe just directly answering that: is the logging specification (you know, SDK, API, semantics) going to have a structured event logger? Or is what we have attached to traces what we're providing for structured events? Or is that just something we're going to handle sometime in the future, and we're not thinking about that problem right now?
H: And I think people look at the protocol for logs and say: oh, this is structured, cool, I should be able to fire structured events. And then they expect an API to let them fire structured events, and they look at the existing Java logging APIs and say: man, Java didn't really expose this at all; we should expose a structured Java logging API event thing. That is something that I remember happening... I don't know, was it eight years ago, when they did Log4j 2? That was longer than eight years ago.
H: Just don't tell me how long ago. Anyway, point being: I think that is the question that needs to get directly answered. Are we going to build such a thing and, if so, when? Because I think that was the big disconnect.
C: There are multiple questions like this. That being said, what I'm trying to point out here is: probably we should start, and we should make a roadmap, and we should start only with the SDK part that we are interested in, even though the events may not be structured.
B: And regarding the roadmap: I always have a question for C++. We've done two prototypes, and based on the research we've done, we tried to find what's the most popular C++ logging library, and the answer is: nothing; they all have pros and cons. So if someone has an idea about which logging API is well established for C++, we should just go and use that very highly adopted tool, to avoid reinventing the wheel. It seems we need some API for C++ logs.
C: If there is a structured API already present in C++, and that API offers you the ability to hook in a different exporter than whatever it has, then we can hook in our export pipeline that we build in the SDK; I think we should build more hooks like this anyway. So if somebody is using the Google logging API, they should be able to expose this data to the collector over OTLP, and we should enable that; if they are using something from Microsoft, which I bet exists, they should be able to do the same thing.
G: Right, which is cross-platform, has logging, and is extendable. So there are some; it's just that there is no single most popular one, I think that's what you're saying. Yeah, but we can choose one and support it, even just to demonstrate that it's possible, and the other libraries maybe can pick that up and add their own appenders, or whatever it's called in that particular library.
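The "appender" hook being discussed can be sketched with Python's stdlib logging standing in for the host library: a Handler subclass forwards each record into an export pipeline. The `ExportingHandler` and `ListExporter` names, and the minimal record dict, are illustration only; the real OTLP log data model and exporters are not modeled here.

```python
import logging

class ExportingHandler(logging.Handler):
    """Appender-style bridge: forwards library log records to an exporter."""

    def __init__(self, exporter):
        super().__init__()
        self.exporter = exporter  # anything with an export(record) method

    def emit(self, record):
        # Translate the host library's record into a minimal, OTLP-ish dict.
        self.exporter.export({
            "body": record.getMessage(),
            "severity": record.levelname,
            "attributes": {"logger.name": record.name},
        })

class ListExporter:
    """Toy exporter that just collects records in memory."""

    def __init__(self):
        self.records = []

    def export(self, record):
        self.records.append(record)
```

Wiring it up is the same as any other handler (`logger.addHandler(ExportingHandler(exporter))`); an OTLP-speaking exporter would slot into the same place, which is the point of making the export pipeline the only OpenTelemetry-specific piece.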
C: For this... the reason why I'm saying that we don't have a way is because, sooner or later, we may add into these request/response messages some very specific RPC things, like maybe a sequence ID, or some kind of random timestamps that we may need in this protocol, which are not useful when you store this information on disk. Yeah. So you...
C: So the thing is, even for that... so right now, for example, in the collector... what I'm trying to say is: if we add a field to the request object, let's assume a field for a sequence ID, should that still become available through the pipeline? Or should we send only the data through the pipeline, and not send the sequence ID, or fields that are only important to the exporter and receiver and, immediately after you've left the receiver, no longer apply, like a sequence ID or, I don't know, whatever property is there?
C: They want to return the repeated resource spans, for example. Correct, yeah, but they could not return our request object, because it would be kind of inappropriate; even though it's 100% compatible with what they define in their proto, they cannot use the request object in their proto, because it's actually a response to a UI asking for some data. It would be super weird to use that.
G: Ideally, that would be inside the request, but you can't do that anymore; it would be a breaking change. So I don't know.