From YouTube: 2021-08-24 meeting
A: Okay, let's start. We have 10 people; let's see, hopefully more people will join soon. Thank you for joining. Let's go over the agenda items. First of all, John Watson: a patch release for semantic convention corrections.
B: Yeah, so there was a PR put in, I don't know, before the last release, that updated values for mobile, a lot of mobile networking, and it turned out that a bunch of those values didn't really translate into constants that could be easily transformed using the build tools. I put in a link to the PR that corrected that. So the question is: should we release a patch, or do we want to just save this for the next spec release?
A: Yeah, I dropped a message to Tigran, probably the right person to help here. He hasn't answered me yet. What's your feeling on this, John? What do you think we should do?
B: Well, I mean, at least for Java I think we're kind of stuck in a bad place, because if we want to use the existing ones, we have to munge the generator code to generate things that actually make valid symbols. And because we have to do that, a transformation via schema won't necessarily work very well, because we're having to make up values, or at least labels, for values that don't exist.
B: Well, yeah, the current release has the bug in it, but the PR that I linked has been merged and corrects the issue.
B: Because I don't know that anything has actually implemented any schema transformations anywhere, and maybe that's okay, because many conventions are still not labeled as stable. But it feels like we're kind of continually weaseling our way out of the issue here, and that we're not really getting ourselves closer to stable semantic conventions. Anyway.
D: Yeah, it feels to me like probably the right thing to do, then, is to do a patch that has a schema translation: have the translation information in the schema from 1.6 to 1.6.1, and then just nobody ever releases a 1.6.0 version. That way the translation information should be unbroken if, say, someone tries to go from 1.5 to 1.6 to 1.6.1 through the translation steps, but no library would actually exist to generate the 1.6 values.
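The approach D describes maps onto the telemetry schema file format. A minimal sketch of what a hypothetical 1.6.1 schema file could look like (the top-level fields follow the schema file format; the transformation entry is illustrative and left empty, since the actual attribute/value mappings would come from the corrective PR):

```yaml
file_format: "1.0.0"
schema_url: "https://opentelemetry.io/schemas/1.6.1"
versions:
  1.6.1:
    all:
      changes:
        # Illustrative: map the broken 1.6.0 values to the corrected
        # 1.6.1 ones, so a consumer can walk 1.5 -> 1.6 -> 1.6.1 even
        # though no SDK ever emits telemetry stamped 1.6.0.
        - rename_attributes:
            attribute_map: {}
  1.6.0: {}
  1.5.0: {}
```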
E: Can I ask a question, for John? Let's pretend you just don't release semantic conventions for that version. You should still be able to do transformations to and from that version to a Java version, correct? It's just that you can't express the code, but the values in OTLP would be the same.
E: And be broken, right? So I guess what I'm suggesting, though, is: if you don't release a semantic convention library against that version, and you just release against the next version, you would still be able to convert telemetry produced against that version to the versions that Java does support. So you would just skip that version in Java. Is that an acceptable thing? Yeah.
B: Okay, and we'll just wait till 1.7. Yeah.
E: I think, given that we have schemas, that rewrite should actually be included in the schema, and we should be leveraging that going forward for these kinds of changes. We should not just flip these things after they're submitted. So I don't think we should merge semantic convention changes that change values at all without an associated schema that explains how those things move.
B: We released a version of the semantic conventions that was literally just broken; the values were just nonsensical. I think we would only want to release a patch without a transformation if a transformation weren't possible because of, I don't know, something that someone...
A: Okay, I don't know; I can follow up on that with Tigran. It's probably in your interest as well, yeah.
G: We should get a patch out the door as quickly as possible, right? And just ensure, absolutely make sure, that all the maintainers are aware they shouldn't release against 1.6.0, including maintainers who are not on this call.
G: It does seem like a good idea to do a dry run and start actively thinking about these schema transformations and how we're really going to do this, though, because that does seem like something that's coming at us, and I assume we haven't built out the infrastructure we need to handle that kind of stuff. So maybe it's kind of lucky; maybe a little bit of a fright here.
D: Yeah, I haven't done 1.6.0 yet in either Go or the Collector, so I would be perfectly fine with jumping straight to a 1.6.1, if 1.6 is where these were introduced. Then there's a very low likelihood that the values ever appear, but we have the schemas in case they do.
D: You know, yeah. I think the Collector is a place where a schema transformation processor needs to be built anyway, so now is probably a good time to try to get that on someone's plate and use this as...
A: Okay, if that's it for this issue, let's move to the next one. Yeah, there's this PR I mentioned yesterday. Let me share my screen for a second.
A: This is something we left out of 1.0, and it's basically for specifying the OTLP exporter protocol for traces and for metrics. Josh made a fair comment there about maintainers: we really need their opinion.
E: The fear that we had previously was that you actually wouldn't be able to communicate with each other with those values, so what we did was just reserve them and defer. Now, I did a quick investigation on this. Because I'm the one blocking it, I figure it's fair that I should be the one to help resolve it, and from what I can tell, most people are using the OpenTelemetry Collector's ability to consume OTLP as their notion of whether or not they are speaking the correct protocol.
E: So what they do is they take the Collector, expose an HTTP port, and try to speak to it with proto, or with JSON, or they try to speak to the gRPC port. That's kind of how we're currently defining the compatibility layers in practice.
E: So, we don't have a formal specification of exactly what http/grpc or http/json mean in terms of HTTP/2, HTTP/1, all that kind of junk. But if we say "whatever the Collector implements", let's just take that and specify it as the thing this term means; I think that could work across the SDKs.
E: But that's why this fell apart previously, and that's the only thing I'm calling out: this got blocked before, and I think we need to resolve that before we move forward. But in terms of how we move forward, it looks to me like everybody is compatible with the Collector. So if we just specify what the Collector does for these various symbols, I think we're fine.
I: Yeah, a comment and a question: are all SDKs supposed to implement these? Because in a number of ecosystems (I'm pretty sure this is how it works in JavaScript, and it's the way we intended it to work in Ruby) all these exporters are packaged separately. So it's really the package that determines the transport, and an environment variable would have no effect. If you for some reason included, you know, three different packages (the gRPC exporter, the OTLP-protobuf-over-HTTP exporter, and then the JSON-over-HTTP exporter), then...
I: ...it makes sense, but there would be no reason to actually include those three packages in your project; you would only pick the one you were interested in. So I feel like there needs to at least be some caveat that if you don't have a package that supports multiple transports, then the package specifies the transport.
E: So the caveats are already there: you only have to implement one of them, and the rule is that if you use this as an environment-variable configuration option, it needs to abide by this set of rules. So when we specify it, it'll be: if you use this environment variable to configure your exporter...
E: ...it has to be this kind of exporter with this kind of compatibility, right? But if, for example for Ruby, it doesn't make sense, if you don't have that environment variable configuration, then that's kind of a different story. And there's a question of SDK configuration that I think is a little bigger than this particular issue, which we should talk about. That said, for Java, for example, I know that if you configure things manually...
E: ...none of the environment variables take effect, right? I think that's also true in C++ and a few of the other SDKs I've looked at, so I think that's in line with other SDKs. And if you look at the spec, you'll see there's a caveat that you don't have to support all of these, and we don't expect people to support all of them; but for the ones you do support, that variable needs to be this, and it needs to be exactly, you know...
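For concreteness, the environment variables E is describing ended up in the spec roughly like this (a sketch; at the time of this meeting the protocol values were still the open question, and the endpoint value is illustrative):

```shell
# Pick the exporter implementation.
export OTEL_TRACES_EXPORTER=otlp

# Pick the OTLP transport/encoding; an SDK only has to honor values
# whose exporter it actually ships, but for the ones it supports the
# value must mean exactly what the spec says.
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf   # or grpc, http/json

# Illustrative endpoint (the Collector's default OTLP/HTTP port).
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
```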
I: Sorry, go ahead, Matt. Oh yeah, I'm just going to say it's not totally clear from looking at the spec that this is how it's supposed to work; I can kind of maybe read into it that way. I guess what I'm saying is that there's no problem with supporting the environment variable, it just doesn't make sense, because the choice is already made: I've included this package.
I: I've only included the gRPC package in my project, so that's the only exporter; there really isn't a choice of exporter, and if you set a different protocol, there would be no way to actually make that a valid configuration.
G: Is it possible... I mean, certainly in at least some languages, maybe not Ruby: while a user doesn't need to have more than one exporter package installed, they might have more than one installed just because they're shipping a distro that has several options baked in, for example so that a company can ship one distro across their entire fleet and then flip some switches using environment variables or a config file to adjust what a particular service is actually using.
I: It's technically possible; it just seems like a slightly unlikely case, but it is something that's possible and could be supported. But I guess you would have this weird thing where these environment variables really depend on a prerequisite of you pulling in all these packages to begin with; otherwise they still won't have any effect.
G: Yeah, I mean, for example, a more straightforward reason someone would do it is: do you want to be using the OTLP exporter versus the Jaeger exporter versus some other exporter? And maybe you have several of those installed because you've installed some kitchen-sink SDK.
G: Obviously there's a reason why people are flipping these options, right? Jack is on the call; I assume you're pushing this in because you're in a situation where you need to pick between JSON and proto.
G: I could see this being a thing in Node, for example. So I kind of wonder if we need to be taking a more coherent approach to configuration and kind of wrangling all of this. It's the thing we've talked about in the past that seems to be coming at us: we're hitting some kind of complexity limit with the amount of configuration available.
E: I agree that we should be looking at that, and hopefully we can find time to address it; I think a lot of us are swamped with other things, but I totally agree. One thing I do want to call out, though...
E: I want to divide this a little bit. The SDK setup Matthew's talking about for Ruby is, I would argue, the advanced setup, which is the first thing we released: a user fully customizes every single package they have and programmatically builds things. If the spec does not allow that, then there's an open question of whether we can change the language in the spec to make it obvious that that's allowed. That, I think, is Matt's concern, and I think it does need to be addressed.
E: So if it's not true today that you can build a highly customized setup, an advanced person fully pulling in every single module they want, then I think we should fix that. The second question is: should we have these out-of-the-box, baked-in tons of options, configuration parameters that customize this thing to hell? I think the answer to that is also yes; I think that's how we make this nicer. I recommend everyone go look at the auto-configure package in the Java SDK. I really like that thing.
E: I also don't like it, because I think it needs more structure from the spec, along the lines of "here is what configuration of an SDK should look like", as opposed to everything just being a Java property. So I think it's both a great example and an example where we can do better, but I'd recommend everyone look at it, because it's a direction we can move in that I think is powerful, and it's what Ted, I think, is kind of hinting at here.
J: So, Jack here, from New Relic. What Matt was suggesting, where Ruby kind of introspects which packages are available and decides which exporter to use based on that: Java's auto-configure module doesn't do that anywhere. It depends on explicit configuration everywhere rather than implicit, and that's kind of, I think, the question being teased out here: is it okay for the SDKs to have implicit configuration based on introspection, or does everything need to be explicit?
J: I think if I were forced to choose, I'd rather have it be explicit: there's an environment variable that asserts which exporter you expect to use, and if that package isn't available at runtime, then you throw an exception and fail fast. But that's kind of, I think, the question being asked here.
I: I would like to clarify that that's not actually how Ruby works. It's not introspecting anything; whatever package you've included, that is the exporter you're using. It doesn't try to make any decisions for you. And yeah...
I: I think it's actually a moot point for Ruby, because there's only a single package for now, but I know that as more packages are introduced, there's not going to be a single package that gives you multiple exporters; you would have to pull them in one by one. And I'm pretty sure this is the case for JavaScript today: there is an exporter for each one of these protocols, but I think it's highly unusual that anyone would ever pull in more than one. So it's unusual.
C: I guess that's kind of my question about how useful this environment variable is, if it's literally the package, whatever is in your package.json or your Gemfile, that determines the exporter; then this is just kind of added, unnecessary configuration. Actually, from the Collector perspective, where it usually has all these things packaged, I do see where it is somewhat useful. But I guess my point is that not all of the SDKs follow this same pattern.
J: Well, I go back to what Ted was mentioning about the bundles that people could publish. If somebody's publishing a bundle, then there are many exporters available, not just one, and so which...
D: We've seen this with some open source applications. BuildKit, when they migrated from OpenTracing to OpenTelemetry, ended up adding a mechanism for registering exporters and then making them available via configuration through the environment, so that several exporters could be built into a shipped binary and then, at runtime, the user could select which of those to use via the environment variable. That's a mechanism I would like to see imported into the Go SDK in some manner.
G: Yeah, wire protocols there may be a clear example. I think, Anthony, that's a great example of why someone would want to do this: they're shipping a service, not just an application. But even within an application, you could have several different wire protocols that you would want installed.
G: For example, you might be using the AWS distro of stuff, which includes, you know, the Amazon wire protocol; you may be trying to migrate between B3 and W3C; and so you would need multiple protocols installed, and you would need to be able to control through a configuration variable which ones were actually running. It makes me feel like there needs to be some way of doing this, maybe just more generic plug-in registration.
G: That's just a thing we haven't really defined in the SDK: for every kind of plug-in we offer, there's a possibility that the user could have multiple plugins installed, or available as packages, but at runtime may want to configure not just how those plugins work but which ones are actually turned on. So there needs to be some kind of registration mechanism here, and I don't think we've defined that anywhere for any pluggable portion of the SDK at this point.
J: And, Ted, one little additional piece of context for your example of choosing the wire protocols, Jaeger, Zipkin: the spec today does support the ability to have multiple trace exporters, comma separated, and so that implies that you would have multiple exporters available in your package, right?
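The comma-separated form J mentions would look like this (a sketch; each listed name must have a corresponding exporter package available in the application, and `zipkin` here is illustrative):

```shell
# Two trace exporters running side by side.
export OTEL_TRACES_EXPORTER=otlp,zipkin
```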
K: Yeah, and I'm almost up against the deadline, because I said I'm going to start thinking more about this in the second half of August. So it's probably the time, and it's because of Josh.
G: So I think there are two action items here. It seems clear we need to get on thinking about configuration and plug-in registration coherently, and get something coherent put into the spec, to make sure that what's going on in Ruby and in Node and in Java has some coherency, and that whatever we decide is going to be workable across languages, or leaves enough room that languages can differ in the ways they need to.
G: But then there's a more immediate question: should we be blocking Jack's PR for this particular item right here? Jack, how urgent is this? Are you running into something? I'm curious what motivated this particular change.
J: So this was just motivated by the fact that we have some customers interested in using OTLP HTTP, and we have that available now in Java. And because we don't do any sort of introspection of which packages are available to choose which exporter we use...
J: ...you know, for example, the OpenTelemetry Java agent requires some sort of system property or environment variable to set that, and so that's kind of what's driving this.
G: I think what Matt is saying is: you're using OTLP, and now you're picking which protocol within OTLP, versus: are these just separate exporters? We already have an environment variable, I believe, for configuring which exporter you're using; should these just be values for that environment variable rather than creating new ones? Does that make sense?
J: Yeah, yeah. The distinction between those two options is clear, and I think they would equally satisfy the need I'm coming from, but...
G: Could we go in the direction of adding these as values to the existing exporter selector? I think longer term that'll make sense, and I think that's kind of what Matt's saying: in a lot of languages these are all actually separate exporters. It's not an OTLP exporter that has multiple transports and multiple protocols baked into it.
J: Yeah, I think the one difference there is that these three new exporters, or two additional exporters, otlp http and otlp http json, share a lot of configuration properties, basically all of them. So you need some way to say: although these are different exporters, they share the same configuration options, which are specified over here.
A: I have something to say; it's a very small thing. I was talking to Matt in the past, and now as well: would this be a good compromise? Say there is a set of environment variables, which includes these, that are not expected to be supported at the SDK level; they are oriented towards agents, bundles, or distros. So they're not supported by SDKs, because it probably doesn't make sense there, or they're at least optional. Does that make sense?
G: I mean, we implicitly ship a bundle, quote unquote, right? There's whatever you would call vanilla OpenTelemetry, and there are going to be OpenTelemetry distros where we ship you something. I do kind of feel like the SDK should be managing this stuff.
D: Yeah, I think the step that's missing, though, is that registry, and having the SDK then use the environment configuration to figure out which one to use. In Go right now these are all separate packages, and we've very consciously decided to keep them separate packages and modules, to minimize the import footprint for users who want the explicit configuration options and only want to pull in, say, something that's not gRPC, not pulling in protobuf, so that we don't have a very bloated SDK for them. But for the users who want to say "give me everything, bundle it all in, and let me figure it out at runtime", that, I think, is the thing we're slightly missing the story on.
G: Yeah, I'm a little terrified of the idea that there'd be a whole bunch of different registration systems built around OpenTelemetry for plug-ins. I kind of feel like, if that's a need, we should take it on and just have one way of doing it. And I would personally like to keep distros as lightweight as possible: have a distro really just be a way of bundling together plug-ins for an SDK, and not have distros grow a lot of other necessary components.
G: I think it generally works like that; pretty sure it works like that in Go. What seems to be missing here is a way for someone to have more generic setup instructions, where the user is able to register multiple exporters, but registering the exporters doesn't automatically cause them to run, and you don't necessarily have to pass all the configuration options to those exporters or other plugins in code.
G: Instead, what you could do is pass an SDK a bunch of plugins that it has to install (you probably have to do that in code in most languages, or at least some) and then a configuration file that tells it how to set all this stuff up. That's the bit we're missing.
G: I forget who you said did it, but someone's already had to build their own registration system, and we've kind of done this at Lightstep. So I guess that's my concern: I think there should be one, and we should bake it in, because that will allow us to have one universal configuration file, right?
G: If we don't figure out how that works, it's going to be this kind of weird mishmash where we're defining the environment variables and the configuration file options but leaving it to other people to figure out how they should work. That seems like a weird middle ground, and I don't want there to be 18 different flavors of OpenTelemetry config files.
G: So I think we should take this on. I know we're busy with metrics and other things, but this feels like it's getting more critical, because we're seeing more and more uptick in usage of OpenTelemetry, and this is one of those practical needs in production usage: being able to manage a bunch of stuff through configuration files, so that operators can install something generic and then have some control.
G: So I think we should wrangle this in September, at least get a plan together so that maintainers can have some expectation and start looking at their SDKs and thinking about this stuff. In the short term, for this particular PR, I would like to see it come back as different exporter options, with the caveat that some of these exporters share the same environment variables for configuration.
G: I guess that could be weird if you wanted to install both proto and JSON and give them different configurations; maybe you just can't do that for now with environment variables. But it would be better, I think, to have these as different exporter options, because that's actually how it's implemented in a lot of languages, and then it will just get folded in with our ability to manage multiple exporter options at the same time.
K: By the way, if I were, let's say, HBase, or any product that ships a binary, I would not link all the exporters. I would just give people only OTLP and say: hey, you have the Collector, you can deploy it as a sidecar if you want and send to any format you want. I'm not going to maintain all the dependencies and build all those things into my binary. But I'm not everyone, so it's good that we have other people who want to take on that burden of supporting multiple dependencies.
G: Yeah, I kind of agree. I don't understand why someone would want both http proto and gRPC running for OTLP at the same time, but I can see someone wanting to run OTLP and Zipkin at the same time, and then it's just a question of whether these are all individual exporters.
K: You can do that with a Collector: you can offer people only OTLP and then multiplex in the Collector, if you run one. I think we are trying to reinvent half of the Collector in this config; that's why I'm very worried about what we are trying to do there, and why I'd try to keep it minimal at the library level, or the SDK level.
G: So, just semantically, this should go into the spec as options for which exporter you're using, not as a new environment variable for which protocol your OTLP exporter is using. Then that's actually a more minor question than whether we should add a registration system and all of that stuff.
G: Yeah, and you don't want them... they can't be the same exporter. The whole reason we have these different protocols is that, you know, we have http proto because people can't always take on the gRPC dependencies, right? So it's very common for these to be totally separate; that's actually kind of the reason we have HTTP and gRPC, dependency management. And the reason we have proto versus JSON is just that...
G: ...not all runtimes support proto very well, so you need a JSON option, with the browser being a big example there. So it just comes down to these things being separate exporters, not one exporter with multiple options.
G: So maybe I was just confusing the issue by bringing up the whole wider configuration-and-registration question. I do think we need to sort that out, but this particular issue is really just that these things are separate exporters in most languages. So can we list them as options under which exporter you use?
J: Yeah, yeah. If that's the consensus, I can definitely adapt that PR and switch it so that these are additional exporter options under the existing environment variables. Sweet.
E: Also, so it's clear: my comment is only about specifying what the values actually mean in terms of protocol, and how to validate whether you speak that protocol correctly. All I care about is whether http proto means the same thing to everybody and talks to the Collector appropriately. That's all I care about, so I can remove that once it's up to date. The rest of this discussion was awesome; sorry for opening that can of worms all at once.
G: I agree that if every SDK ends up turning into a mini Collector, that's probably just a huge amount of work, which is what Bogdan's probably scared of. But we do need to sort all that out, and I think sooner rather than later.
G: So hopefully we can spend some cycles in September, in this meeting and other places, thinking about that stuff. Bogdan, if you want help with the proposals on this, I'd love to work on it with you.
A: Okay, metrics update. We had a metrics update yesterday from the metrics group. Reilly presented some updates, and he's not around; I think he said he's not... yeah.
E: Yeah, sorry, I added that, and I know Reilly has an all-day meeting, so he can't come. If you can give us a quick update of what he said in the other meeting, that'd be awesome, because I don't want to say it differently. But basically, there's one specification PR out around exemplars that has had two people look at it and one approval; definitely not enough to merge, but it could use a few more eyes. If you don't care about exemplars, maybe that's a sign. Anyway, please take a look.
E: Yeah, I was just calling out the next two pieces of work that were deferred from the existing exporter PR, which I think has enough approvals to get submitted; but if you haven't taken a look, please take a look. The next two expected PRs are one around a pull-based exporter specification and another around a multi-exporter specification.
A: Perfect, thank you so much for that. By the way, the update from yesterday, going through the document now, is that the API is feature-frozen, language clients should work on that, and the SDK view/aggregation part is merged.
L: Thanks, Carlos. I put a note there about the two OTEPs that have been under review for a few weeks now. We continue to iterate there; I've gotten good feedback and applied the suggestions. We keep cutting this back to reduce its scope. At this point, I think there's very little change...
L: ...that I can see us making without completely dropping these proposals. We've stripped it down to a single field that would go in the span, and a trace-state specification for propagating both head probability and consistent randomness. This is, I think, the best we can do without rewriting the W3C spec for propagation, and the best we can do in balancing both the cost and the complexity of this support.
L: I'm referring to the proposal's decision to limit to powers-of-two probability, which really simplifies things. So I would like more reviewers. Just like Josh said: if you don't care about sampling, that's a sign, but I think you do care about sampling. So please help us approve these OTEPs. Thank you.
A: Thank you so much for that. Finally, if there are no more comments: Przemek, on adding a receive timestamp to the log record.
M: Yeah, I just want to bring it to your attention; this is a small item that came up during one of the recent Log SIG meetings. Long story short, log records are slightly different from metrics or spans, because they can come from third-party applications or from system logs, which means that typically they will be parsed to retrieve the timestamp. Currently a log record has only one timestamp value, which means that when the parsing fails, we lose the information about, for example, the receive timestamp, which could be a good fallback for that parsing.
M: So the idea is to have two timestamp-related fields in the log record: one with the received timestamp, and the other with the associated event or log timestamp, which can either be parsed from the log record, or be attributed directly using some different means, or be empty. So that's the... can you...?
M: So, with logs, things can go wrong easily. When you have some incorrect timestamp-parsing rules associated, you can, for example, mix up the year with the month, or the other way around, and get a totally different time. So it might be good to have the receive timestamp. But yeah, like I said, this is up for discussion, and I want more eyes looking at whether this makes sense.
K: I don't think logs are special, but probably some experience that you have proves this is a different case. Unless that is convincing, my default answer would be no.
M: Sorry, yeah, and that's fine. In my eyes, logs are special in the sense that the range of valid timestamps that a log can have associated with it is much broader than for, let's say, metric scraping, where you will always be scraping something more or less recent. With logs you might be reading logs that are days old, for various reasons, and then, when parsing goes wrong, you can be, I don't know, months off, or things like that. So it's better to have two sources of information there, right?
M: Yeah, that's true.
K: The other thing we may consider, probably at the message level, would be the receive time for the entire message, or for the entire ResourceLogs, ResourceMetrics, or ResourceSpans (I think it's called). That may be information that I can see being helpful and consistent for everyone; but I don't think having a one-off timestamp, or another timestamp just in the logs, makes sense, and it deviates from the consistent view that we have, right?
M: Yeah, it's just one more source of information. And on the thing you brought up, that we might think about this in terms of metrics and traces as well: if I recall correctly, we had a discussion several months ago about being able to fix clock inaccuracies from systems that, for example, are not running NTP, and there was even an idea of extending the protocol to include information about the timestamp from the sender.
A: By the way, time is up; you may consider discussing this offline. Thank you so much for the discussion. Talk to you soon, ciao.