From YouTube: 2020-08-06 Go SIG
A: Everyone, Tyler asked if I would lead this meeting today, because he's possibly not going to make it, and so I have just started to edit the notes I'll show.
A: And maybe we'll give everybody a couple of minutes? If you have agenda items, please add them. I was just going to run through PRs and issues, because I am a little out of date on Go, simply because, as you may know, I've been pulled into a lot of metrics discussions in the last few weeks, and that's something I could put on the agenda.
A: Okay, if you're here, maybe add your name to the notes and we'll get started.
A: All right, let's kick it off. I am looking at the list of open pull requests, and we might as well go in reverse order.
A: There's a PR, there's an issue about this (I think maybe not linked here), but there have been some tests added in the metrics code for this SDK that are not usable outside of the SDK. It would be very nice to be able to do that, especially for the contrib directory.
A: So it looks like this is pretty new. I haven't had a chance to look at it yet, so I will offer that I can look at this later. It's something I asked for; there's an issue on it, and we'll run into that issue in a minute.
A: So I don't propose we look at this now, but I will check that later. Let's see; actually it looks like almost everything here is about metrics, so I might as well just keep talking. This is a nice idea, to add some pretty-printing to the stdout histogram exporter for metrics. The problem we had here was some inadvertent changes in the go.mod files, so I think we're just going to wait for that to get fixed. I've seen this happen myself.
A: I didn't fully understand how it happened myself, but when I see something like this: this is a dangerous, risky change to just inadvertently slip in, putting in a change of protobuf library, for example, and yet it's maybe not clear that that's a change, because this is an examples file, for example. Anyway, this PR is impossible to review, because all it's doing is changing go.sum and go.mod files. So I think it's a mistake, and we'll just wait for that to get fixed.
A: So if you have been following the metrics development for a while, you'll know (I'm not sure why my page is not loading; okay) that we've been developing this SDK; the Go implementation is, roughly speaking, the official prototype that we have, and we've sort of landed in a place where we have several components for this export pipeline. It includes an accumulator, which is the front line, which is sort of integrating, over a short period of time, all the measurements that arrive.
A: This is about the processor, which sits in the middle between the accumulator and the exporter. There was an issue filed, number 862, pointing out that there was some code complexity that looked artificial, and we were sort of in a place where we had merged this change before, where it just had to move forward or else we were stalling.
A
So
there
was
a
pr
which
I
probably
could
just
find
this
way
number
840
that
created
this
mess
and
partly
this
is
just
going
to
clean
up
the
mess.
A: So I don't want to walk through that right now, but it's a large change we made so that the processor can become responsible for deciding and implementing the conversion from deltas to cumulatives and from cumulatives to deltas. And while I was caught up working on that, I had made a decision that, in hindsight, was pretty bad.
A: So this is what fixes it, and there's sort of an esoteric corner case that was being considered when this happened. It has to do with whether you would ever establish an export pipeline with multiple accumulators providing data into one processor.
A
And
you
can't
imagine
why
that
might
be,
and
the
protocols
that
we've
built
like
pro
tlp
are
set
up
for
it,
because
you've
got
sections
of
the
data
which
are
for
individual
resources
and
for
individual
instrumentation
libraries.
So
if
you
did
have
different
resources
attached
to
different
accumulators
and
then
were
to
feed
all
that
data
into
a
single
processor,
the
question
is:
what
behavior
do
you
get
and
this,
in
my
opinion,
corrects
the
behavior
there
it
had
to
do
with.
A: I feel that this is very esoteric for this conversation, but I'm just going to finish, because I need this and I hope that you will follow along. What this does is: as you may know, the OpenTelemetry API spec has observer instruments; these are the ones that get executed through callbacks.
A: The spec says that you can only have one value per distinct label set. That's sort of in keeping with the observational nature: you can only observe one value per label set. What I did in the prior PR, number 840, was make it so that the processor would sort of follow those assumptions even when data arrived with the same label set from different accumulators, and in hindsight I think that was the wrong behavior.
A: The reason why is that we've always had a vision, as part of OpenTelemetry metrics, one that came from OpenCensus metrics, that you could configure not only the aggregations that you want in the process (say you want a histogram or a sum, for example); you could also configure which label sets were used for the aggregation. And if you chose a label set that was narrower than the actual set of labels present in the measurement...
A: ...those would be aggregated away in the process. So if we want to aggregate away data, reducing label dimensions in the export pipeline, the processor is the place where we have to deal with duplicates now. In other words, the accumulator is going to say: I saw several measurements, they all had distinct label sets, but the aggregation we want will reduce one of those dimensions away; therefore I have multiple points. So now I have multiple points, and I don't want that.
A: I hope this is not too much depth or too much detail, and if I've lost you, so be it; I'm going to have to discuss this in front of the metrics SIG as well. But this is step one in getting us to a world where we can configure which dimensions we want to export, and it'll behave correctly. So the test case, just in case that was confusing low-level detail: the test case is, you have, say, a measurement which is being made through an observable instrument.
A: So you've got a callback; the callback fires, and you're going to take a bunch of measurements. Let's say you have a metric which is CPU temperature, so you're using a ValueObserver, and you have 16 cores on your machine. So when this callback executes, you are now going to execute some reads of data from the proc file system. Let's say you get 16 temperature measurements.
A
You
can
now
label
them
with
core
number
and
any
other
values
that
you
want
and
output
those
observations,
so
you
put
in
16
different
observations
with
distinct
keys,
distinct
label
sets.
Now
I
configure
aggregation
that
says
I
want
to.
I
don't
want
that
cpu
core
id
it's
too
much
dimensionality.
For
me,
I
want
to
compute
an
aggregation
of
cpu
temperature.
The
aggregation
we've
defined
is
min
some
count.
We
could
also
choose
one
of
the
other
aggregations
for
this,
this
type
of
data.
A: So you might configure MinMaxSumCount, and now you get 16 points into the accumulator. The accumulator reduces that down to 16 points that have the same label set, because we've dropped one dimension. Now the accumulator outputs 16 points to the processor, and this change makes it so that the processor will actually aggregate 16 points, as opposed to saying: well, I see 15 points with the same label set, and I'm considering those duplicates; there was duplicate suppression. Identification of duplicates must happen in the accumulator; the processor...
A: ...just says: I see more points, and integrates them into my output. I did include another PR that's connected to this one, not meant for merging; it's really just to show the end-to-end behavior which I'm after. This includes three parts. One is the PR that I just showed you, about the processor behavior.
A: There's a change in the accumulator. What I did there, sorry, the change in the accumulator is (well, I put it out of order) that I had added support in the label set to have a filter. So you configure, in some way, a predicate: a string goes in, and it returns true or false. If it returns true, the key will be kept; if it returns false, the key will be dropped.
A: We're doing this as early as possible in the system, so that we can avoid the cost of dimensions that we're not using. So once we filter those, we get points that appear identical after filtering; but before filtering they're distinct points. So this allows the SDK to be configured with a filter that will then perform the end-to-end behavior that I just described, which is: 16 CPU measurements come into the accumulator, and the processor outputs a MinMaxSumCount of those 16 measurements.
A: This seems like it's necessary to preserve some sort of performance story, so we'll be discussing that. In any case, I think this fixes a bug, and it's also cleaning up some code. So sorry for that lengthy digression; it is, however, a Go piece of code, and we need to get it reviewed. So, any questions on that? I apologize for the lengthy explanation.
A: I know Tyler asked for it, so he might check the recording, and there's some benefit in me having stated all that. Okay, so now we've looked through these two; everything else is older than a week, which means it might have been discussed last time. I think there was a request here for some change.
A: Cool, great, so we'll wait for something to happen on that PR. And then, as far as this older one about structs and arrays: it's an issue between the spec and what's actually implemented. I've sort of stayed out of it. I actually disagree with the spec, so I sort of like what's been done here, but I don't have the energy to change the spec, so I'm going to suggest we move on from that.
A: I also, since we're 15 minutes in right now, want to state that I have to leave at 10:30 today for another meeting about metrics that's pretty important. So I will continue to talk, unless anyone wants to take over from me, from now until 10:30.
A: So, let's see, some of these issues aren't new to me, but I didn't see this one last night. This is a crazy issue, it looks like.
A: Yeah, I support that. It seems like we probably need a lot more integration tests, with all the SDKs and the collector, and I know the collector team has been working on that. Cool, so yeah. But this sounds like an issue that should be filed for the collector more so than for the Go repo; I'm...
A: ...not sure where the integration test should lie. Actually, I should take that back: it's about testing both together. I would just support that, for sure.
A: This one, I was going to say, looks like a duplicate, except one is about traces and one is about metrics; so 872 is about metrics, and it looks like this one is about traces, but they're pretty similar to each other: we have some facilities for testing inside the library that might be very useful outside the library. If we don't provide them for external users, they're going to end up re-implementing stuff; it'll be duplicated, or not good, or something like that. So that's available for work. Actually, let me remark...
A: I would say that we can remove duplicates; hopefully Tyler doesn't disagree. This one's one that I filed, and it's not an urgent matter. I said something here which maybe is controversial.
A: I've been in discussions with some people who are interested in possibly using the otel-go SDK as the endpoint in a pipeline of the collector, because we've already implemented the semantics that we basically want: we have this pretty high-performance label set implementation that can be used as a map key for building up a set of metrics, and it would probably be a shame to re-implement that code.
A
However,
it's
not
worth
complicating
the
distribution
of
the
hotel
go
just
for
this
one
package,
so
just
some
discussion
about
what
we
can
do
there.
I
think
we
probably
should
just
move
it
to
internal
until
there's
a
really
strong
demand
for
it
in
the
collector,
but
it's
it's
not
too
urgent
and
and
someone
I
will
probably
follow
up
on
this.
If
no
one
else
does.
A: So we've got another six or seven issues here that will get us through the stuff that's new in the last week; maybe that's the point where I have to drop off. rghetia is asking for binary propagator support. This one, I believe, technically should be considered a prerequisite for GA, because it was an OpenCensus facility, and we've promised to retire OpenCensus.
A
That
said,
I
don't
believe,
there's
a
lot
of
interest
for
it,
so
I
would
be
interested
to
see
whether
someone
here
in
the
go
hotel
go
community
is
really
excited
about
this
or
if
somebody
from
google
wanted
to
help
out.
That
would
be
great.
This
seems
like
it
should
be
done
by
somebody
at
google.
Maybe
I
should
save
that.
A: Let's just CC Morgan and see if he can find somebody. Well, does anybody have anything to say there? I don't find it to be that compelling, but I shouldn't shut it down either. So, too many trace packages: yeah, he's right. I feel like the point that he's making here is that these issues impact the developers who work on the SDK and the developers who work on the exporters, but I don't think this affects the user much.
A: So it's sort of an issue that I've been willing to ignore. And, for example, I think Bogdan is, well, he's a Java programmer by nature, so I'm not saying he's bad at writing Go, but he's not following the practice very well. We've universally renamed this particular package at the import site, to traceexport or to metricexport, because you almost never use both the metrics export and the trace export in the same package. So I've always renamed it, whatever.
B: Go already provides facilities for dealing with this, and the privilege goes to the package authors to choose names that they like. You see this also in the Kubernetes project: there are tons of packages called v1 or v1beta1, because they're along a generated path, and it's common practice to just adopt preferred renamings for those that people use.
A: That's the idea that I have, anyway. This is not to say no; this is just to say I don't see this as being super important.
A
Moving
along
this
one,
I
think,
is
probably
worth
everybody
thinking
hard
on.
There's
there's
a
method
named
with
spam
in
our
api.
I
will
take
credit
for
putting
it
there
long
ago,
and
this
was
when
we
were
first
prototyping
with
what
happened
was
in
in
the
tracing
api.
I,
instead
of
starting
from
a
body
of
code
from
open
census,
I
started
from
the
spec
and
and
from
scratch.
Roughly
speaking,
I
was
familiar
with
this
interface
called
pprof.do.
A
If
you're
familiar
with
how
the
runtime
label
profiler
sorry
the
label
support
in
the
runtime
profiler
works,
as
you
say,
prefabs.do
oops,
let's
just
find
another
name,
do
somewhere
here
and
this
this
is
a
contextual
method
and
go,
but
it
requires
you
to
be
called
in
in
line
because
it's
doing
crazy
stuff
underneath
the
coppers,
you
can't
get
the
behavior
of
prop.due
without
calling
a
callback
and
letting
it
wrap
around
your
your
call.
A: So if you don't have a built-in WithSpan of some sort, you can't automate this type of attaching of context labels to the runtime profiler, which is nice to have, but I don't think it's essential. And so that's how it got there: I was thinking about pprof.Do, and I was writing from scratch rather than from the OpenCensus code. I'm kind of happy to remove it, and I can see that others agree. So let's do it, unless anyone wants to say anything else.
B: Yeah, so the first time I ever saw the signatures from the API, this was back in March.
B: The first thing I saw was WithSpan, and I said: oh, okay, now I get it. And then I stared at it and I said: wait, what? And I started figuring out how you grovel around to actually get hold of the span. And so I had asked Liz Fong-Jones about it, you know, and she had a defense of the current interface that I didn't really buy. But I think it also just raises questions that the documentation doesn't answer; like, I expected it to maybe do things like catch an error that my function would return and automatically, you know, instrument the error that arose.
A: I did too. I suspect that there was a draft in the code that did, long ago. But I think that if you follow some discipline (and it does require discipline, because of the way defer works in Go) you can catch those errors. The thing is that you can't wrap your defers and get recover behavior. Sorry, this is not very exciting: the behavior between deferred execution and recover blocks is limited to a single depth, essentially, so that...
A: ...if you say StartSpan and defer End in the same block, as opposed to passing that function to some other function that's going to get deferred, it works; it won't work the other way. But I think if you follow a pattern which is pretty standard, then you can catch those errors without the WithSpan functionality, so it shouldn't prevent us from doing proper error handling. It just requires a little bit more discipline that users have to understand, and you still can't do something like pprof.Do. But I think, as a practitioner...
B: I think, in its current state, it's malformed. I could support adding a span argument to it. I would be more supportive of just removing it if it isn't privileged; like, I haven't looked at the implementation for a while to see, but if really all it's calling is StartSpan and then calling the function...
A
Right,
the
only
thing
privilege
that
I
can
see
it
doing
would
be
automating.
The
people
do
or
something
like
that,
which
is
a
it's
a
deep
integration
that
maybe
nobody
wants
either.
As
for
as
for
adding
a
span
argument
that
seems
like
there's,
yes,
that
would
solve
a
problem.
There's
another
opinion,
I
think,
which
is
you
can
see
it
if
you
go
digging
around
the
issues
in
the
oteps
repo,
like
the
sort
of
question
is:
why
do
we
have
a
span
object
anyway?
We
already
have
a
span
context.
B: In other words, if you wanted to wrap up StartSpan and call a function, you know, possibly have a fallible and an infallible version that could catch an error and automatically attach it, we could add those as helpers elsewhere. But I think it'd be better to thin down the interface here and, like you said, give ourselves more freedom in the future to consider even removing span, or weakening it, or whatever is necessary.
C: Yeah, there's a spot currently where I had considered using WithSpan but held off because of this issue, and I think what I'm going to end up doing is implementing my own helper like that, because it just keeps that closer to where I need it, and I can add things like the error handling there, or the logging that's going to be specific to my logic, to log that error somewhere or other.
A: I also agree on the notion of a helper. It's something that a lot of people are going to be used to, because, let's say, in the Lightstep codebase here, we have essentially a practice of calling a start-and-end helper that accomplishes tracing as well as metrics, as well as error reporting, which is really what users want.
A: I think there's a nice story with OpenTelemetry where you could just automatically do all those things inside of your SDK, but that's sort of a distant vision at this point, and we should just make it possible for users to do it in the shorter term. The idea that, let's say, an SDK could automatically create metrics from spans: that's on the verge of happening, but it's not close enough that we should commit to it, I think, to the exclusion of approaches that are more straightforward. And I don't know if the tracer SDK does something with recover at this point; I thought I remembered there being something like that, but I might be wrong.
A: Okay, that was probably the most important and interesting issue for us to discuss, and I need to drop off this call. In the notes, I noted that I would like to propose Anthony become my replacement as a maintainer. I'm definitely interested in staying on as an approver, but I haven't been able to put as much time in, and that's the reason why I'm dropping off the call right now. So I will be leaving; please take over and carry on, and I'll check in next week. And someone's going to have to share. Bye.
C: Issues: so we just got through WithSpan. Next, moving the B3 propagator out of the API.
C: So I think, yeah, my preference is probably just to create a contrib propagators package. We've already started moving a bunch of the stuff out of the main repo that's not API and not SDK, and I think this is another one that would fit well, because there may be others; like Tyler mentions here, X-Ray is another one that might fit in, and that would probably go into contrib rather than the main repo.
B: Is it the case that the only one that... oh no, I guess that's not a propagator; I was thinking of the stdout one... that would be left behind?
C: Yeah, so that's an exporter. The only propagators that would be left behind would be, well, there would be two, right: there would be the OpenTelemetry correlation context propagator, and then, for trace context, that would be the W3C trace context propagator, which is, I think, the only one (maybe the only one) that's defined in the spec. Makes sense.
C: So if there's any thoughts one way or another on this one, please comment. This one is marked as required for GA; we need to get a...
C: So I think this came up in the spec meeting earlier this week as well, where they were discussing where exporters and things like that should live, and it seemed that the general consensus was: if it's in the spec and it's an open protocol, then it's okay for it to live in the main repo; otherwise it should probably live in contrib or in a vendor-hosted repo.
C: So, like, the Honeycomb exporter lives in a vendor-hosted repo separately, similar with Lightstep, whereas the Jaeger and Zipkin exporters are de facto standards. Even if they're not, you know, formalized by the W3C, they're de facto standards. I think they're even mentioned in the spec because of the OpenCensus and OpenTracing compatibility.
B: Yeah, that's where I was going here: I think B3 is so common that it doesn't feel right to penalize it for not being a standard per se. I don't know if there is a published spec on it that we could defer to, but, thank you.
C: Yeah, I know, I'm not married to either option, really. I think the goal here is to move it out of the API package, which kind of makes sense, to keep that implementation separate; but whether it stays in the main repo or moves to contrib doesn't matter to me one way or the other. Yeah.
C: Okay, so we've also got renaming kv, which I think was keyvalue before and then it got shortened down to kv. I don't know; is there actually a suggestion?
B: This one seems to overlap with the too-many-trace-packages one. Again, my feeling is that the package author is the one who decides; the users of the package are free to rename it in their code.
B: I'm not really sure why I'd say keyvalue is better than kv as far as avoiding collisions out there, you know, because the proposal isn't something like otelkv.
C: Right, yeah. I think that for the contrib repo, where we've got guaranteed collisions because of the naming convention we've settled on, we've talked about prefixing the final package with otel to avoid that collision, just to make it easier for users, so that they don't have to necessarily namespace all of their imports; because there's going to be a guaranteed collision there, and they're always going to be used together. But for something like this, I think I agree that kv is a perfectly fine name.
B: Yeah, and the only other one I could see was something like: what do we call these, attributes? Is that the right word for these, the keys and the values?
C: They're used as attributes, but they're also used as labels in metrics, which are effectively the same as attributes. Maybe those concepts should be reconciled.
C: Okay, yeah; if anyone has thoughts on that, please comment on the issue. This one is very similar: in the feedback we were given, kv, global, and standard were all called out as potential issues. We renamed standard to semconv, which I think makes sense; it's really the semantic conventions, and standard didn't convey that clearly. I think I'm in agreement with Tyler that global is a perfectly fine name, especially when you consider its uses: it's used for things like global.TraceProvider and global.Meter, so it tells you what it is.
C: And then we get back to stuff that we had discussed before; I think we discussed this in the last meeting. So yeah, that's just getting rid of the distinction between start and end options. Like, yeah, can you specify the end time at the start of a trace? Well, if you know it, sure, why not?
C: Okay, so I think that covers all of the issues that are new since last time, and the only other thing we had on the agenda was Josh proposing to replace him as a maintainer with me, which I will accept, if that's what they would like to do, and I'll do my best as a maintainer. Does anybody have anything else that they would like to discuss?
C: Yeah, I'm good, okay. Well, I think we can... I'll give back a little bit of our time today, and I'll see you guys next week. Sounds good. Thank you. Thanks, bye. Thanks.