From YouTube: 2021-06-04 meeting
A: Yeah, you could have stopped and moved.

A: I moved. Oh okay, that's a whole new world. Okay, yeah! There you go! That's it, yeah: out with the old concrete, in with the new drywall, okay! Well, congratulations!

A: Yeah! Thanks, yeah! I'm not cool enough to figure out virtual backgrounds and hide my anonymity and all that, but yeah.

A: Maybe we can wait a little bit for Pablo or Gustavo. I would say Pablo got confused by his handle. But yeah, if you're on the call, please make sure that you add yourself to the attendees list of the agenda. If you have anything you want to talk about, please add it to the agenda as well.
A: We'll start here in a little bit. Mostly we'll start by just going over the OpenTelemetry RC project board, which is our normal starting point, but yeah, anything else you want to add to the agenda, we'll be sure to include that afterwards.

A: I imagine... let's give it another minute. Actually, we're getting into the hour. I think we usually start about three minutes after, so we're getting about there.

A: Cool. Well, I think we can probably get started. I don't see Gustavo on yet, but that is okay; we'll press on. I think we have quorum from everybody else. So yeah, everyone, welcome if you're just joining; I think I see a few new names. Please be sure to add yourself to the attendees list here on the agenda doc, along with anything you want to talk about in the agenda.

A: Please add it, and we'll jump in. To start off, we usually go over our RC project board, and by "usually" I mean just in the past month or two, as we're driving towards this. This is our main goal on the projects, and we're doing pretty well. We're in the dregs of the project, dealing with some of the tougher issues at this point, so some of these issues, the bigger ones, have stuck around for a little while, but yeah.
A: We have already refactored the trace state portion of the project to only accept strings; that was done a little while ago in this PR here. I just opened up this PR here, which has been a work in progress for a little while, to address the baggage portion of this. It's not small, unfortunately, but I think it does a lot of really useful things and a lot of really useful cleanup. Namely, to give a high-level overview for people before they go review it:

A: It adds this concept of a baggage type itself, which is just a struct wrapping some internal fields. It has an idea of a member and a property. These terms, except for the baggage part, come from the actual W3C specification here, the construct of the baggage as it's defined there, which is also an editor's draft. So it's probably going to change, but this is the one that OpenTelemetry is going off of: this concept of list-members.
A: This baggage type was influenced mostly by the idea of OpenTelemetry calling it a baggage; that is then composed of members, which comes from the W3C, and each one of those members has a property, or sets of properties. And I think that's kind of the key thing. For those of you who are just tuning in, this was something of a debate, whether we wanted to actually refactor, but I think one of the important things that we discussed going forward is: if we try to maintain some sort of type, that would have to be encoded into the header content as well, and there's not currently a way to do that, talking with Bogdan.

A: The only thing that we could potentially think about in the future is maybe encoding that type in a property. So, making sure that we support properties; and then eventually, if we wanted to build our attribute system on top of this, we wanted to make sure that we supported the properties, or the metadata, for each entry in the baggage system. So that's why it's there. Care has definitely been put into making sure the property is treated in a first-class way, so we can handle that sort of thing.
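As a rough illustration of the W3C shape being discussed here (this is not the API from the PR; the `member` struct and `parseMember` helper are hypothetical names), a single baggage list-member of the form `key=value;prop1;prop2=val2` can be split into its key/value pair and trailing properties like this:

```go
package main

import (
	"fmt"
	"strings"
)

// member models one W3C Baggage list-member: key=value;prop1;prop2=val2.
type member struct {
	key, value string
	properties []string
}

// parseMember splits a single list-member on ";": the first part is the
// key=value pair, the remaining parts are its properties.
func parseMember(s string) (member, error) {
	parts := strings.Split(s, ";")
	kv := strings.SplitN(strings.TrimSpace(parts[0]), "=", 2)
	if len(kv) != 2 {
		return member{}, fmt.Errorf("invalid list-member: %q", s)
	}
	props := make([]string, 0, len(parts)-1)
	for _, p := range parts[1:] {
		props = append(props, strings.TrimSpace(p))
	}
	return member{key: kv[0], value: kv[1], properties: props}, nil
}

func main() {
	m, err := parseMember("userId=alice;metadata;ttl=60")
	if err != nil {
		panic(err)
	}
	fmt.Println(m.key, m.value, m.properties)
}
```

The real PR models members and properties as first-class validated types; this sketch only shows the `;`-delimited layout that makes first-class property support necessary.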
A: So yeah, there's a lot of detail, parsing all the way down to encoding, so yeah, there's a lot going on here. There's also a really big refactor: it used to be that a lot of the baggage implementation was kept in an internal package. That was built a long time ago by Krezmir, and he did a really good job, but it was without the foresight of where the baggage specification in the W3C would end up, as well as OpenTelemetry.

A: So a lot of that is ripped out at this point. There's a lot going on here, but I think the important thing to really start off with and take a look at is maybe just getting some feedback on that baggage API that's being introduced here. It's just going to be in this package; I'm sorry, in this file here. So that's a good place to start, a good place to start with content or feedback.

A: I still don't see Gustavo on, so I can't really talk too much about this one, but I know that after this is resolved, my focus is going to be moving on to this and resolving it, which should then put us in a position to get the RC out in however long it takes for that, plus all this OTLP work, which we've talked about in the past. And there is definitely some potential here, I think, to parallelize, which is something, if you haven't been to all these meetings...
A: We've talked a little bit in the past: these operations, all the things that need to be done, are outlined here in this list. Updating docs, I think, was something that we identified as being something we could do in parallel, but I think Gustavo said that he wanted to try to get one more thing out, I think with HTTP traces, before we did that. So I believe there is a PR for the HTTP traces.

A: Okay, that is something I missed; there we go. This is what I'm guessing, yeah, cool.

A: Let's put this on the project board. Okay, cool. So that answers my question that I was hoping to get answered, as to where we were at on this. So I think we are here, and we have one reviewer, and we need more reviewers. So if you are new to the project, reviewing is extremely valuable to us. If you're not, you know how hard this can be. So yeah, please spend some time and get this one reviewed.

A: I think that this one, as well as, I guess, the baggage implementation (I guess I could be selfish and say mine as well) are probably the top two, but then there's also one from Tigran about adding schema URLs. These are probably the top three, I would say, that need reviews. So if you have time, it'd be useful if you got some eyes on those.
B: Yeah, I think we should probably add the schema URL PRs to this project board as well, because I think we're going to want those before we call it an RC. Especially... I haven't looked to see if the spec SIG has made their release this week, but they said they were going to release the 1.4 spec, which will include the schema attributes, or the attribute schemas, this week.

A: Right, they have not released it; I just checked 10 minutes ago. But I agree. We're not going to hold the release up for them, but I do want to try and get that included. Yes, that is showing up, I think, so.
A: Okay, okay, there we go; had to give it the old hard refresh. One day I'll learn how to use a computer, but okay. So I think, if that looks good, we're making some progress on this; we just need reviews at this point. So again, if you're on this call wondering how you can help contribute to getting our stable RC out, it is reviewing.

A: We need help reviewing, so I'm just going to say that plainly. And yeah, I think that we're close, but we just need some help at this point.

A: Aren't we just overachievers? Yeah, cool. I'm going to put this in "in progress" because, if that refactoring works out... it's a little bit of a duplication of issues, but it identifies something different, so I'm going to leave it like that. Okay.
C: I think that comes from, as you're seeing... I've now worked with two different vendors, each of whom supported OTLP either over HTTP or over gRPC, but not the other. So if an SDK chose one but not the other, you could be in bad shape.

B: Yeah, and I think realistically we definitely do want to support both, at least for protobuf. So yeah; but let's make sure we're slightly overachieving.
A: Yeah, I think that it's specified as only one; this little comment introduces this idea of "one and only one". But yeah, I think that, just from an implementation standpoint, kind of like you're saying, Steve and Alita, not having one of these can mean compatibility issues, except for this one, which seems to be a lot less commonly implemented. But yes.

A: Yeah, I agree, so yeah, we're going to have to update this, okay. But yeah, we were, I think, pulling that out because we also found bugs, and there's probably a great YouTube video you can catch from last week's meeting that really dives into a lot of the details, as well as some issues; so yeah, for those interested. Cool. We have a fair amount coming on here. I think we're at five now, so I'm going to do some live updates. Look at that.
A: Cool, so let's jump into the agenda that we have: Eddie Leland, module version compliance and flexibility, release tooling improvements to the OpenTelemetry repository.

D: Well, do you want me to share my screen here? I guess you can do it.

A: Yeah, whatever is easier for you.
D: Happy to share, sure. Okay, so this one hopefully has some motivation behind it, because as we're getting close to the version 1.0.0 release candidate, hopefully this is going to be something that we can use during that process. So basically, the overall picture right now is: it seems like all of the modules, when we're increasing the versions of all the modules within the repository, go up at the same time.

D: So, for example, it goes from 0.19 to 0.20, and then the next time it'll go up to 0.21, right? And I think one of the things that was mentioned by Anthony in an issue a while back was that we want a way to basically specify different sets of modules that can increment their versions in lockstep, and to have multiple different sets of modules, potentially. So yeah, here's a good example.

D: I think if you just scroll up a little bit, Tyler; yeah, right to implementation details.

D: All right, back to where we were. Yeah, basically, like I said, the idea is we have several different sets.
D: So maybe we have the tracing module, the SDK module, maybe the top-level otel, and the contrib all within one set of modules that will be versioned together, so they'll go to rc1 and then 1.0, v1.1, all together. And then we might have a separate set of modules that might be incremented together on a separate versioning, so that might be 0.20, or 0.5, for metrics and logs separately.
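A hypothetical sketch of what such a configuration could look like (the file name `versions.yaml`, the field names, and the module groupings here are illustrative assumptions, not the format from the actual proposal):

```yaml
# versions.yaml (hypothetical): each named set is versioned in lockstep.
module-sets:
  stable-v1:
    version: v1.0.0-rc1
    modules:
      - go.opentelemetry.io/otel
      - go.opentelemetry.io/otel/trace
      - go.opentelemetry.io/otel/sdk
  experimental-metrics:
    version: v0.20.0
    modules:
      - go.opentelemetry.io/otel/metric
```

Each set carries its own version, so the stable tracing modules could move to v1.x while metrics and logs continue on a separate 0.x line.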
D: So that's just an example. So hopefully... I have a few questions about the design that I might want to ask the maintainers about; I talked to Anthony about this a little bit. Is there any feedback right now, before I dive into more of the specifics?

A: Yes, I've got a few questions, but first let me just say: this looks really good. Thanks for putting this together. I haven't, obviously, read through the whole thing yet, but I like the comprehensive nature. Yeah, thanks for tackling this; it is definitely a needed design, so I really appreciate it.

A: One of the questions I have: one of the requirements we have in our versioning specification is that anything that is released as stable and shares the same major version is maintained at the same minor and patch version as well. Is that handled in this? Like, is there a default?
D: Yeah, so I specified a few criteria that we want to make sure of, so I guess the YAML file will help us. Well, just as an example: if we have all these modules listed in a YAML file, we can basically run a script to check all of these versioning items and details, to make sure it's compliant with the specifications; the convention, sorry. So, for example: no more than one set of modules exists for any non-zero major version.

D: Like you had mentioned. For example: no dependencies on any experimental module exist in stable modules, and so on. So ideally, a lot of that could be automated. So whenever you run the script to generate the versioning and the tags, it will yell at you if you somehow manage to screw something up.
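One of those checks, that no module appears in more than one set, is easy to sketch (the set names and module paths below are made up for illustration; this is not the script from the proposal):

```go
package main

import "fmt"

// duplicates returns every module path that appears in more than one
// named module set.
func duplicates(sets map[string][]string) []string {
	seen := map[string]string{} // module path -> first set it appeared in
	var dups []string
	for name, mods := range sets {
		for _, m := range mods {
			if _, ok := seen[m]; ok {
				dups = append(dups, m)
			} else {
				seen[m] = name
			}
		}
	}
	return dups
}

func main() {
	sets := map[string][]string{
		"stable-v1":            {"go.opentelemetry.io/otel", "go.opentelemetry.io/otel/trace"},
		"experimental-metrics": {"go.opentelemetry.io/otel/metric", "go.opentelemetry.io/otel"},
	}
	// go.opentelemetry.io/otel appears in both sets, so it is flagged.
	fmt.Println(duplicates(sets))
}
```

The same walk over the parsed config can host the other checks the proposal lists, such as rejecting stable sets that depend on experimental ones.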
A: Is there a way that we could design the YAML structure to avoid those error cases? Meaning, instead of having these groupings of labeled sets by some sort of list or name, you could have an entry per major version number, or something like that?

A: Yeah, so currently, right now, the module sets: it looks like it's a list. My YAML is rusty, so I'm probably going to say some wrong things here, but we could also have it so that it's an object, in JSON terms.

A: I guess it would just be a map here, where there'd be some sort of entry, and that entry would then, you know, be a version, like a major version number, so like v0, v1, and then anything down from that would be all of the listed modules. And then you wouldn't have... well, then, if you had any duplicates, you could very easily see that, because I'm pretty sure that would be invalid.

A: YAML... again, my YAML is rusty; I guess it also depends on your parser, but yeah, yeah. Maybe that's just my question, yeah, so I...
B: I would second the proposal to move from a list to a map, or a dict. That can be done just by making the name value here the key, instead of an entry in the map at each list entry. But I think that it's probably sufficient to simply have the tooling that validates these run through and say: is there more than one set of modules with a major version greater than one...

B: ...or a major version of one, or more than one with a major version of two? I don't know that we would want to use the version as the name, because then we might have to increment that every time, which could lead to additional churn, as opposed to just incrementing the version field within the object. But if the tooling can say, "oh, I see that there are two sets of modules that have major version one...

B: ...please don't do that," and just stop and not let us go any further, I think that probably covers it. My question is: is that even...
B: No, I don't think they can, though. Our policy is that if we moved, say, metrics to stable (we were at stable version 1.5, we moved metrics to stable), we would release a stable minor version, version 1.6, that included metrics, and metrics would start out at 1.6. There would never be a 1.5, .4, .3, .2, or any other earlier 1.x version of metrics. Yeah, so all of the modules in experimental metrics would move up into the stable v1 set, and stable v1 would be incremented by a minor version.

D: Yeah, I mean, yeah: if you're rusty with the YAML, I'm rusted, like completely rusted through. So I mean, I'm not sure even if this is the valid way to, I guess, create... I'm not really sure if this is making a map structure like a JSON file would, but this was just an example, and I guess I'll iron out those details. But I think the main verification could be done like so: you, as a human...

D: ...could edit this YAML file and make sure things are placed in the right place, and then a verification script could verify all the things that we want, like: no module will appear in more than one set.
B: Can I propose that we use JSON instead of YAML? I think JSON is sufficiently human-readable for those of us who will be working with it.

A: So there's one caveat that I always come back to on this one. First off, YAML is a superset of JSON that supports comments, and the comments, I've found in the past, can be useful at times. But I'm not opposed to JSON either; I just want to throw that out there.
E: I mean, I would tend to agree with Tyler, because, again, YAML is also used in GitHub Actions, for example, and other, you know, execution workflows. So I would think that you would implement this in YAML.

A: Yeah, it being a superset, I think you can pass it JSON and it should still be valid. But yeah, I think that's something to keep in mind: it's becoming more ubiquitous. That being said, I am in the hard camp; I hate whitespace-delimited config files. But yes, yeah.
C: "Get off my lawn with your YAML," yeah. If you are offered a YAML parser, as you were saying, Tyler, anybody is free to use JSON with it. So I think it's good to maybe show both forms in documentation, but there's really no way to accept YAML and reject JSON, right? So if you're generating input, or, you know, whatever, you've got tools that emit JSON, it's fine to feed it into this. It just becomes more like a sort of cultural preference of which one people tend to use.

F: The one comment that I would have, as a requirement on this (I haven't been able to generate or go through it all): make sure that all of the modules that are in our repository have a corresponding entry, or it's an error.

F: And then the one warning I would have: being able to determine what is a module and what isn't isn't too hard, because that's just the presence of a go.mod. Determining what depends on what is a lot more complicated of a task, so that portion will take longer to build; that is something that is not super simple.
B: My initial thought there, though, had been to simply use go.sum, because go.sum should include the transitive dependencies all the way down to the standard library, right? And so, if we see a module that's in one of the experimental sets show up in the go.sum of a module that's in the stable set, that's a red flag.
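A minimal sketch of that kind of go.sum scan (the module paths, and the choice of which paths count as experimental, are illustrative assumptions; real go.sum handling has more edge cases, as the replace-directive caveat below notes):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// experimentalDeps scans go.sum content and reports every listed module
// path that belongs to the experimental set.
func experimentalDeps(gosum string, experimental map[string]bool) []string {
	found := map[string]bool{}
	var out []string
	sc := bufio.NewScanner(strings.NewReader(gosum))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) < 2 {
			continue
		}
		mod := fields[0] // each go.sum line starts with the module path
		if experimental[mod] && !found[mod] {
			found[mod] = true
			out = append(out, mod)
		}
	}
	return out
}

func main() {
	gosum := `go.opentelemetry.io/otel v1.0.0-rc1 h1:abc=
go.opentelemetry.io/otel/metric v0.20.0 h1:def=`
	exp := map[string]bool{"go.opentelemetry.io/otel/metric": true}
	fmt.Println(experimentalDeps(gosum, exp)) // flags the metric module
}
```

Run against a stable module's go.sum, any non-empty result would be the red flag described above.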
F: I know that acts a little bit strange when you have replace directives like we do. And the other thing that I would say is: if you want this kind of behavior, be explicit about it. So maybe have a flag, a... I don't know what you'd call that, a collection-based directive, that says it is experimental, and thus it can't be included in other non-experimental things; more than just trying to parse the version number, which will probably bite you at some point.
A: So, kind of related to this, the one thing I would strongly ask is to not write this in bash. We had some previous tooling start to get written in Go natively, and I think that was just a really good precedent to set, because everyone in this project writes and understands Go, and I think that we should try to unify on that. And, related to that past conversation that just happened, this problem needs to be solved for that tool that was originally created.

A: What we have right now is essentially something that comes in and peppers every single go.mod with all of the replaces you could potentially have; it's like the cross product. And we had an open issue that we should probably narrow that down to the actual requirements. So if you solve requirements in some way where you could parse a dependency tree... I think that there is a Go tool that does a poor job at this.

A: So it is a hard problem, as Aaron pointed out, and as Anthony's kind of pointing out as well, but I think it's useful beyond just this tool, and I think that, if we do this in Go, it would be a useful thing.

A: I would choose some really poor version of both mixed together, and...
B: Another advantage of doing it in Go would be, I think, as Aaron might have mentioned earlier, that you could choose to use Viper to read the configuration. Bash is going to have a hard time parsing either JSON or YAML, but if you do it in Go, you can use Viper, and that can read just about any configuration file format you can imagine, so that will simplify some of that handling as well.

B: So we can choose whichever libraries we feel are appropriate here, yeah. And, to answer your question, there's a movement towards walling off all of the configuration interfaces in the collector from exposing Viper. I don't think they're trying to remove it immediately, but they want to ensure that they have the option to remove it without breaking all the interfaces that people use.
F: That just came up again the other day, yeah. Prometheus had to go through this same process: exposing both Viper and Cobra directly in their API, and trying to make it usable as a library, made that very, very difficult, yeah.

A: Cool. So I just want to do a little time check on this. We probably have a little bit of time, so if anybody has some other major issues we wanted to address, or any big standing questions for the proposal, we can talk about those. Otherwise, we can move on to the agenda.
B: It is part of the top-level otel package and module; this would represent the version of the API, and probably should continue to do so. The question I think we need to ask, then, is: is there any place where we would need a separate version number for something like metrics libraries and APIs and SDKs, or logging; any of the experimental signal packages?

B: If we find them... yes, I think and hope that that's a question that can be deferred indefinitely. Eddie, you can probably validate this fairly quickly by looking in some of the experimental packages, like metrics, and seeing if there's any.
A: Okay, cool. So yeah, if you guys, or gals, and everyone, please take a look at the proposal. And I'm guessing, Eddie, you're looking for comments in the Go doc itself; I'm sorry, the Google Doc itself.

D: Yeah, if anyone has any questions or comments, you can just, I guess, suggest on there and post a comment.
A: Okay. And I don't know what your timeline is to get this resolved, but could this culminate eventually in an issue in the projects, in, you know, just the open-source Go repo, for what you're planning to do once this design has been ratified, and then we can move forward?

B: There should be an issue open already that I created a while back, when we were working through some of the stuff with Punya, that I think I've assigned to Eddie. So yeah, we can use that to track it.

A: Perfect, yeah. So, Eddie, just make sure that we link this doc into that.
A: Cool. Moving on: Steve, I think that you're up next for the resource layering through the OpenTelemetry Collector.

C: Sure. This might not be the right forum for this, because it's not specifically about the Go library, but I know that we've had some debates about when resources are established with the SDK, and that they're immutable, and all of this. And one confusion that we've run into, using a combination of the Go SDK and then sending stuff to the OpenTelemetry Collector, is that the collector has a processor, I think is what it's called, that can layer resource attributes into traces that it receives. And there's some confusion for us about how resources are supposed to describe, I think, the thing that is publishing, the most upstream source of the traces, let's say; and then the collector gets in the middle and can either shadow or underlay values onto it. To where, then, the question is: what does a resource mean...

C: ...that's coming through the collector? So we have some kind of attribution ambiguity when we look at these traces later, where you're asking: did this attribute come from the collector putting it in there, or did this come from the original source of the trace? And we found that we wanted to use this feature of the collector when we were receiving traces from things that are not actually instrumented with OpenTelemetry, but rather things like Zipkin.

C: You know, where it's coming in without enough context, so we're trying to add context to some of this stuff. But then, anyway, it's kind of confusing. I'm just wondering what the spec has to say, or any choices that we've made in our libraries about this, and what the proper way to use it is.
F: There's actually an ongoing discussion of how resource semantic conventions can be merged, and, in a larger sense, that's also discussing the same idea here. It's more focused on versioning, but yeah, there is a proposal on how versions can be combined together, and I think that kind of also fits along this way. So that's what I have to add. Okay.
A: Yeah, I think, unfortunately enough, that's going to be the answer here: it's going to have to get bumped up, I think, to the specification level. I don't think we've done anything particular here. We have a merge order, and that's the merge order...

A: ...that's specified by the OpenTelemetry specification, namely if you're trying to merge two resources together, and then, on top of that, where the default resource mixes into that in certain places. We follow the specification as much as we can there. But, like, I know you're talking about as you come across and go to the collector, or you go outside of the bounds of that single SDK; it gets really murky a lot of times.

A: There are two sides of that: what sets the origin, and what happens when they don't; how do you identify where that came from and retroactively, you know, attribute that? Yeah, it's a murky situation, but, honestly, it's not something we should be solving here specifically, because if we come up with a solution, it needs to be universal across OpenTelemetry. So yeah.
B: ...that knows where the ultimate destination is. That way, I know that the application didn't set anything, legacy applications that were instrumented with Zipkin or Jaeger also haven't set anything, and everything is set by the collector, at least in terms of the Kubernetes resources. I know that's the one place that I'm adding those, and if I don't see them, I also know that it apparently didn't run through that collector, and I should figure out why it's configured that way.

B: What the Kubernetes resource adapter in the collector does, especially if you tell it "I'm on this node, only pull pod metadata for this node," is: it pulls all the pod metadata for that node, and then looks up by IP when it receives the trace and says, "oh, this trace came from this IP; it must be from this pod; here's its pod." Yeah. Oh, I didn't realize it did that.
A: Cool. I see there are two more issues on the agenda from Evan, who also left a comment in the chat saying that he had to jump off. So let me double-check... yeah, I think he's jumped off at this point. So I'll just give a little overview here.

A: I think you are correct: there is a merge order, if I'm not mistaken, coming from this. In the same sense that we parse the OpenTelemetry resource attributes environment variable, I think that this can be handled in the same way. There's...
B: Right, and that merge order, as specified, is, I think, backwards of what may have actually been intended, and of what everybody seems to say, "oh yeah, that's how I would expect it to work": you expect an operator to be able to override a value, by setting an environment variable, that was otherwise specified in code.

B: But the way the spec says it has to be is that the value in code takes precedence over the values that are in the environment. And I think all that this does is put the service name after the resource attributes from the environment, but it still would be overwritten by whatever is written in the code, right? That is correct, I...

A: Yes, I agree with you, Anthony, on this one. I think we've talked about this at length, but yeah, I don't know what to say; I don't know why it was the way it was, yeah. It really doesn't make any sense to me, but I think that you're right: the resource precedence is just that OTEL_SERVICE_NAME takes precedence over anything that's actually set in OTEL_RESOURCE_ATTRIBUTES, yeah. This is something I think we should probably... is it required?
B: A service name must be provided somewhere, and I think that this environment variable has to be accounted for.

B: In the resource attribute detector, we can add this in; we can add it in the way that's specified. I just think that the way it's specified is weird, and even in the conversations that we had in the spec SIG about this, it seemed like everybody was driving towards "yeah, the operator should be able to override service names via this," but I don't think that's what the spec says. So I'll bring this up again at the spec SIG next week.
A: Yeah, I think that's a good idea, I think, yeah. Because I think what this is doing is just adding in another term, but there's still the underlying behavior issue that we haven't addressed, and that's exactly what you're talking about: if an operator provides something that overrides something in the code, it should take precedence, but it doesn't, based on the merge order that OpenTelemetry specifies. So yeah, yeah.
B: Yeah, for us, it'll be easy to add this as a second step in the environment resource detector, and we can handle it as specified. That still leaves the question of where that environment resource detector appropriately falls, and when we solve that, we just move that around, and they'll both be addressed. So, okay: it gets a little bit murky, but I don't think it'll be a problem for us to deal with.

A: I agree, I agree. I think this should be included in the RC, given the fact that this is already merged, and, if we're going to be waiting on a specification release, this is something that needs to be there to be compliant.

A: If you have time, I would be happy to have you complete it, so that looks great. Cool, moving on to the...
A: What's this "time" you speak of? Yeah, it's your 20% time; otherwise, there's your weekends and your time that you sleep. Normally... I love this project, don't get me wrong; sometimes I can be a...

A: Yeah, okay. So then the last thing is that I think Evan's throwing a little shade here, and I think rightfully so: there are definitely some open issues in the OTel collector repo that have been neglected and probably could be merged. I would like to spend some time doing this.
A: This past week I moved, and tried to build that PR that I was showing at the beginning of the meeting, so I haven't been the best of maintainers in trying to shepherd PRs in. But I'm trying, I think by the end of today, to review the two other PRs in the otel-go repo, which should open up tomorrow, and I'm hoping to get these merged; unless Anthony has time to also do some of the same. Anthony's much more on top of reviewing things, so maybe we can coordinate that way.
B: Yeah, and I think this also raises the question, which I don't know if we have a good answer to right now: should we make another release? It's been a month since we last released, but I don't know if we're in a solid state to release, given the changes that are ongoing with the OTLP exporter.

A: Oh, that's a really good point; I forgot about that. But I am strongly in favor of not releasing, based on that knowledge. Yeah, I agree, yeah. I was kind of thinking about the baggage stuff, or halfway in between there, but we're not... that's something we could, but the OTLP: I don't think that we should; I definitely don't think we should have a partial release based on that, yeah.
A: So I think, with that, I will again plug: if you have some time to do a review, it'd be awesome. If you could get a review in on that HTTP protobuf PR that he has up, then we can try to move that progress forward. And then, on top of that, I think we're coming close to the end of the meeting, and there are no more things on the agenda, so I would love it if anybody has some customer stories, or some successes with OpenTelemetry, they wanted to share.
B: Just real quick, finishing up on the topic of releases: I wanted to kind of get a sense of support for the idea of, if we get to this time next week and the gRPC and HTTP trace OTLP exporters, the new ones, are there, but we haven't finished all of the rest of the stuff that's on the agenda for that transformation...

B: ...what do people think about just ripping out the old OTLP combined exporter, and leaving no metrics exporter at all for a while, until we fold it back in, so that we can make a release? Because I think, at that point, all of the capability for the stable parts will be there, and I would much rather see us make a release than make a perfect release.
F: Yeah, I would agree with that. I don't feel good about the metrics being missing, but those are experimental, and we're currently actively reworking them anyway. So yeah, that would work.

B: Yeah, I guess, kind of restated: would there be support for removing some capabilities from the experimental metrics, so that we can have a stable trace package?
C: I would prefer that. I recognize that we might be breaking somebody who's relying on it. I guess what I'm wondering is: would they have any path forward? Is there anything that a consumer could do, like take the new release and layer something on top of it, maybe, that would allow them to continue using metrics if they wanted to?

F: Could we move what is currently there into, literally, otel otlp "experimental," exactly what's there right now, or maybe otlp "deprecated," so that they could have a continuation? But, you know...
C: So I think ripping it out... so, go ahead. Do we have... I just forgot: does pkg.go.dev reveal popularity numbers? Like, do we have any way to trace how many people...

A: ...definitely have used it in the past. I know Josh uses it pretty extensively in a lot of the stuff he does, like the Prometheus sidecar and all these other things. So it's not a great option, but it's also not a really great option going through breaking change after breaking change on each release cycle. So it's kind of bad either way, especially with the big refactor to the API that we have planned coming up.
B: We don't have to decide this now, but I just wanted to get the idea into people's heads that maybe that's a path we should take, and we can cross that bridge if we have to next week.

A: I think that's a good point. Hopefully, Gustavo watches this video, and maybe we can prioritize the PRs based on that ordering. But yeah, if not, we'll try, I think, to coordinate via Slack or something like that.

A: Okay. Well, I think, with that, we can probably end the meeting; or, I guess, we never really paused. I didn't see any hands go up for user stories, even though I'd love to have them. But cool. Well, with that, I think I'll try to get some more reviews done, and I will see you all next week. Thanks, everyone, for joining.