From YouTube: 2021-06-01 meeting
D
Yes, I totally agree. All right, shall we wait one more minute before we start? We don't have that many items on the agenda, I think, but anyway, let's wait just a little bit in case somebody else joins. And add yourselves to the attendees list, please, as usual.
D
Okay, I guess we can start, five minutes after. Thank you for joining us. First of all, you guys remember we used to triage new issues; I saw the issues have been triaged, correct me if I am wrong on that point.
D
Nice, yeah, I like that. Perfect, so nothing to do on that front today. So the next issue is... yeah, I put together a list of issues that I consider more important, so we can discuss them, maybe not all of them in detail. The first one is about specifying the result of merging resources and getting an error. You can open the PR there; basically, it says that the result of this is undefined.
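For concreteness, a minimal sketch of the case under discussion, using the Java SDK's Resource API; the schema-URL overload and the version URLs here are assumptions for illustration, not from the meeting:

```java
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.sdk.resources.Resource;

class MergeExample {
  public static void main(String[] args) {
    // Two resources carrying different (non-empty) schema URLs: the case
    // whose merge result the spec currently leaves undefined.
    Resource a = Resource.create(
        Attributes.builder().put("service.name", "checkout").build(),
        "https://opentelemetry.io/schemas/1.4.0");
    Resource b = Resource.create(
        Attributes.builder().put("cloud.provider", "gcp").build(),
        "https://opentelemetry.io/schemas/1.5.0");
    // Whether this returns an empty resource, drops a schema URL, or throws
    // is exactly the behavior the PR is trying to pin down.
    Resource merged = a.merge(b);
    System.out.println(merged.getSchemaUrl());
  }
}
```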
D
It has a pair of approvals, but I would like to get more; what I find with that is that I don't know what other people, especially maintainers, think of this approach. And I don't know whether Bogdan is around; I was wondering if Bogdan would have an opinion about
D
that. Well, I guess not. Well, in any case, please take a look; it doesn't seem trivial, to be honest, and, as I said, it has a pair of approvals: one of them is from Tigran and the other one is from Armin. So it's looking great, but please just review it after the call, whenever you have some time. The next one is about splitting out OTLP gRPC versus HTTP endpoint configuration.
D
This came from the collector, if I remember correctly, and basically it's making the specification on this part more permissive. Similar situation: people have already approved it, it doesn't seem it's going to break us, and it's going to be friendly to the users. As usual, maintainers, please review it. It looks good from the specification point of view, but we need to verify that it's not going to break for anybody. Yeah. The next one is the pass-through propagator.
D
This is something that Anuraag proposed, and the idea, as the name implies, is that we can have some services that are just working as pass-throughs, so they don't even parse (you know, deserialize and then serialize) the span context. You just keep the headers (you know which headers you need to keep) and you simply propagate them. As I said, this is for services that are only used as pass-throughs. There's a prototype in Java.
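A minimal sketch of what such a propagator could look like against the standard Java TextMapPropagator interface; the class and context-key names are illustrative, not taken from the actual prototype:

```java
import io.opentelemetry.context.Context;
import io.opentelemetry.context.ContextKey;
import io.opentelemetry.context.propagation.TextMapGetter;
import io.opentelemetry.context.propagation.TextMapPropagator;
import io.opentelemetry.context.propagation.TextMapSetter;
import java.util.*;

/** Illustrative pass-through propagator: copies configured headers verbatim
 *  from inbound to outbound requests without ever parsing them. */
final class PassThroughPropagator implements TextMapPropagator {
  private static final ContextKey<Map<String, String>> RAW_HEADERS =
      ContextKey.named("passthrough-raw-headers");
  private final List<String> fields;

  PassThroughPropagator(String... headerNames) {
    this.fields = Collections.unmodifiableList(Arrays.asList(headerNames));
  }

  @Override
  public Collection<String> fields() {
    return fields;
  }

  @Override
  public <C> Context extract(Context context, C carrier, TextMapGetter<C> getter) {
    // Stash the raw header values in Context; no deserialization of span context.
    Map<String, String> raw = new LinkedHashMap<>();
    for (String field : fields) {
      String value = getter.get(carrier, field);
      if (value != null) {
        raw.put(field, value);
      }
    }
    return raw.isEmpty() ? context : context.with(RAW_HEADERS, raw);
  }

  @Override
  public <C> void inject(Context context, C carrier, TextMapSetter<C> setter) {
    // Re-emit the stashed values verbatim on the outgoing request.
    Map<String, String> raw = context.get(RAW_HEADERS);
    if (raw == null) {
      return;
    }
    raw.forEach((key, value) -> setter.set(carrier, key, value));
  }
}
```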
B
Actually, it's something you would install with the API only; you don't want an SDK in this case, I see. Maybe the whole point here is to be completely pass-through: you get all of the headers for your trace propagation passed through your service without having to build an SDK and any of that stuff, but you also want it to be as cheap as possible.
B
Yeah, and in conjunction, he has also put together in the Java repo an experimental, 100 percent no-op API that does not even do internal context propagation or anything; it's really completely no-op. So those two things coupled together mean you could have a 100 percent pass-through service with virtually no overhead, which I think is the main goal, right?
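For reference, the standard no-op path already looks like the following sketch; the experimental variant described above reportedly strips even the in-process context bookkeeping (the tracer and span names here are invented):

```java
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;

class NoopExample {
  public static void main(String[] args) {
    // With no SDK configured, the API hands back a no-op implementation:
    // spans are invalid and record nothing.
    Tracer tracer = OpenTelemetry.noop().getTracer("passthrough-demo");
    Span span = tracer.spanBuilder("handle-request").startSpan();
    span.end(); // nothing was recorded anywhere in between
  }
}
```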
A
I just... I wonder whether that should all be living in the API package or not. I don't know. Yeah.
A
Yeah, yeah. I would maybe just be a little wary if it started to be something where, oh, set an environment variable and the API will configure this thing for you, kind of thing. Only if it gets to that level would I be a little bit concerned, if it was just the API package, because that would mean people...
D
Yeah, I had a similar concern; I mentioned that to Anuraag as well. Yeah, but we should have a complete... well, I don't know whether we should have all the things that are part of this complete before we merge anything, but it seems useful. And yeah, of course we'll have to make some choices here and there, but I imagine this propagator living next to the b3 propagator, or something like that, as John mentioned.
A
I don't think the API package should start grabbing at things like env-var surface area, just because the API package goes everywhere, so you're kind of stuck with everything the API package does. And I don't think I would want software, like instrumentation, necessarily including this as a configuration option. Or maybe that's fine, but mostly you configure this stuff as part of configuring the SDK. You can configure it on its own, but, you know, whether or not this means the application developer still has to install instrumentation that sets up context propagation correctly...
B
I guess that's the question. I am just guessing that Anuraag is the one who wants this to be at least specified, so that we have a common way to do it, and your concerns are probably why he wants to make sure that there's a common way to do it and a common understanding. But I think he also wants to be able to bundle this up as an option in the AWS distro. It's like: hey, if you're running in AWS and you're using our distro, hit this flag and then you'll get this feature.
F
Unfortunately, the naive way just doesn't work, because HTTP headers can get rebranded, if you will: they can go from lowercase to uppercase, and we don't specify anything about that. So I don't expect people to successfully just use that API without a little bit of advance knowledge of what the hell's going on with particular things.
F
So, to that end, I expect us to be providing good instrumentation libraries that do this. My question would be: do we want to pair this with the notion of a super-minimal instrumentation library? Is that something we plan to do? Is this a direction we're going, or is this just a one-off thing?
F
Okay, so I was implementing a different propagator, sorry, I was doing the Google Cloud one, and I found out, for example, that when you actually inject it, when you write the instrumentation, depending on what client you're using or what server you're using, it may or may not do different things to your headers
F
that you're not aware of. For some reason the Java default one just capitalizes the first letter every time; I don't know why, but I ran into that, where it was doing it on its own behalf, not for me: it just sort of did that, and you have to know. So I guess what I'm suggesting is that implementing these things is not particularly simple, and I think that we should be encouraging instrumentation libraries, just as a general direction. You know, like, we have a way that instruments the Java HTTP server.
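A small illustration of the kind of defensive handling this implies, assuming the standard Java TextMapGetter interface; this specific class is invented for the example:

```java
import io.opentelemetry.context.propagation.TextMapGetter;
import java.util.Map;

/** Illustrative only: a getter that tolerates header-name case changes,
 *  the kind of defensive lookup propagator authors end up needing. */
final class CaseInsensitiveGetter implements TextMapGetter<Map<String, String>> {
  @Override
  public Iterable<String> keys(Map<String, String> carrier) {
    return carrier.keySet();
  }

  @Override
  public String get(Map<String, String> carrier, String key) {
    for (Map.Entry<String, String> entry : carrier.entrySet()) {
      // Matches "X-Cloud-Trace-Context" against "x-cloud-trace-context".
      if (entry.getKey().equalsIgnoreCase(key)) {
        return entry.getValue();
      }
    }
    return null;
  }
}
```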
F
We have a way that instruments Spring, a way that instruments this and that, right? That's the direction we should go. So what I'm asking is: are we also saying, with this particular no-op thing, that we want a "hey, if you want to do trace propagation, or just context propagation, in a very minimal and highly performant way, we're going to provide you an entire instrumentation library to do that"? That's basically the question: is this just kind of...
A
Yeah, I guess I'm a little bit back to being confused as to why this is a propagator and not a pass-through implementation, because the propagator would be extracting your b3 trace context and all of that; it's just whether or not you're creating spans and changing the span IDs available, and things like that. Right?
E
Sure, but that pass-through implementation already exists: that's the default no-op specified. You've got a propagation context that gets extracted and just gets passed along. If they try to create a new span, that context gets copied into the new span, and it doesn't actually create one if there is no SDK installed, so I think that's status quo. The question is: is it possible to avoid all of the deserialization and reserialization of the context information in a propagator, yet still propagate that context?
F
No, I mean, yeah. So I think I agree with the general direction of: this is a "should", not a "must", if we do implement this. But I'm just nervous about the implications of what it brings downstream, that direction of "you know, OpenTelemetry propagation is too expensive to use; use this other thing instead." It's just adding some confusion, you know.
D
That sounds like a legitimate concern. Yeah, Ted, it looks like you're going to say something.
A
No, I mean, I kind of agree that there are edge cases and confusion that could come with this. What if someone installs this along with everything else, you know? What's expected to happen there, for example? But these seem more like ways users could misconfigure things, not some fundamental issue.
A
That's a bit different than the overhead of having to serialize and deserialize this stuff, the context. People caring about that overhead tends to come up in things like network proxies and stuff like that; it comes up less in applications, but I could see Java applications caring.
B
Hey, while we're on this topic, since this is Anuraag's issue: it would be really great if people could show up to the Asia-Pacific-friendly spec meeting this afternoon to talk about this with him, because basically no one ever shows up, and that's kind of a bummer for Anuraag, because he is interested in this stuff.
D
Okay. Next one is semantic conventions. Just a reminder: we have a few of these. If you're familiar with either of these two, Kubernetes or FaaS functions, please join; or, if you know somebody in your company who is very interested and has expertise and experience in this, please ask that person to come and help us verify that this is valid.
B
I think it's still a pretty significant issue that we need to resolve, with respect to span data modeling, especially with more and more instrumentation, and more and more complex instrumentation, being written.
B
So I added a comment this morning that I'm going to put in a PR to the spec to make it so that nested client and nested server spans, as long as they're all logically client and server, are 100 percent okay, and let people fight me on it. Because I think it's going to be really, really hard to both provide a rich tracing experience where people want it, and not have this nested client span issue, or nested server spans, especially with independent instrumentation libraries.
B
I think the super easy and obvious one is: if you're using Elasticsearch as your database, you have a client span, which is your kind of logical database operation, and then Elasticsearch under the hood uses some HTTP client that will potentially also generate client spans. Those two things are both logical clients, one's a database client and one's an HTTP client, and they're both there and they're both important. And this is the kind of... there are lots of other examples.
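A sketch of the Elasticsearch shape just described, using the standard Java tracer API; span names and attributes are illustrative:

```java
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.SpanKind;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

class NestedClientSpans {
  // A logical database CLIENT span with the transport's own HTTP CLIENT
  // span nested inside it: the nesting the proposed PR would bless.
  static void search(Tracer tracer) {
    Span dbSpan = tracer.spanBuilder("elasticsearch.query")
        .setSpanKind(SpanKind.CLIENT)
        .setAttribute("db.system", "elasticsearch")
        .startSpan();
    try (Scope ignored = dbSpan.makeCurrent()) {
      // Instrumentation for the underlying HTTP client produces its own
      // CLIENT span, parented to dbSpan.
      Span httpSpan = tracer.spanBuilder("POST /_search")
          .setSpanKind(SpanKind.CLIENT)
          .setAttribute("http.method", "POST")
          .startSpan();
      // ... send the request ...
      httpSpan.end();
    } finally {
      dbSpan.end();
    }
  }
}
```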
A
Conventions... we don't... I've been poking people about this, trying. Currently we just don't have people: there's some effort needed to go into the semantic conventions, do some research, and flesh out what we currently have. Areas that aren't really covered are aspects of span structure; for example, people who want to put timing events around TCP stuff on a span: how do they do that?
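One way to read that question, as a hedged sketch against the existing span event API; the event names and the attribute here are invented, which is precisely the gap in the conventions:

```java
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.trace.Span;

class TcpTimingEvents {
  // Hypothetical: attaching TCP timing to a span with the existing event
  // API. Nothing in the conventions today says what to name these.
  static void connect(Span span) {
    span.addEvent("tcp.connect.start");
    // ... open the socket ...
    span.addEvent("tcp.connect.end",
        Attributes.of(AttributeKey.stringKey("net.peer.name"), "example.com"));
  }
}
```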
A
There are often configuration options for at least some of them, like: I want to change which HTTP status codes count as an error, I want to change the operation name, those kinds of things. So I feel like there's just more work around fleshing out these conventions; there are still a lot of question marks there.
A
I think it should live in the conventions, and, for each convention type (HTTP, etc.), this stuff should just be spelled out in more detail.
A
So this is a thing, right? Right now our YAML file is pretty simple: it just describes attributes, which is a good, fine start; that's the part we know how to turn into something machine-readable. But for the rest of it, we just haven't gone this far with specifying semantic conventions.
A
When there are questions around span structure and some of these details, it just needs to be fleshed out. But in order to do that, someone needs to, at least on a convention-by-convention basis, kind of take ownership: "I'm going to dig into what's missing here."
B
But so, from my perspective, right now the first thing I'm going to do is put in a small PR that just updates the wishy-washy language that's currently in the spec around what a client and server span means. All right, start there, and then we can probably start tackling the details, because there's going to be complication: we can't just say HTTP, because some of it depends on who's providing HTTP. I guess database is really the case where it matters: what kind of database is it? Or messaging.
A
Yes, exactly. I mean, you've got several layers, right? You can have a database client of some kind, a logical client, using an HTTP client, logically, and under the hood that HTTP client is doing retries and redirects and whatnot. At some layer, maybe the HTTP layer, we can say all of that is events; but once you get to a database, that's the two-logical-layers case you're describing. But to really help the people who are writing that instrumentation...
A
I do think we need to, yeah, case by case, go in there and leave suggestions, and also double-check with instrumentation, to make sure these suggestions are implementable. So it requires a certain amount of people power to actually push it through, and I think that's just sort of the missing bit. We've all been heads-down on getting 1.0 clients out the door; there just hasn't been availability to work on instrumentation in general yet.
B
I'll put in the PR to update the language, and then maybe start creating some issues to track actually writing the semantic conventions around this stuff.
A
Great, yeah. And if people are interested, or feel like they have the time to take on some of this R&D work, it would be very helpful, especially given that, you know, we are going to be getting some resources to help write instrumentation.
C
I'm just curious what impact this might have for tracing backends; I'm just wondering if any of them make this assumption, I guess, that a request has, you know, a single client and server span, and whether it's going to be okay to make this change.
B
So, Matt, I would say: yes, they absolutely do, and they need to change. That's one of the big impetuses for this, from my perspective: Splunk's backend is making this assumption, and we need to change it, because if we don't, things are going to get broken and people are going to complain, and they aren't going to know why. Because this is going to happen: people can write instrumentation that has nested client spans themselves, manual instrumentation, and the backends have to be able to deal with it.
D
Okay, thanks for raising that. Okay, moving on: Josh Suereth, semantic convention stability, please.
F
Yeah. So, specifically, we took a dependency on semantic conventions for resources in our exporter, and they're not marked as stable, and I'd like to understand the process to get them marked as stable: kind of define that, figure out what we need to do and what process we want to set up, and kind of drive and push that, specifically around resources.
F
So, you know, if you don't know: Stackdriver relies heavily on this notion of monitored resource, and on accounting for that correctly, and I would love to get everything shored up in OpenTelemetry and then actually backport what we do to OpenCensus as well, to kind of get everything aligned. So I have some big grandiose plans, and we're going to do very minimal things; but we took a dependency on the current resource semantic conventions.
F
I know that, effectively, that's actually become a liability, because things change and they're not marked as stable, so the code has kind of shifted underneath us. So I'd like to get to the point where the semantic conventions are marked as stable, and where the libraries that expose the semantic convention constants are also marked as stable, so that we can start driving these things a little deeper.
F
So I'm just trying to ask: what's the current state? Where do we think, as a community, the status is on these things? Are we comfortable with the resource ones? Is it a good idea to push them, or are we still nervous and think that there needs to be more prototyping around them?
A
I really think it's two parts. One: I put a link to the schema work that has already been done; Tigran led that. I'm not sure... actually, a question: I don't believe this has worked its way into the spec yet. I'm curious if anyone knows: has this worked its way in?
A
As well? Awesome, so it's in there, but not released yet. So this would be step one, right? This is the ground-level work of making sure we actually add the necessary pieces so that we can start versioning these conventions in the future for resources. I think that's good enough.
A
It might get a little more confusing with some of these other conventions, where we're talking about nested spans, though. So, personally, I would love it if we took a more coherent approach to reviewing these conventions before we started marking them as stable. I know we all want them to be stable, but I feel like we haven't done the research and confirmation for that. Yeah.
F
I would contribute those and mark those as not alpha, as not experimental. I'd like it if the resources were stabilized, so that we're contributing a stable resource collector, as opposed to something where, you know, if it breaks, well... that's still possible at this point, right? But our exporters take a dependency on these, the current naming conventions of these semantic things. So, from the standpoint of starting small, I feel like resources might be the least contentious case of "how do we want to describe"
F
you know, a cloud thing. So I was hoping that this would be a good time to start pushing that through, and trying to understand what the path to stability looks like, in terms of being comfortable that we've nailed these things correctly. How do we build a group of people that we consider the experts, to say "here's what this needs to be; here are the people we feel confident have had enough eyes on it," that sort of thing?
F
So, if I were just to take, say, the Kubernetes one and the Google Cloud ones, as being, you know, near and dear to our hearts, where I feel like I could speak with any kind of authority on resource semantic conventions: if we were to just take those two, what would that process look like, beyond "we can wait for the 1.4 spec"?
F
We can make sure that we have a well-defined schema for SDKs, you know, with that format. What else do I need to do to kind of drive and push that?
A
So I think, with the release of this next spec, you will be able to start marking things as stable, because that groundwork was laid in with that OTEP.
A
That stuff has to go get implemented, though. I think there's some minimal amount of implementation needed in the collector, for example, to start doing these translations; I mean, technically, I think you could start doing them by just configuring processors.
A
So that's what comes to mind. The other thing is: do we want to start within the conventions themselves or not? They're all still marked as experimental, even in this next release of the spec.
A
So I think going through, section by section, and starting to mark conventions as stable, starting with the resource conventions; because I agree with you: even if those change, they are not going to have some fancy structure that couldn't be handled with this new telemetry schema concept. It just means that when we do change them going forward, we have to think about a way of recording the diffs. Right now we have a YAML file; how are we going to record the diffs?
A
I believe that was also in this OTEP. But maybe, Josh, if your team wants to take charge on this, because you really want to start getting some of these stable: just make sure people on your team are familiar with how that's supposed to work. Yeah, and once those missing pieces are in there, you can start marking these things as stable.
A
There we go, yeah. But I think it sounds like, in this version of the spec, these things will still be marked as experimental. So I think, this version, this month, we're going to release something where it's like "here's the baseline," and then your team has a month to get the ones that you think are reasonably safe marked as stable.
F
Going through the semantic conventions for resources, section by section: do you want us to take a particular currently defined section and just mark it as stable, with all the additional components, in a PR to the spec, and then we have a discussion around whether or not it should be stable? Is that the right way to approach this process?
A
I kind of think things should be marked as stable on a convention-by-convention basis (I don't know what term we use for this), like namespaces. So, for services: can we say the service namespace is now stable? Can we say, you know, the infrastructure namespace is now stable, the container namespace is now stable? It seems like you'd want to do those one by one, just as a kind of last-call thing. Yeah.
E
Do we have a definition of stability for these conventions? Because my understanding is that it's somewhat different from the spec, right? I mean, we don't expect the attribute key/value types that are being used to change; that is stable in some respect. What we expect may change is the names, or the expected values, of these conventions.
A
Right, and that's what the work Tigran was doing was about, right? If you do change these things in some way going forward, the data getting emitted now needs to have a schema number; so, like, OTLP needs to include the schema version, which matches the version of the spec that was implemented, and there is... if we do ever update these conventions...
A
I kind of feel like I would like to see a steel thread of that implemented somewhere before we start marking something as stable, for the first thing we mark as stable. Can we, as an exercise, imagine we go back and want to change it to service.nickname, I don't know; make some of these changes and just say: well, let's say we did that. How would that look, if we did do that? Do we actually have all these pieces together?
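For concreteness, a hedged sketch of how that exercise might be recorded in the schema-file format from the schemas OTEP; the exact keys and the 1.5.0 version number are assumptions taken from the OTEP draft, not from this meeting:

```yaml
file_format: 1.0.0
schema_url: https://opentelemetry.io/schemas/1.5.0
versions:
  1.5.0:
    resources:
      changes:
        # The thought-experiment rename: service.name -> service.nickname.
        # A schema-aware consumer could rewrite old data forward using this.
        - rename_attributes:
            attribute_map:
              service.name: service.nickname
  1.4.0: {}
```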
A
It's not that we ever want to break these things; but if we can prove that that works, that we've got that implemented, then we can feel safe about saying these things are stable, and take that seriously, and say stability means we're not going to break your dashboards.
E
Yes, but that leaves my question kind of still outstanding, though, right? What does it mean for these conventions to be stable? That sounds like it's more talking about the stability of the infrastructure for dealing with changes to the conventions. If we're saying we'll call these stable, but they may still change in the future, how is that different from their current state?
A
It's that we're saying the changes we are going to make to them going forward have to be compatible with changes that work with this schema rewriter that Tigran has added.
E
Yeah, right, right. I'm not positive about that, though, right? Because if you go back to one of the first issues listed on the agenda here, resource merge error: one of the errors that can come up is a schema version mismatch, at which point the SDK is supposed to emit an empty resource.
F
Fun. So there's also the issue of whether resources are additive, like with this merge, and how resource detectors work. The spec is loosey-goosey on this: either you have a resource detector that detects all possible attributes and is done, or you have resource detectors that are additive, and you can actually merge them together.
F
If you read the spec, it doesn't actually specify one or the other; both are allowed for SDKs, and both are implemented: I've seen both across two different SDKs. So I think that needs to get pushed on a little bit. From my standpoint, though: if I have an existing setup that works and is producing these labels, those labels remain for that specific instance.
F
So what I would suggest is: we mark resource labels as stable, saying they will never be removed, until we can prove we have a mechanism that allows them to change in that same scenario and we're comfortable with it; then we would allow that change to happen. But until then, we actually don't allow them to change, because we're not sure we can handle it, and I think that's okay: we prove that we can allow a change before we allow that change.
A
Yeah, I think I just want to see... I don't want that to be, like, a handshake we make at this meeting. We need to have written down in the spec how these things change, and, if you are going to change one for whatever reason, what work has to be done. Part of it is the approval process, right?
A
I think we need to be much more wary, now that we have stable parts of the spec, when people go in and start proposing to change anything in the spec marked as stable. I think there needs to be extra scrutiny there, including some amount of implementation work, etc., just to make doubly sure that this really is not going to create a problem somewhere. But also, I think the process should be written down.
F
That's fair, that's fair. So how about: I'll write down my thoughts. Thank you, everyone, for discussing this; I'll write down some of my thoughts and the things that I think need to happen. Yeah.
F
Yeah, but there are still pieces that are missing. Like, for example, if I'm writing an exporter and I depend on resource schema version X, how do I make sure that I always interact with the current resource at that schema version? Is that in the API yet? It's not, right? I can't say, like, "hey, give me resources in this version," and do the backport to get back to here, or do the forward port to get to this. That's not something I can do as an exporter.
A
Well, some way or another, this resource schema does have to get added in there, right? This whole mechanism depends on, yeah, something along the line knowing what the schema is for each resource. Are we going to say all the resources have to be at the same schema? That seems hard to do; and if not, how do we implement this? So, yeah.
F
I mean, the specification says they all have to be at the same schema right now, with how the merge works. And the second thing is: there's not, like, a way for me to depend on the version that I'm consuming at, right? I just get what I get, and I can read it and say "oh, I don't support this version, crap" and die. So I think there's a little bit in the spec that needs to get done around these schema URL things.
F
I think it's a good first step, but I don't think we're done. So I'll write down my thoughts around resources and what we need there, and then we can start implementing some of that. What I don't want to have happen, though, is that v1 gets delayed for all of that to exist, because I'd like to get the current labels that we rely on stable, so that we don't break them going forward, because I need these labels across OpenCensus and OpenTelemetry.
A
You could, if you want, go ahead and mark some of these things as stable in the spec. I feel like they would have to go along with some language in there saying we're not allowed to change these until this other work is finished. Right? Like, we should write that down in the spec too, if that's what you want to do.
F
Well, I'm not... it's more that I'm dealing with bugs around resources, and I want to take a dependency, and I don't want that dependency to shift underneath me, because I don't have time to deal with churn like that; it just means I'll have a bunch of bugs and I won't be able to do new work. So that's my concern here: I just don't want these things to change.
F
Sure, or we change our notion of what experimental means, where we start really scrutinizing changes to make sure they're absolutely necessary, right? Like, that's another thing that can happen. If I'm more assured that an experimental thing doesn't break every release, then this is less of a concern to me; but right now, taking a dependency on anything external has led to lots of churn, still.
E
Those are stable; they're rock solid. 1.5 might be entirely different, and how we deal with going from 1.4 to 1.5 is still an open question; but to the extent that 1.4 is the only thing that exists, that's stable, and those values won't change, the keys that they use won't change, the mechanism for interacting with them won't change.
A
Yeah, okay, sounds good. I agree: we should be careful even with things that are marked as experimental. In general, we've learned that thrash is bad; in the spec, thrash is painful for everyone. I've wanted to see the new work coming in marked not as experimental but as draft. The thing I noted is that all of the metrics API work is going in as experimental right now, and I would have preferred we added another layer called draft for stuff, which would mean, like, yeah:
A
"we really are thrashing around on this, and you shouldn't implement it right now unless you want to join in the thrash." But in general, for stuff marked as experimental in the spec, people should assume that it has been implemented across, you know, 20 million languages, and that if you break it, you're creating a lot of thrash and pain for people.
F
All right, sounds good. So I will write up my specific thoughts around consuming resources and schemas, and the notion of consuming a resource at a particular schema version (so, the usage of it, not necessarily the definition and production of it), and I'll send out probably an OTEP, and then maybe some PRs against the specification that are not reliant on the OTEP. Okay, thank you.
D
Yeah, perfect. Thank you so much for championing this one. Great, okay. Last item: we only have seven minutes. I don't know whether the metrics group is here; if you guys want to give an update, as you usually have been doing lately.
F
Yeah, I'll take one minute. So, the data model: we're marking it as stable. The protocol is already marked stable; the document, if you look at it, is marked as stable with a section that is experimental, and the experimental section is guidance around how to use the data model, not the data model itself. And then, going forward, there are a few concepts we're looking at next, for how to model things in a non-breaking way; for example, enum sets in Prometheus are one example.
F
We went with histograms that don't allow negative measurements, so we're looking to figure out how to support negative measurements going forward. There are a few things like that where, basically, we limited what we marked as stable, and then we're going to be adding on future capabilities over time. So that's kind of where the data model SIG is at. Yeah.
G
Okay. So, on the API side, we're doing the final cleanup before we release the experimental version; it's just things like removing the section with the warning that people shouldn't follow the spec. So we believe this API spec is ready, by end of today or so; there are just two cleanup PRs left. For the SDK part, we might need another month. And last week we also discussed that we want to be able to accelerate the API stability.
G
So the original plan was that by end of November we could call the API stable. Since people have better confidence, we'll try to see if
G
we can move the stability earlier by two or three months. And I will take the feedback after the experimental release, to see whether people are happy with the current one. So far there's progress on the C# side and the Python side, and I believe Josh from Lightstep will be able to pick up the Go side, probably later this month or next month; so that should give us enough experimental languages to see.
G
No. So, currently, even before experimental, there's a big banner saying "please don't look at this API spec; it's under heavy construction and everything is going to change." Now we're doing the cleanup, so we'll remove that banner and mark it as an official experimental thing, and see if we can get this release out, and the SDK will follow shortly after.
G
Perfect, thanks so much. Okay, great. So I expect that, by the end of this week, when you look at the metrics API spec, it won't have the banner saying you should just not work on this API.
G
The spec changes are just very small cleanups, so I want to see if it's possible that we can merge those PRs by end of today. One PR is just to remove the banner; the second PR is just to add the changelog, because...
G
Yeah, because we have a lot of changes. So, instead of making an individual changelog entry for every single PR, I talked with some TC members, and we agreed that I'll add a single line to the changelog saying "this is the metrics API experimental release," and then we'll put probably 20 PR numbers there.