From YouTube: 2022-10-17 meeting
Description
Open cncf-opentelemetry-meeting-3@cncf.io's Personal Meeting Room
B: Fine, I gotta catch up there; you're quite ahead of us.
C: It is on the meeting invite, yeah; it's attached to it. So if you look, you can see my fun calendar here. If you look here, there's the Semantic Conventions Working Group in the calendar; you can see it here now. I think I set up the calendar wrong, oops, because it somehow only invited Tigran, so I don't know what I did wrong there.
C: Favorite person. Oh okay, I do this thing where I have the calendars here. Yeah, don't look at my work calendar. See, if I do that, it looks like I'm actually an engineer, but when I do that, you can tell what I really am: just busy. Anyway, busy, yeah. Not all of those are real meetings; most of them are fake, you know.
C: Yeah, yeah, it's like "this hour, pay attention to this problem". Cool. Let's see, I was gonna give another minute or two for folks to join. This is going to be a bit different than the normal session of this meeting, because we're gonna actually cover how we want to run this meeting and what we want to do. Also, I'm getting attacked by cats right now.
C: What the heck; of course, as soon as I'm talking. So yeah, we're gonna cover the charter, major PM things, and kind of our major priorities: starting with where our project lives, how we want to do tracking, that kind of junk. And then eventually what I want to do is cover a standard meeting agenda, where we kind of have two focuses in this meeting: triaging incoming semantic-convention-related bugs as they come in, and covering telemetry definition, stability, and evolution, and the things we want to do there.
C: I have a bug for today's topic. And then, lastly, trying to build out a semantic convention process where we can make sure that experts who own semantic conventions drive them forward, and we have a process for how we will pull that into OTel: where and how approvals happen, that sort of thing, to make sure we're moving quickly and that the right people are making decisions. Sound cool?
C: Cool. The only thing I want to highlight from here is that there's a set of work streams that we identified: how to define telemetry, how to keep it stable, how to evolve it over time. I'm calling that the telemetry stability work stream. And then we have this notion of semantic conventions and how that interacts with the rest of everything, so making sure that we have a process in place so that experts can define those conventions and move them forward.
C: So that's kind of how we split the two things in terms of proposals and things to do. There's a whole bunch of work outlined here.
C: But what we're starting on is some of this evolution work, which is basically the stability work: outlining change compatibility specific to traces, metrics, and logs. We're going to start a little bit with metrics and some of the hot topics that I think have been debated back and forth, and try to make as much progress on those as possible.
C: Yeah, actually great that you're getting into what I want to talk about next, which was PM things. I think initially the major thing we need to do, our number one priority, is unblocking semantic convention progress. I think we can discuss priorities after that, but anything that's preventing us from being able to take, say, HTTP semantic conventions and say "we feel okay making this stable", that's our priority. And so then, underneath that, let's list out what we think is blocking making that stable.
C: Does that sound reasonable? Yep? Does anyone have anything else that they would say is as high or a higher priority?
C: Okay, cool, let's skip to priorities then.
C: Yeah, actually, the elephant in the room right now is: does telemetry schema usage help with stability? So we need to define what stable means across all the signals and metrics, and I think maybe what I wanted to do in this group was walk through, topic by topic, what changes we think are considered stable, what are considered unstable, and what kind of technical things we can put in place to help with stability.
C: The other thing I want to encourage us to think about is that actually bumping major versions is not as problematic as maybe we think.
C: So I want to talk about that in a little bit, but I agree, that's a priority: what does stable mean? And then, after we have that, we go through and check what exists. Ted, you have something here?
E: Yeah, just, having been working on this for a while: when it comes to declaring these different conventions stable, we want them to be stabilized on something that we think is best in class. And in order to do that, since our community does not necessarily have deep expertise in what it means to have best-in-class Redis conventions or SQL conventions, we need to get subject matter experts in to help us with this. And this is kind of a project management, I wouldn't say nightmare, but it's effort, because it means getting people in and organizing these groups to do this efficiently.
E: Yeah, we haven't really, for each class of convention... I personally would feel uncomfortable if we just slapped a stable label over what we currently have, even something as basic as HTTP. When we had experts come in and review it coherently, they, as a group, came up with a bunch of recommendations.
B: When we call something stable, at what granularity are we calling it stable? Are we saying, for example, "MySQL, this is now stable", or are we just trying to say that semantic conventions as a package are now stable? I don't think the second one makes as much sense, so I just wanna check.
C: The first, yeah; it should be the first. Here, let me go into this. So if you look at the specification, the specific thing we're trying to do: if we come into the spec, I think it's under trace semantic conventions, HTTP. What does it take to make this be called stable instead of experimental?
D: And also, this goes back to my question. Stable doesn't really mean you can't change anything there, that you can't fix anything there. That's not the case, because we have the notion of evolving schemas; we allow that. So stable really means that you can start using it and know that it will only evolve according to well-defined rules. It doesn't mean that if you don't like the name of an attribute, you can't change it. Some of the changes are allowed.
B: If we want to change something, we have to have the procedure in place, so that users know what might be involved in a major version upgrade or something, before, you know, we just drop it on them.
C: Yeah, and we have components here that do this. One thing I want to call out, and I want to step into it because I think it makes sense for me to call it out: if we look at this particular bug, "what constitutes a breaking change for metrics", I put together an unfinished document, which is how I'm thinking about the problem, and I wanted to raise it with this group to think about. So, basically, to define stability...
C: We actually focus on instability and how it impacts users. So we go through a change and we denote: is this change unstable, or breaking for a use case in some way? And I call out the important use cases that we're going to look at and how they break. Okay, so alerting, I think, is number one, right?
C: If we change a log, a trace, or a metric in a way that silently breaks your alert thresholds, I think that should be considered a breaking change. If it loudly breaks your alerting thresholds, in a way that you catch, that's probably... that might be okay, right? That's something for us to think about: what's the impact on this use case? But we have alerting, we have dashboarding.
C: You know, users want known queries that they run. There's exploration and common queries, where I want to be able to go filter through this data and look at it; does that break based on the change? Stream processing, this would be like alerting, aggregating; these are generally things that I would assume can be baked into the Collector, and Tigran brings a lot of expertise around these concepts here. And then the last thing is machine learning, and that's just a new and evolving area, as they call it.
C: So then, here's what I did in this document. Let me finish and let me link this, so you should all have it.
C: You should all have read access to this; if you want write access to it, happy to give that to you as well. The first thing we look at is alerting with metric attribute changes. So, effectively...
C: What this is calling out here is: in a lot of logging and tracing systems, when you write a query that does alerts, that query explicitly defines what attributes you look for and implicitly aggregates away everything else.
C: So if you add an attribute to a log or a trace, you actually don't tend to break alerts, because they're implicitly aggregated away; that's just inherent in how that works in most systems. For metrics, though, you have this issue where, if I add an attribute, that attribute creates a new set of time series. I could have a metric that was on the verge of alerting, but then I publish my new version, which splits into three time series because of this new attribute, and suddenly my threshold is wrong, and that's a silent failure. Today, if you look at the definition of stability that OTel has, it does not allow attribute changes on metrics.
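The breakage being described can be sketched in a few lines. This is illustrative only; the metric name, attribute names, values, and threshold are all made up, and the "alert" is a naive per-series threshold check of the kind the discussion assumes:

```python
# Sketch of the silent alert breakage scenario: adding an attribute splits
# one time series into several, so a per-series threshold stops firing.

# v1 of the instrumentation: one time series for the (invented) metric.
v1_series = {("http.server.request.count", ()): 95}

# v2 adds an "http.route" attribute, splitting the same traffic into three
# time series whose individual values sit far below the old total.
v2_series = {
    ("http.server.request.count", (("http.route", "/users"),)): 40,
    ("http.server.request.count", (("http.route", "/orders"),)): 35,
    ("http.server.request.count", (("http.route", "/health"),)): 20,
}

THRESHOLD = 90  # tuned against the single v1 series

def alert_fires(series: dict) -> bool:
    """Naive alert: fires if any raw series crosses the threshold."""
    return any(value > THRESHOLD for value in series.values())

print(alert_fires(v1_series))  # True: 95 > 90, fires as intended
print(alert_fires(v2_series))  # False: no single series exceeds 90 -> silent miss

# An alert that explicitly aggregates away the new attribute is unaffected:
print(sum(v2_series.values()) > THRESHOLD)  # True: 40 + 35 + 20 = 95
```

The last line is the "explicit group-by" escape hatch mentioned later in the discussion: if the query aggregates over the new attribute, the signal is unchanged.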
C: It does not allow adding attributes, yeah. What I would actually posit is: understanding that this is the fundamental reason why metrics break, why alerts break, and why thresholding matters, we could actually change that restriction. If you add an attribute that doesn't impact the number of time series, that's actually a totally allowable change for a metric, because fundamentally it will not break alerts and it should not break dashboards.
C: If you change an attribute in a way that fragments the time series differently, that shards them differently, you could potentially break alerts, because the thresholding is wrong or the query is wrong. And this is because in metrics, generally, you have to explicitly do group-bys; everything is implicit by name in most metric systems.
D: I guess, yeah, we'll need to think about some counterexamples. I can't at the moment. I guess you're right, maybe, but we need to think it through.
A: Josh, I could just comment on the counterexamples briefly. I mean, we had to define semantics for those metrics, right? So, as you said, the implicit assumption is that you get things aggregated away. In the model where monitoring a time series means literally plotting its value, then you're not going to see those other time series that get created. But in a model where your query has a well-defined behavior for aggregating away the constituent time series...
A
The
value
doesn't
actually
change
and
I
want
to
question
whether
there's
a
future
where
the
addition
of
metric
attributes
is
not
breaking,
because
we've
defined
aggregate
a
way
to
do
the
right
thing.
This
would
allow
you
to
turn
on
different
kind
of
resolution
of
instrumentation
at
runtime.
If
you
have
the
sort
of
high
resolution
instrumentation
turned
on,
you
must
aggregate
a
way
to
get
back
to
those
old
time
series,
but
it's
well
defined.
It's
dramatically.
A
Meaningful
I
agree
with
you
that
probably
the
world
we're
in
today
can't
allow
these
attribute
change
attributes
to
be
introduced.
But
in
you
know,
in
the
long
term
that
was
the
whole
goal
of
the
semantic
definitions,
I
think.
C
I
I
think
you're
you're
walking
into
the
point
that
I
think
is
super
valuable
here.
So
today
those
aggregations
are
not
common,
they
can
be
used,
but
they
don't
have
to
be,
but
they're,
not
common
enough.
That
I
think
we
would
break
users
significantly
if
we
did
this.
Similarly,
we
actually
have
a
timeline
before
Telemetry
schema
is
enforced
and
it
starts
next
year.
C: Right, because I think the worst-case example of breakage, by the way, is machine learning.
C: However, there's no guarantee that someone does that. And so we could provide tooling, for example, that would go update all of your ground-truth examples, your training data, based on telemetry schema versions, and keep multiple copies for the different versions, that sort of thing. That's tooling that we know how to build; we know we could do it. Do we see that as something that people will do over time? You know, it comes down to, I think, there's going to be: here's...
C
What
open
Telemetry
considers
stable
here's
a
set
of
best
practices
that
let
you
interact
with
that
stability
right.
So,
if
you're,
if
you're
going
to
build
an
ml
model,
include
version
number
and
instrumentation
library
in
that
model,
because
you
will
find
that
as
versions
change,
some
of
the
Telemetry
changes,
but
your
model
should
be
resilient
to
it.
If
you
account
for
it
yeah.
C
That
means
you
have
a
bigger
kernel
great
for
this
alerting
use
case
right,
there's
a
question,
then
of
do
we
say
you
should
use
Telemetry
schemas
here
or
not
right
like
you
need,
we
recommend
a
back
end
with
laundry
schemas
or
we
could
have
a
say
any
alert
that
you
write,
open,
Telemetry,
recommends
always
doing
manual
aggregation
of
your
metric
streams
so
specifically,
including
which
attributes
are
included
in
that
alert
that
that
second
thing
I
think,
is
a
high
friction.
It's
it's
possible
to
do
today,
but
I
think
it's
a
high
friction
point
for
users.
C
We
all
have
access
to
this.
This
notes
document
right
so,
let's
say
to
find
stability,
so
no
assistant.
C
Sorry,
how
do
I
want
to
phrase
this
in
terms
of
scope
of
what
what
we
would
consider
breaking
right
so
build
as
exists
today,
the
standard
inventions
do
not
break
without
a
major
version
of
promotion
right
tools
as
exist
today,
with
help
from
something
with
your
schema,
do
not
break
without
a
major
version
bump.
B
With
the
Health
be
transparent,
or
would
it
be
like
an
active
help.
B
I
like
if
someone
has
in
their
CI
CD
process
like
just
like
Peg
to
a
major
version
of
ours,
and
they
just
pull
our
change
in
like.
Are
we
saying
that
like
they
might
have
to
do
something
without
a
change
in
that
MV,
in
order
to
get
it
to
continue
working?
Or
are
we
saying
that
we
might
have
like
some
tooling
that's
transparent
when
they
pull
the
new
change
in
the.
C: The idea here, yeah, is that we would have tooling. So, in the OTel Collector, you would have a processor that says "here's the version of this telemetry that I want", and then, if it ingests a different version, it would look at the telemetry schema for that new version and downgrade it to the previous one, because these are backward and forward compatible.
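The Collector processor being discussed might look something like this in a pipeline config. This is a sketch of the idea only: a "schema" processor with a target schema URL was the proposal under discussion, not a shipped component, so the processor name and its keys are assumptions; the receiver/exporter/pipeline layout follows the standard Collector config shape.

```yaml
# Hypothetical OTel Collector pipeline with a schema-translation processor.
# The "schema" processor and its "target" option are illustrative, matching
# the idea described above rather than a released component.
receivers:
  otlp:
    protocols:
      grpc:

processors:
  schema:
    # Pin telemetry passing through to this schema version; data arriving
    # with a newer schema_url would be down-converted using the
    # transformations published in its schema file.
    target: https://opentelemetry.io/schemas/1.9.0

exporters:
  otlp:
    endpoint: backend.example.com:4317

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [schema]
      exporters: [otlp]
```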
C
That
was
the
idea
behind
this.
Telemetry
schema
thing,
so
this
would
be
if
we,
if
we
know
that
we
can
insert
a
man
in
the
middle
to
do
conversions
for
us
right
to
ease
that
transition.
Does
that
make
sense
then
to
allow
these
transformations
and
then
the
last
thing
is:
what
can
you
force
onto
I'm,
going
to
say,
storage
vendors
to
help
with
this
to
help
transition
Telemetry?
So,
for
example,
can
we
get
storage
vendors
to
actually
adopt
it's?
C: Is that a thing we want to push for? And I think there's a spectrum here: at one end, vendors actually handle this problem for us with various tools.
C
We
just
assume
nothing
on
our
end
and
stability
is
whatever
exists
today
and
then
there's
this
world
in
the
middle
of
what
kind
of
tooling
can
we
provide
to
help
alleviate?
You
know
stability.
Friction
right.
E
I
suspect
there
are
a
lot
of
devils
in
the
details
here,
yeah
in
particular
around
anything
that
involves
you
know
a
tool
chain
of
schema
version:
conversion,
I'm,
I'm,
bullish
on
that
stuff,
but.
E
People
like
practical
deployments,
you're
gonna,
have
a
bunch
of
different
dashboards
that
are
running
at
different
versions.
You're
gonna
have
rolling
updates.
You
know
how?
How
do
you
even
know
what
version
number
alert
is
should
be
pegged
to
I?
Just
wonder
to
what
degree
we
need
to
to
exercise
some
of
this
stuff
to
actually
understand
like
where
the
the
Practical
difficulties
are.
C: Yeah, it's a good point. Can I throw out my proposal?
D: It complicates things, in my opinion, because you're removing the tool that allows you to deal more easily with the changes. When you remove that tool, you're raising the bar for declaring something stable, because it's impossible to change after it's declared stable, and because of that, now you have to do a better job the first time. When you introduce the semantic conventions and when you stamp them as stable, you have to be sure they are the right ones. If there are telemetry schemas in the picture, it lowers the bar.
C: Actually, I'm suggesting something subtly different. Okay, so my opinion is: (a) we're never going to get it right to start with; I don't have that hubris. We're gonna have to change this. But I'm suggesting that we don't balk at major version bumps to make changes, and we still include telemetry schemas out of the box to make that major version bump easier to deal with, but users will get a clear signal that something has changed that could break their alerts.
D: They do get that clear signal today; it doesn't have to be a major version. When you change the minor version, in the schema files we can record the fact that a change is unconvertible, that the change in the schema is impossible to express unambiguously and implement automatically. Whether it is imported as a minor or major version, I think, is secondary. And yes, we can do that by decoupling the version of the semantic conventions and schemas from the spec version; I don't think it's absolutely necessary.
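Schema files record exactly these kinds of convertible changes. A minimal sketch of one, following the published telemetry schema file layout (the metric names and URL here are invented for illustration):

```yaml
# Minimal telemetry schema file sketch (metric names and URL invented).
# Each version section lists the transformations needed to convert
# telemetry from the previous version; a change that cannot be expressed
# as one of these transformations is what's called "unconvertible" above.
file_format: 1.1.0
schema_url: https://example.com/schemas/1.1.0
versions:
  1.1.0:
    metrics:
      changes:
        - rename_metrics:
            http.server.duration: http.server.request.duration
  1.0.0:
```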
B: I have a little bit of confusion. I mentioned earlier in this meeting that I think I gave a bad example about the granularity that the major versions and all that stuff actually apply to. So, just looking at the GitHub repo right now: we have, you know, traces are experimental, and HTTP is experimental. Are we saying that... what about the actual attribute level? I don't see it at the attribute level, for a given, say, HTTP spec, saying something is stable or not.
B
This
is
an
idea
I'm
posing
here,
but,
like
maybe
it'd,
be
easier
for
adding
new
functionality
to
a
given
like
https
spec,
or
something
like
that.
If
we
were
to
do
it
at
the
actual
attribute
level,
so
that
way
like
yeah
I'm,
guessing
things
like
the
name
of
something
won't
change,
but
like
sorry,
you
can't
do
it
at.
C: We also have the tracking we have in place for OTel, so kudos to Tigran for getting this in there early, for 1.0: we can track at, like, a tracer level.
C: The instrumentation that's provided can give a version and a URL that defines migrations. So at a tracer level, or a meter level, or a log exporter level, we can actually understand what we expect to come out, and so, if you wanted to do transformations between versions, we actually have that hook in there in the spec. But it's at that macro level, just like Ted is saying, yeah.
E: And, hopefully not to overcomplicate things, but to address your point, Tigran, about what's simpler: I'm a little bit in favor of Josh's proposal, in the sense that I feel like it's easier to understand.
E
If
you
don't
have
any
kind
of
pipeline
doing
Transformations,
it
is
simpler
to
understand
what
constitutes
a
breaking
change
or
not
and
I
think
it
might
be
wise
as
far
as
bumping
versions
and
things
once
we
figure
that
out
to
to
stick
with
that
as
our
definition
until
we've
actually
exercised
these
pipelines
and
processors
for
a
while
to
understand
what
what
they
can
or
cannot
do.
E: I guess what I'm saying is: maybe it would be wise to attempt it. Like, we shouldn't be making changes without including a pipeline processor or something that we believe would fix it, to reduce the pain on our end users. To me, it's just a question of how much we trust our lack of experience when it comes to saying "we're putting a guarantee that this is not actually going to break something for you".
C: That's actually kind of what I'm suggesting here. I've been trying to think about the user experience of using these schema processors. Like, let's look at an example: allowing metric name changes. What does that actually look like to a user? Where does that sit in their architecture, and how do we prevent breakage for them when they adopt it? So let's pretend the schema processor is in the OTel Collector, so we have this man in the middle.
C: It could be in there, but until back ends adopt the telemetry schema URL, again, you still have the problem. And, correct, there's also a query-time problem: what does that back-end experience look like? Do I need to specify at query time what version I'm using? If that's the case, now we're complicating the actual usage of the telemetry as well.
D
I
think
it's
inevitable
any
consumer
of
telemetry
if
we're
using
the
Telemetry
schemas,
has
to
know
what
the
schemas
are
and
and
use
them.
When
you're
doing
the
querying.
When
you're
reading
the
data
when
you're
reading
Telemetry
that
is
produced
somewhere,
you
have
to
know
what
schema
it
is
produced
with
and
what
is
your
intended
interpretation
of
that
data
you're
looking
at
it
through
the
lens
of
the
schema
that
you
understand,
but
it
is
produced
by
a
different
producer
right,
which
uses
a
different
schema
you're
right
that
it
complicates
it.
D: You don't have to, if the tool is built right. If whatever your query builder is doing, it knows what data you're querying, it will associate your alert with the right schema. It should be possible: I'm querying data, building an alert based on my query; I can embed the schema URL together with the query, and it doesn't have to be a manual process. We don't have to force the user to do that.
C: I'm suggesting that we still leverage the telemetry schema to define changes, but what I'm suggesting is: when we define what stability means, we do that in the absence of telemetry schemas adapting to make more things allowable. We define it with what we would say is common usage today and commonly broken use cases. And this does mean that we fully expect to be bumping major version numbers when we need to make changes. This means that we're going to be doing things like: we don't allow metric renames.
C: Instead, we will double-write two metrics, the new version and the old version, for some time, to give users the ability to adapt, as part of this. And yeah, we're going to get the first version not perfect, but stability is a feature; users want stability, so we're going to be very clear.
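The double-write idea can be sketched as follows, assuming a hypothetical rename that is being phased in; both metric names and the toy registry are made up:

```python
# Illustrative double-write during a metric rename: instrumentation records
# the same value under both the old and the new metric name until the
# deprecation window closes, so consumers keyed to either name keep working.

class MetricRegistry:
    """Toy stand-in for a meter: stores the latest value per metric name."""
    def __init__(self):
        self.values = {}

    def record(self, name: str, value: float):
        self.values[name] = value

# Rename in flight: the old name is kept alive alongside the new one.
OLD_NAME = "http.server.duration"          # pre-rename convention (invented)
NEW_NAME = "http.server.request.duration"  # post-rename convention (invented)

def record_request_duration(registry: MetricRegistry, seconds: float):
    # Double-write: existing dashboards and alerts keyed to OLD_NAME keep
    # working while consumers migrate to NEW_NAME.
    registry.record(NEW_NAME, seconds)
    registry.record(OLD_NAME, seconds)

reg = MetricRegistry()
record_request_duration(reg, 0.125)
print(reg.values[OLD_NAME], reg.values[NEW_NAME])  # 0.125 0.125
```

The cost of this approach, raised just below, is that the telemetry is published twice for the duration of the changeover.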
C: We're going to make sure that the version number we give them matches the stability guarantee without schemas. We're going to push hard on schemas to start solving that problem over time and to alleviate the friction of adopting major version numbers. And at some point, if we're convinced that telemetry schemas have won, and that they're embedded into back ends and engines in a way that solves this need, we can change the definition of stability to be less restrictive over time. It's easy to go less restrictive; it's harder to go more restrictive.
B: So, for the process there of writing two metrics at the same time: I mean, I like that for a changeover in general, but would that be configurable for the user? So that they can say, "hey, I'm happy with the new metrics, I'm going to start using that, don't write double".
E: Yeah, because a thing that's tricky, I guess, is: how do you configure OpenTelemetry instrumentation? We don't currently have a coherent mechanism. We don't yet even have a configuration file for OpenTelemetry SDKs, and then all the instrumentation packages are individual. So that's maybe another work stream for us: to figure out how the heck configuration is supposed to work here.
B
Just
thinking
like,
if
we
don't
have
that
in
place
like
we
mentioned
machine
learning
before
right
or
like
even
just
like
writing
a
cloud
watch
or
something
you
if
you're
writing
two
attributes
you're,
essentially
changing
the
shape
of
the
metric
and
it
won't.
You
might
not
be
able
to
process
it
in
the
same
way.
I'm
not
sure
if,
like
there's
a
little
schema
processor
that
get
rid
of
that
or
not,
but
so.
C: ...because you're double-publishing. But if you went from 1.1 to 2.0, then you can just rename, and that's totally fine, and the schema URL will say "this metric was renamed to this other one" in 2.0. And so the schema URL would allow you to use the 1.0 and the 2.0 together, because you can use the schema URL to rename the metrics; however, you need to have them both line up. But from the user's standpoint, you know, we get to make that decision: when we go from 1.0 to 1.1...
C
We
would
double
write
metrics
if
we
have
to.
If
we
go
from
1.0
to
2.0,
we
just
rename
the
metric
right,
and
so
yes
there's
more
friction
to
go
with
major
versions,
but
what
I'm
suggesting
is
I
want
us
to
maybe
aggressively
do
that
more
so
than
we
had
previously.
You
know:
hotel's
been
very
conservative
around
that
I
think
we
should
aggressively,
say
here's
the
stable
version,
here's
how
long
this
lasts
and
let's
aggressively
go
to
like
2.0
and
3.0
to
clean
up
things
as
we
learn
here.
A: I had a question: do we think that the Collector, you know, this man in the middle, solves the issue? And, if so, have we decided that it would be prohibitively costly to add that in at the SDK level?
E: I think it's just engineering work. You'd have to duplicate this concept N times, and we haven't proposed shoving that version burden on every implementation working group.
A: I was wondering, even in the Collector case, how, like, you have to specify somewhere the version you want to normalize to, yeah.
D: It's not locked in forever; you upgrade in a controllable way, right? You decide when you upgrade, not when every single freaking application updates the version of the SDK. You have a centralized place to control the version number of all of the telemetry that your back end receives, and that's the man in the middle, the Collector. And when you're ready, you can do that maintenance: you increase the version number in the Collector, and you observe your back end to see if any alerts are broken.
D: In the SDK space, you could do the same thing, if you have remote configuration capabilities, centralized, with mass deployment of that configuration to all of the SDKs, right? And that's one of the things that we have as an open issue there, yeah.
E: For practical purposes, it's totally possible to do this at the SDK level. We need to figure out how SDK configuration works in a way that's a little bit better than the way it currently works; this is also instrumentation configuration. There's just a bunch of open questions about how this whole pipeline is supposed to work at all, and I would love to see that figured out. Well, we just have the one...
E: ...that we're maintaining in the Collector, and once we're satisfied with how this actually works in the real world, then that would probably be a more reasonable time to go to the different working groups and say "you might want to consider implementing this at this point". But I wouldn't want to thrash all those people by telling them to go run off and implement the current thing that's in the Collector.
C: So what are we going to account for in stability? Do we want to account for something that alleviates this friction, or ask users to lock to a version somewhere and then have this conversion happen with the man in the middle? Fundamentally, I want to ask that question, because I think... let's go with option...
C
One
of
we
do
not
assume
that
users
have
that
access
to
that
with
how
we
Define
stability,
and
we
will
bump
major
versions
on
on
breaking
changes
and
that's
how
we'll
Define,
what
a
breaking
change
is
and
we'll
do
our
best
in
this
world
to
also
provide
tooling
to
make
that
situation
better
over
time
and
possibly
alleviate
that
restriction.
This
does
not
mean
we
don't
invest
in
Telemetry
schema
URL.
We
still
do.
We
just
are
conservative,
with
our
current
definition
of
stability.
C
Option
number
one
option:
number
two:
we
invest
more
in
Telemetry
schemas
in
some
fashion,
till
we're
confident
that
the
use
case
is
solved
in
a
way
that
users
will
know
what
the
hell
is
going
on
and
we'd
leverage
that,
in
our
notion
of
stability,
right
I,
think
those
are
kind
of
the
two
options
on
the
table
here.
A: Yes, me too. I was going to ask this before, but I think it was brought up: should we consider more the people that don't have a collector, or can't have or don't want to have a collector? Because we already ran into this case multiple times: we have customers that say "no, I don't want to have a collector, I can't have a collector, I won't have a collector". So for those cases, the back end actually has to implement the telemetry transformation, and then there are all these problems...
A: ...like with queries: how do I run my query? Do I select the telemetry version? Or, if I don't select the telemetry version, does the back end select it for me? Is it the latest? Is it not? All that stuff.
C: I definitely agree with that. That's, again, why I'm making this proposal. Effectively, we would define stability without taking into account the telemetry schema URL. So we would just look at common use cases and try to understand if we're breaking them. The idea would be: we go through all possible telemetry schema changes, renames, refactors, that sort of thing, and we look at each through this use case and these lenses and say: is it okay?
C
If,
if
we
think
that
that's
going
to
break
alerting
we'd
say
no,
this
is
not
a
stable
change,
so
in
this
case
my
proposal
for
the
specific
thing
that
we're
showing
here
is,
if
the
time
series
fragments
right,
we
Mark
that
as
an
unstable
change
that
the
time
series
doesn't
fragment,
we
mark
it
as
a
stable
change
and
that's
our
here's.
What
stability
means
for
attributes
and
metrics.
D: Sure, I don't mind that.
C: Yeah, okay, cool. So we're down to 10 minutes. Going forward, what I'd like to do is: we have a project status tracker, right? We have some issues, and we have this notion of defining what constitutes a breaking change for metrics, where I have that document about adding attributes. What I was hoping to do was, within this working group, split out a couple of tasks, so someone can take, say, logs, someone can take traces, someone can take metrics, and someone can take... I think resources also matter.
C: We can open an individual issue for each one and have ownership of all of them. And then, if you look in the document that I tried to put together here, it's basically: we have to talk about, all right, what changes are allowed to names or attributes of metrics, or types, and all that kind of stuff; for logs, what changes are allowed to attributes or body; for traces, names, events, attributes, and status. You know, there are more things that we have to cover, but basically: take that signal...
C: ...and make a proposal for what changes we would consider stable and what changes we consider unstable. You can leverage the existing documentation Tigran wrote here, which I think covers a good bit, but I want to be very explicit in this thing, saying: okay, if I change the name, that's a breaking change; if I change the unit, that's a breaking change.
D: You're saying: let's figure out how we live in a world where there are telemetry schemas and schema files, but there aren't any tools that use those schemas to do any sort of automatic transformations or anything like that. The schema files are there, the schemas as a concept exist, but we don't have tooling that uses them. That's what you're suggesting: how do we live with that? That's what you're asking.
D
And
you're
saying
having
the
schemas
is
an
advantage
compared
to
the
situation
when
there
are
no
schemas
at
all
right.
Yes,
so
they
somehow
help
us,
but
without
having
any
or
any
automated
tooling
that
actually
uses
those
schemas
and
and
I
agree
with
you.
Maybe
that's
an
advantage,
but
I
I
don't
quite
see
the
full
picture
at
the
moment.
C
Okay,
okay,
anyway,
that
said:
does
this
sound
like
a
reasonable
next
set
of
tasks
for
Folk?
C
Okay?
So
if,
if
that
makes
sense,
can
I
get
someone
to
sign
up
to
say
look
into
what
constitutes
healthy
changes
for
logs
hold
on?
Let
me
come
back
here.
C
Do
you
want
a
bug
to
reference
for
this
sure,
all
right
I
will
open
that
after
I
do
this,
it
might
be
opened
after
that,
this
thing
is
closed.
Did
someone
want
to
look
at
this?
For
logs
resources
is
going
to
be
the
most
fun
one
by.
C: ...the way, so we can defer that. I was thinking maybe we focus first on metrics, traces, and logs, and use what we've learned to define resources, since resources impact all three. Is anyone interested in defining what a breaking change to logs would mean in this world?
D: It's going to be very similar, surely very similar. I can't help with writing any of these, but I commit to reading and reviewing very carefully.
C: Okay. And yeah, honestly, I think getting these two done will solve a lot of issues. And for logs, maybe, if no one has time, that's something I can take on after I get metrics done. I'm not gonna be able to do both before our next... well, actually, we have two weeks; I might be able to do both, so we'll see. We'll leave it as an open to-do, but we do need the logging bit in order to then have discussions with Elastic Common Schema around what that means for semantic conventions.
C: That's a topic for going forward. Okay, so I think we have some next steps for telemetry definition, stability, and evolution. We want to talk about the schema processor here, around the semantic convention process and topics. I was thinking to not kick off the semantic convention process yet; you know, we mentioned this a lot at the beginning of this meeting, and I was thinking, let's not kick this off until we have a stability definition we're comfortable with as a working group. Yeah, the process is...
C: ...yeah, yeah, exactly. I don't want to have a process for people to define stable things where we just reject all proposals because we don't know what stable means. So, since it's completely blocked on this, let's get stability down first, and then let's unblock the process. Does that sound reasonable to everybody? Yeah?
C: Okay! Thank you all for your time. This is on a bi-weekly cadence, a bi-weekly meeting, every two weeks, since English is ambiguous in that sense. If we need to meet sooner, let me know and I will set it up. We do have the chat room, so we can continue the discussion in chat if there's anything that we felt we didn't have time to account for; I highly recommend running a doc, or throwing things in chat, that sort of thing. I'll put my links in chat as well. Thanks, everybody, thanks.