From YouTube: 2021-04-16 meeting
B
So, this other one is asking for reverting the default behavior, because, like John mentioned for the Java SDK, this means a breaking change. This is a PR sent by Carlos last month, and the behavior changes for the tracer name: if it's not there, then instead of returning a null we return a predefined string.
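The fallback behavior under discussion can be sketched roughly like this. This is a hypothetical illustration; `get_tracer`, `Tracer`, and the `unknown_service` default are illustrative names, not the spec's actual wording:

```python
# Hypothetical sketch of the tracer-name fallback change discussed above.
# The default string and the names are illustrative, not the spec's.

DEFAULT_TRACER_NAME = "unknown_service"  # a predefined string instead of null

class Tracer:
    def __init__(self, name):
        self.name = name

def get_tracer(name=None):
    # Old behavior: a missing name could surface as null downstream.
    # New behavior: substitute a predefined default string.
    if not name:
        return Tracer(DEFAULT_TRACER_NAME)
    return Tracer(name)
```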
B
What are the things we need to check? Because in this case it seems like multiple client maintainers reviewed it and believed it was okay for them, but there was no Java maintainer in the PR review. I want to avoid the situation where, for example, we release a 1.2 version of the spec, several SDKs implement it, and then, like John mentioned, someone says "oh, it's a breaking change for me."
B
It's about whether this is considered something we can change at all. If we specify that, say, when Carlos is going to send a PR adding an optional parameter, then we know we don't need to get approval from all the maintainers, because that's not practical. But if it's an occasional case where we want to change behavior (like previously it just returned a null string, and now we want people to return something like a literal "invalid tracer" or a default tracer), then we might have the idea: oh, this is not something in the safe zone.
A
I think you're correct that additive changes are, generally speaking, okay. I don't think we can make a hard rule that it's automatically okay to add an option or an additional method to a class (to an interface, I should say), but generally speaking we want to say that's fine; we'd still have to double-check. But you're right: any change that's a mutation of existing behavior needs to, really, by default...
A
The answer needs to be no, essentially, and you have to actually do work to prove it's not going to be a problem anywhere. Now, it could also be that in this particular case there just needs to be an addendum around null. I don't know this particular issue well, whether null is just some side effect of this change and Java needs to be told how to manage the change without worrying about null.
B
Sorry, I thought the call would start later. Yeah, we were talking about this: there are two issues since last week, and one is asking to revert the PR that you made to change the default tracer fallback behavior. John mentioned it's a breaking change for Java.
B
So I asked the TC. I'm concerned if, for example, that PR is already part of the 1.2 spec release, which I believe it is. If that's the case, you can imagine: Python releases a new stable version supporting it, and now we're trying to revert it for Java; then Python might get stuck, because it already agreed to it. So I was asking if it's possible that we have some categories.
B
For example, we could say that if we add a new method to an existing class, or a new optional parameter to an existing method, then we don't need acknowledgment from all the language maintainers; but in this case we're changing existing behavior. Although we got a blessing from, like, three spec approvers, or three language SIGs, that doesn't mean it would work for all the other language SIGs. We'll probably need to bring this up during the spec or maintainers meeting to get more confidence.
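The category being described, additive and usually-safe changes versus behavior mutations, can be illustrated with a sketch; the `Meter`/`create_counter` names are hypothetical stand-ins, not any SDK's real API:

```python
# Illustrative only: an added *optional* parameter keeps old call sites
# working, which is why such changes are usually treated as non-breaking.

class Meter:
    # v1 signature was create_counter(name); v2 adds an optional parameter.
    def create_counter(self, name, unit=None):
        return {"name": name, "unit": unit}

m = Meter()
old_style = m.create_counter("requests")          # v1 call site, unchanged
new_style = m.create_counter("bytes", unit="By")  # opts into the new parameter
```

Contrast this with mutating what an existing call returns, which can break every caller relying on the old behavior.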
C
Yes, I agree with that, and this got me thinking: doing a release monthly seems too much to me at this point, because you may end up mixing small changes, like semantic convention additions, with changes like this, and you don't have enough time in one month to test with enough SIGs.
C
You're not allowing them to validate changes. I guess that's one of the things I would like to say, and that's why I would like to have a release every three months, and kind of wait a couple of weeks so we know that languages have implemented it in a branch or something, so we know that it's working.
B
Yeah, then we'd have some parallel release. I know .NET is doing that, and it requires actual work from them and from the maintainers, like what the .NET SIG is doing: the metrics work is happening in a separate branch and is released as a preview version, while the main branch is released as a stable version. So basically the work doubled, and you have to manage which work should go to this branch versus the other.
C
Okay, yeah, sorry. I meant to say that probably something we could do is say: hey, this specific feature, this specific branch... sorry, this specific PR looks good, it has enough approvals, but we cannot merge it until we have validated it with enough SIGs. So we don't have to keep it in a different... well, we could keep it in the author's repo, you know, so it's not an experimental branch or something in the actual main repo.
C
That's the alternative: we hold it and say, you know, this is approved and everything, but for some specific changes it's required that, besides approvals, at least four languages, or something like that, have validated it; or at least one dynamically and one statically typed language. Something like that, and we keep the monthly releases.
A
I'm a little confused, honestly, about this particular issue, but maybe that's separate. Just to get back to releasing...
A
One thing I worry about in general, regardless of how long the release cycle is, is incremental or partial changes to the spec. It feels to me that when something goes into the spec it should be complete. In other words, the spec should not ever get into an invalid state, even over the course of its development, even between releases; it should never be in a state where we say:
A
"We couldn't release the spec right now, because we have half a PR added to it." As long as that is a rule we adhere to, then it feels like we don't necessarily need branches or anything like that to manage the spec.
A
You can only say "we'll figure something out later" if it really is not going to affect what's currently going into the spec. So that just means, you know, PRs might have to hang around for longer as PRs, not that we maintain different branches. And if we do that, then we can release the spec once a month.
A
No, I think you need to have a real concept of a main branch that's the unreleased version, and all you do is move forward with that. Because here's the reality: once something's added to the spec, we don't have a process with a spec pre-release, a spec release candidate, where we say: hey, this is released now.
A
Maintainers, go have a look at what we did and find out if this is going to break you or not. We don't have that concept, which means that realistically, once something is added to the spec, it's effectively released.
B
Does that make sense? Sorry, I remember we mentioned that if there is a new document marked as experimental, that serves the purpose; but what if there is an existing document marked as stable? I think there were multiple folks in the spec meeting asking if we could add a section and mark that section as experimental, and it seems many folks like the idea, but we never tried to double-click on that.
A
Well, we can't add experimental stuff to stable stuff, right? That is another nightmare, because that's just making presumptions. First and foremost, in most languages, anything where you change the API signature breaks the whole package, so experimental stuff has to just be separate.
A
It has to be in a separate signal. We have experimental, stability, and draft divided up by signals, and the signal is kind of our unit: there's some unit of managing the stable stuff separate from the experimental stuff, separate from the draft stuff, and we're calling that unit a signal. That's the one level of granularity we have. Maybe we could have some level of, like, experimental adjunct tracing packages, where it's part of tracing but it's over on the side.
C
Yeah, and the interesting thing about this change is that in theory it wouldn't be breaking code; we are just changing something that could be considered a semantic convention. But the thing is, as long as we don't test this in any implementation, we will not know what some of the side effects are, especially for languages like Java that are, you know, statically typed. So yeah.
A
Well, I was going to say: in this particular issue, though, I'm actually a little confused. This is actually something a little bit different, because it seems like the issue was that there should be a default value that we standardize on, right? And then what got added to the spec seems to me to not be that. Instead, what's added to the spec is that the value returned is whatever the end user put in, which wasn't in the issue, as far as I can tell.
C
It was more or less like this: before, we didn't specify anything about the default value. There was a default value; we only said you shouldn't be passing a null value. That was everything that was said. Then we changed that, because many languages have a default value like "error" or "invalid value" or "invalid service" or whatever, and we changed it to accept, you know, to pass through the null value and just, in theory, show an error log saying this is an invalid case. That's how it changed.
A
But the thing we're trying to fix, it seems, is that the actual name of the tracer that you get back was not the same across languages: if you just don't put anything in, or you put in some unacceptable thing, they all return different names, and you want them all to return the same name. So it seems to me the spec went past where the PR meant to go, maybe into an area... I mean, I don't know how we can generalize this, but it seems like that.
A
That part isn't the problem here. The part that was a problem was that there was a line in the spec that says the original value they gave you gets returned, and then the Java people said: well, that changes the output of this thing, the type of the output, to include nulls. Because what we did before was: if you gave us a null, we handed you back a string, because it's only allowed to be a string (or whatever it is in Java). And now you're saying that whatever junk they give us...
A
...we have to give them back, and that breaks our type. So we could fix this one. It seems like we can fix this thing by saying: not just "convert it into an empty string", but actually the spec should say that what you get back is the thing we named it, and we log an error saying: you handed us garbage, so we gave you back our default.
A
You know what I'm saying? So that's what we could change: it's actually not just that it should return an empty string; it should return the canonical default value. And so this...
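The proposed fix here (keep the return type stable, substitute a canonical default, and log an error) might look like this sketch; the names and the `default_tracer` value are assumptions for illustration, not the spec's text:

```python
# Sketch: never hand the caller's junk (or a null) back. The return type stays
# a Tracer with a valid string name, and bad input is reported via an error log.
import logging

logger = logging.getLogger(__name__)
CANONICAL_DEFAULT = "default_tracer"  # hypothetical canonical default name

class Tracer:
    def __init__(self, name):
        self.instrumentation_name = name

def get_tracer(name):
    if not isinstance(name, str) or not name:
        logger.error("Invalid tracer name %r; using the default", name)
        name = CANONICAL_DEFAULT
    return Tracer(name)
```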
C
I agree with that, and let's propose that. But, for the sake of argument, before we merge it we should probably validate that it doesn't, for some weird reason, create yet another side effect. And that's probably the thing, because that was the problem: the original problem is that making this change seems simple, and that's why even John didn't complain at first. He was like: yeah, it looks like a simple change, right?
C
We
just
let
the
new
value
and
add
up
other
pair
of
checks
for
for
new
values,
but
it
ended
up
being
a
nightmare
in
java,
at
least
so
I
don't
know
I
I
need
we
need
a
way
to
actually
say
for
for
some
specific
pr's
we
need
like.
I
know
it
sounds
like
an
hotel,
but
I
don't
know
if
we
want
to
keep
the
specification
the
main
branch
valid
at
all
times.
We
need
to
really
validate
yeah
this
stuff.
A
Like
the
specific,
a
specific
thing,
we
have
to
look
at
our
behavior
changes
and
I
think,
a
real
specific
under
the
microsoft
subset
of
that
our
return
types
anytime,
we're
saying
we're
going
to
change
what
either
what
kind
of
data
we
accept
like
what
type
of
data
we
accept
or
we're
going
to
change
what
we
return
to
you
at
all,
then
that's
an
area
where
it's
like
almost
guaranteed
to
have
a
cross
language,
a
potential
issue
in
some
language.
A
As
soon
as
you
say
like
what
can't
what
type
of
thing
can
be
returned
here,
so
these
are
just
some
nuances.
I
think
we
have
to
to
learn
and
I
think
the
same
thing
goes
for
changing
what
we
data
we
accept.
You
know
like
as
soon
as
if
you
were
just
to
say
like
oh
now,
this,
like
we
hit
this
with
we're.
Gonna
hit
this
with
baggage.
We're
like
we
want
baggage
to
accept,
like
all
data
types.
A
What
am
I
supposed
to
do
right
and
we
can
define
it
as
like
an
another
method,
signature
or
saying
like
you,
should
we
need
to
have
one
that
accepts
strings
which
is
more
efficient
and
then
there's
another
way.
You
call
it
that
accepts
this,
but
but
if
we
don't
give
any
guidance
there
and
we
make
that
change
even
though
it's
just
expansive,
it's
gonna
cause
confusion.
A
So
I
think
whenever
we're
mutating
mutating
existing
method
signatures,
that's
that's
the
the
stuff
where,
where
we
have
to
be
really
careful,
I
think
we
have
to
be
really
careful.
In
general,
I
man
only
having
two
reviewers
on
the
spec,
like
that's
that's,
starting
to
look
suspect
to
me
as
well,
because
that
this
assumption
there
was
like
we
only
need
two
reviewers,
because
this
already
went
through
a
big
ot
process.
But
that's
not,
actually,
that's
not
actually
the
case
right
like
like.
C
But yeah, I would say that not only reviews: we need some actual implementation, you know. Because if you look at the change, it was not complicated; the only polemical thing was the name, or the value, and everybody had the impression that it was going to be super straightforward, until it was implemented. So even if we get the reviews but nobody has implemented it, we are still risking stuff.
B
I was starting with a question. If you look at the API package in a particular language (I'll just take C#, for example), when we make changes, the definition of whether it's a breaking change or not depends purely on the C# point of view, not on whether any existing users are depending on that behavior. It's not like we ask all the users who depend on the API and see whether making this change would break them. Instead, we have a rule.
B
It
seems
like
here
in
the
spike.
We
have
a
different
perspective,
we're
saying
we
ask
all
the
consumers,
those
are
the
language
stakes
and
see
if
they're,
okay,
if
we
make
a
breaking
change
it.
This
is
the
place
where
I
think
it's
a
little
bit
weak.
So
so
what?
If
there's
a
another
implementation
outside
the
open
telemetry
like
community
like
if
someone
create
their
own
stuff
and
they
implement
based
on
the
spec,
and
we
make
that
change,
which
makes
it
impossible
for
them?
B
Like
instead
hot
spots,
we're
making
the
brick
like,
if
someone
is
asking
hey,
you
got
to
change
the
return
type
of
an
api
on
a
stable
version,
we're
going
to
tell
them
no,
even
if
they
can
prove
like
you,
can
search
the
entire
like
world
code
bank.
Nobody
is
using
that
so
you're
safe,
but
we
can
tell
them
in
theory.
This
is
considered
as
bringing
change
so
we're
not
going
to
change
that
and
that
that
actually
makes
the
maintainer's
job
easier.
A
I agree. This has been my experience with this stuff, all the way back through OpenTracing: there's no way to fully codify this; there's no way to remove human judgment from this process. For example, in OpenTracing we wanted to add something simple. Returning a trace ID and span ID were not defined originally, and so we were like: that seems fine, just add "trace id, returns string".
A
So,
if
you
add
this
to
the
interface,
it
actually
screws
everything
up
and
that's
like
a
weird,
one-off
right
like
there's,
no
there's
no
way
we
could
standardize
handling
the
open
tracing
spec
that
would
have
caught
an
issue
like
that,
like
the
only
reason
it
was
caught
was
because
people
who
care
about
implementing
that
spec
were
paying
attention
to
the
spec
when,
when
the
the
design
was
being
worked
out
for
that-
and
it
turned
out
just
changing
the
name
of
that
to
the
slightly
uglier
trace
identifier
ensured
we
didn't
make
a
collision.
A
You
know,
and
I
think
we're
going
to
have
the
same
kinds
of
problems
like
it's.
It's
it's
it's
tricky
to
just
say.
Well,
there's
a
general
rule
unless
we're
also
willing
to
say
this
spec
is
like
which
we
maybe
should,
which
is
saying
like
yo
implementation
should
implement
the
spec
only
as
as
far
as
it
goes.
A
You
know
like
if
something
in
this
spec,
if
like
a
pedantic
reading
of
the
spec,
is
forcing
you
to
do
something
it's
going
to
break
things,
then
you
can't
do
it,
and
if
it's
forcing
you
to
do
something
that
really
makes
your
implementation
ugly,
then
you
shouldn't
do
it.
You
should
try
to
figure
out
a
way
to
interpret
the
spec
for
your
language,
and
I
don't
like
having
a
lot
of
interpretation
in
there,
but
it
seems
that
that
should
actually
be
the
default
for
what
maintainers
do.
A
C++ is a terrible role model to follow in this regard. The most maligned feature of C is that it has a lot of undefined behavior; same thing with C++. But the point, actually, is that maintainers shouldn't be silent in these issues either. Whenever they feel like they've hit this, step one is to raise an issue and say so, like we did.
C
The more I think of it, the more I think we should do a release every three months, so we allow the maintainers of the languages to implement the complete standard, do their release, and then, before we do the next release, we validate everything. I mean, it's a slow process, but it's an alternative.
A
I guess what I'm saying is that I don't see how that would have changed anything in this case. This would have gotten two approvals, it would have gone into the spec, and then John would be complaining about it two months from now. What I'm saying is: once it gets approved and added to the spec, it is effectively released, in the sense that there is no review process going on for something like a release candidate of the spec.
A
Yeah, okay, that would be bad. But I do think there's an issue. I do see one aspect where I think you're right, Carlos, which is that we have seen pressure around shoving things out the door. Like, if we release once a month and someone says, you know, "we've got to lowercase all the names of everything right now, and that's just got to go out today."
A
We
have
to
resist
the
idea
that,
just
because
there's
a
spec
release
happening
in
two
days,
there
should
be
pressure
to
like
jam
something
into
the
spec
just
so
it
can
get
out
there
so
that
we
can
make
use
of
it
like
we
should
say
like
and
and
having
like
a
spec
go
out
once
a
month.
I
think
makes
it
a
little
bit
easier.
We
can
say
like
sorry
like
this
can't
sit
for
a
week
and
let
people
like
have
a
chance
to
review
it,
so
it
can't
it
can't
go
into
this
release.
A
We're
sorry,
okay,
yeah.
B
I'm
I'm
I'm
trying
to
think
like
in
donna.
I
know
the
donut
runtime
folks
has
been
pushing
seizure
to
apply
some
api
compat
tool
just
to
detect
if
any
braking
change
happens
in
the
stable
release
yeah,
and
when
I
look
at
that
tool,
we
actually
found
a
lot
of
corner
cases
where
we
ask
pm
or
even
the
owner
of
that
tool.
Do
you
think
this
is
a
breaking
change
and
they
agree
that
it's
not
a
breaking
change,
but
the
tool
called
it
as
a
breaking
change.
B
So
we
ask
why
and-
and
they
were
saying
like
it's
very
hard
to
cover
all
the
corner
cases-
I
think
what
they
were
suggesting.
Is
they
just
use
the
two
and
realizing
the
two
could
make
some
mistake
in
the
counter
cases.
But
the
two
is
the
ultimate
source
of
tools,
and
if
you
make
the
the
two
beliefs
you're
making
a
breaking
change,
then
then
you
you're
making
a
breaking
change
and
you're
the
faulty
part.
B
So for that part, I figured they just try to ignore those corner cases and make it a very fixed process, to make people's jobs easier, because this way you save a lot of time trying to judge whether a particular thing is a breaking change or not, which is time-consuming, as long as people follow that one thing.
B
It's
like
the
jerk
mode.
So
as
long
as
people
follow
that
there's
a
single
rule,
it
seems
to
be
working
efficiently
for
the
at
least
the
net
ecosystem,
like
they
were
fighting
with
the
no
like
normal
and
many
community
and
like
a
unity
and
with
that
tool,
it
seems,
like
people
agree,
okay,
the
two
sucks,
but
it
is
the
reality.
B
I
wonder
if
that
could
work
for
the
spike.
I
can
probably
ask
them
see
if
they
have
any
idea,
but
I
I
suspect
it
works
well
for
tonight,
but
probably
not
going
to
work
for
the
spec
right.
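The kind of API-compat tooling described here boils down to diffing a recorded public surface against a candidate release. A toy sketch of that idea (not the actual .NET ApiCompat tool; names are illustrative):

```python
# Toy sketch of an API-compat check: snapshot the public surface (names and
# signatures), then flag removals and signature changes as potentially
# breaking. Pure additions pass. Like the real tools, this errs toward
# flagging corner cases (e.g. an added optional parameter) rather than
# missing a real break.
import inspect

def public_surface(api):
    surface = {}
    for name, obj in vars(api).items():
        if name.startswith("_") or not callable(obj):
            continue
        surface[name] = str(inspect.signature(obj))
    return surface

def breaking_changes(old, new):
    flagged = []
    for name, sig in old.items():
        if name not in new:
            flagged.append(f"removed: {name}")
        elif new[name] != sig:
            flagged.append(f"changed: {name} {sig} -> {new[name]}")
    return flagged
```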
A
Well, our spec is not machine-readable, so it's going to be hard. And the other thing is: what makes a breaking change in one language is different from another. Like, interfaces in Go are so brittle. I hate interfaces in Go, because you have this nice clean separation, and then it turns out that if you touch them at all they're instantly incompatible. Other languages have run into that problem and added things like default implementations and other stuff to smooth over those rough edges. And then there are dynamic languages where it's like: we don't care.
A
You
know
yeah
in
the
sense
that
like
actually
like,
maybe
it
is
a
realistically
a
breaking
change,
but
not
something
the
language
would
define
this
breaking
change
because
it's
like
yeah,
we
added
an
interface
to
the
spec
and
then
like
this
implementation
doesn't
implement
that
interface.
A
So
there's
no
compiler,
that's
going
to
tell
you
this
is
wrong,
but
it
will
blow
up
your
face.
If
you
touch
that
method
stuff,
like
that,
it's
I
don't.
I
think
this
is
why
there
has
to
still
be
just
a
lot
of
human
judgment
involved,
and
I
do
agree,
though,
something
like
like
a
canonical
list
of
like
if
you
mutate
anything
in
a
stable
thing.
A
That
is
like
automatically
super
duper
suspect
and
if
you
add
something,
if
you
touch
stable
things
at
all,
that's
automatically
suspect
and
as
soon
as
you're
saying
you're
changing
anything
about
a
mutation
of
something
that's
already
there.
Then
it's
like
the
answer
should
almost
be
presumed
to
be
no
because
the
chance
that
it's
gonna
break
someone
relying
on
the
current
behavior.
Even
if
it's
behavior
we
wish
wasn't
there.
Might
it's
just
going
to
go
up
with
time.
B
For the metrics API, we have only one remaining issue, the names of the instruments, and I think we're getting a good discussion. I have collected a lot of valuable feedback from multiple folks. We have a PR out for people to debate; then I'll give a proposal and see if we can align, and I'll push on that. Hopefully we can close that in a couple of days. Nice, perfect.
B
And
the
metric
size
sdk
is
very,
very
messy
like
if
you
look
at
the
the
previous
like
there
there's
many
questions
like
people
talk
about.
Do
we
need
a
preprocessor
and
post
processor
where
the
preprocessor
happens
at
the
raw
metric
api?
So
you
have
access
to
the
contacts
like
when
people
report
a
value
with
a
bunch
of
attributes.
You
can
access
the
contacts
and
change
some
attributes
on
the
fly
where
the
post
processor
happens
after
the
aggregation.
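The two hook points being discussed can be sketched as below. These are hypothetical names for illustration, not the metrics SDK spec:

```python
# Hypothetical sketch: a preprocessor runs per recorded value, with the
# context in hand, and may rewrite attributes on the fly; a postprocessor
# only sees the result after aggregation.

def preprocess(measurement, context):
    attrs = dict(measurement["attributes"])
    attrs.setdefault("region", context.get("region", "unknown"))
    return {**measurement, "attributes": attrs}

def aggregate(measurements):
    # Toy sum aggregation over all recorded values.
    return sum(m["value"] for m in measurements)

def postprocess(aggregated_value):
    # Runs after aggregation: the per-measurement context is gone by now.
    return {"sum": aggregated_value}
```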
B
Makes
sense
yeah
so
so
currently
I
figure
like
for
the
api.
I
have
a.
I
have
a
good
support
from
from
several
folks,
like
josh
to
josh's
and
bogdan,
and
also
folks
from
premises,
client
and
micrometer
for
the
isdk.
I
I
I
think,
there's
no
good
reference
like
if
I
work
on
the
api
spec.
I
can
go
through
this
and
do
my
own
research
for
sdk.
You
have
to
look
at
the
code
and
everyone
is
doing
that
differently.
There's
no
there's
no
like
widely
adopted
model
across
these
different
things.
B
So
so
that
part,
if
you,
if
you
have
some
idea
like
how
the
sdk
should
be
implemented
or
someone
who
have
experience,
I
I'll
need
some
help.
Otherwise
I
figure
it
might
be
just
another
like
three
or
four,
like
experts
working
on
something
and
eventually
turned
out
to
be
very
hard
for
the
others
to
adopt.
A
So
the
actual
logging
group
is
mostly
I'm,
I
feel
like
there's
a
bit
of
a
disconnect
between
what's
going
on
in
the
spec
and
what
that
logging
group
is
doing,
I'm
not
even
sure
who's
talking
about
stuff
over
here
in
the
spec,
to
be
honest,
like
you
know,
like,
for
example,
tigran
added
some
stuff
about
the
sdk,
but
then
like
that
logging
group
didn't
know
much
about
it.
It
didn't
seem
like
people
in
that
group
had
had
really
been
part
of
that
discussion,
and
so
I'm
actually
confused
about
who's.
B
That seems to be the case for me too. So let me ask this question: when do you think the logging spec will become stable? Is there a clear timeline or expectation? My worry is that it'll be similar to metrics: there's some big hole and then we realize it. Ted, you remember earlier this year, when I explained to you that I didn't think the metrics spec would be ready by February, you were surprised when you asked.
A
Probably
one
tool
logging
is
going
about
things
in
a
different
order
than
we're
doing
it
in
tracing
in
metrics.
So
the
what
the
logging
group
is
doing
is
step.
One
is
to
get
the
protocol
finalized,
which
it
basically
is,
and
to
get
log
processing
working
in
the
collector.
A
That
is
where
all
of
their
efforts
are
focused
on
right
now,
integrating
stanza
and
making
log
processing
work
the
next
step
for
them
in
the
languages
which
people
are
starting
to
work
on
is
simply
creating
appenders,
like
log
adapters
to
existing
logging,
libraries
to
export
otlp,
in
other
words
like
not
even
a
logging
sdk,
but
just
like,
if
you
already
are,
have
a
thing
that
produces
logs,
make
those
things
produce,
otlp,
yeah,
formatted
logs
and
the
what
we'll
have
at
the
end
of
that
is
a
situation
where
people
can
use
logs
odlp
logs
and
if
they
install
the
sdk
they'll,
effectively
be
sending
two
otlp
streams
to
a
collector
right,
because
they're
separate
the
step
after
that
is
to
add
logging
to
the
sdk.
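The appender idea (adapt an existing logging library so its records go out OTLP-shaped, without any OpenTelemetry logging SDK) can be sketched with Python's stdlib logging. The field names are illustrative, not the OTLP schema:

```python
# Toy "appender": a stdlib logging handler that reshapes each record into an
# OTLP-like dict and hands it to an exporter (here, just a list).
import logging

class OTLPishHandler(logging.Handler):
    def __init__(self, exporter):
        super().__init__()
        self.exporter = exporter

    def emit(self, record):
        self.exporter.append({
            "severity_text": record.levelname,
            "body": record.getMessage(),
            "attributes": {"logger.name": record.name},
        })

exported = []
log = logging.getLogger("demo")
log.addHandler(OTLPishHandler(exported))
log.warning("disk almost full")  # now captured in OTLP-ish form
```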
A
In
other
words,
how
do
you
then,
rather
than
having
like
an
appender,
that's
sending
logs
out
somewhere
else
with
your
existing
logging
library
having
it
so
that
your
existing
logging
library
is
sending
those
logs
to
the
sdk
so
that
it
goes
out
in
one
stream?
And
only
after
that?
Are
we
thinking
about
like
okay?
A
What
would
a
open,
telemetry
logging
api
look
like,
because,
in
terms
of
getting
value
to
people
on
a
practical
level,
just
the
feeling
in
that
community
is
that
inventing
a
logging
api
is
the
least
valuable
thing,
and
I
feel
like
the
community
of
people
who
are
interested
in
open,
telemetry,
think
differently
right,
like
they
get
weird
out
like
where's,
my
blogging
api.
B
If
you
look
at
the
the
logging
sig
like
they
have
some
like
high-level
documents
trying
to
describe
the
logging
has
a
lot
of
legacy
system
and
the
focus
is
on
making
sure
you
can
work
with
existing
logs.
You
can
get
some
correlation
and
yeah.
What
I
I
think
is
a
little
bit
stuck
is
on
the
execution,
because
you
can
see.
A
This
is
his
latest
thing
right
I
mean
this
is
about
this
one.
I
thought
that
this
is
related
to
just
versioning
right
and
stability.
A
No,
it's
related
to
our
instrumentation
push
right.
We
wanna,
we
wanna
declare
instrumentation
to
be
stable,
but
we
feel
like
we
can't
tell
people
it's
stable
until
we
define
what
stable
telemetry
means
right
like
stable
instrumentation
means
that
the
the
data
that
like,
if
we
want
to
keep
improving
our
semantic
conventions,
but
if
we're
breaking
people's
dashboards,
when
we
do
that,
that's
we
see
that
as
a
huge
problem.
A
That's
made
worse
by
this
is
actually
an
extra
difficult
problem
for
open
telemetry
because
we
don't
control
the
back
end.
So
there's
no
way
to
coordinate
so
tgrin
decided
to
take
on
trying
to
find
a
way
to
to
manage
that
understand.
B
That
part-
and
I
I
think,
even
with
all
this-
I
I
think
the
basic
tracing
would
work.
But
if
people
talk
about
the
trace
about
http
and
database,
they
would
have
a
lot
of
ambiguity.
So
this
pr
is
trying
to
address
that.
But
what
I'm
saying
I
I
think
without
this
pr,
the
logging
protocol
will
be
a
little
bit
stuck
because
there
are
a
lot
of
schema
related
things.
So
yeah.
A
Yeah
yeah
yeah.
That's
why
totally
right
you're
right
yeah
totally!
So,
if
they're
like
want
to
hit
the
next
step,
which
is
we
want
to
convert
stuff
to
otlp
part
of
doing
that
is
converting
potentially
like
converting
the
conventions
to
the
degree
to
which,
like
you
might
be,
coming
from
something
structured
enough
like
elastic
schema
or
whatever
yeah
yeah
yeah.
B
And
my
worry
is
currently
I'm
I'm
not
seeing
a
clear
timeline
and
and
the
forcing
function
to
drive
us
towards
the
timeline.
It
seems
like
it
seems
to
be
another
situation
like
the
matrix
spec.
It
could
run
for
a
long
time
and
we
have
some
big
expectation
that
it
will.
It
will
come
like
end
of
last
year
and.
A
The problem is that it's this whole all-or-nothing thing, and that has to do with the domain of metrics being different from the domain of logging. With logging we're like: hey, the first thing we're giving you is log processing, and that solves your issue, which means, if you've already got logs, now you can point them at the collector and they'll get integrated there. And then the next step is that your logs will get integrated in your application.
A
If
you
want
it
and
the
next
step
after
that
is
you
only
have
one
stream
of
data
and
the
next
step
after
that
is
here.
Here's
like
an
actual
api
and,
if,
like
anywhere
along
the
line,
someone
wants
to
tackle
like
how
do
I
convert
elastic,
schema
to
open
telemetry
conventions
or
or
any
of
these
other
things
like
someone
can
take
take
that
on.
A
We
define
an
api
step
two
we
give
that
to
you
and
you
start
using
open,
telemetry
logs
and
that's
like
better
than
the
thing
you've
got,
and
since
we're
not
doing
that,
we
have
to
communicate
that
to
people
that
that
we
we
don't
see
that
as
as
valuable
as
doing
the
other
thing
you
one
might
ask
why
we
didn't
take
that
approach
with
metrics,
but
you
know
it
seems
like
defining
our
own
api
seem
kind
of
critical
because
we're
actually
trying
to
define
a
data
model.
A
Right, and so that's a place where I think we're going to have to delay it a bit. I don't want us rushing a logging API out the door like that; I don't think we should promise that to people, because I'm afraid it will not get the right kind of attention, especially as long as metrics is hanging around.
A
I still think there are all these other things around, like the overlap between logging and trace events, for example; that's a thing we have to sort out. Or, you know, we now have logs going out and we're like: does every log have a trace and span ID attached to it? That's inefficient; those logs should all be trace events.
B
Like
an
integrand
clarified
that
er
that
trace
events
are
the
same
with
logs
from
the
data
model
perspective
and
in
in
cpas
fast.
When
we
do
the
the
experimental
thing
on
the
logging
api,
we
actually
try
to
keep
the
same
signature
as
the
trace
events
and
for
attaching
trace
id
and
span
id,
and
even
in
the
current,
like
the
experimental
spec,
it
is
marked
as
optional.
So
we're
saying,
if
you
send
a
trace
event,
it
is
a
log
that
by
default,
has
trace
id
and
span
id.
A
I feel like I have to look at the API, or look at the data model, but it still feels inefficient to attach those things unless, in the data model, they're grouped by span. But even when you have that, now you have span logs in two spots: they can show up here, they could show up there. Maybe that's just life, but even then I'm like: do we just retire span...
A
Events
later,
like
we'd,
say
that's
kind
of
like
a
legacy
thing
and
like
actually
in
the
future,
like
our
sdks
should
just
put
all
of
those
into
the
logging
stream.
You
know
what,
like
with
those
are.
That's
like
just
one
of
the
areas
where
I
think
we
have
to
just
figure
out
what
makes
sense.
Oh
yeah.
B
There
has
been
so
many
like
conversations
in
you
know:
donut
stick.
I
I
heard
it
from
sigil
like
one
thing
is
that
if
you
put
the
span
events,
they
are
under
the
samplers
judgment.
So
if
the
senpai
decides
to
screw
that
up,
then
you
don't
have
any
log
like
the
events
on
the
span,
because
the
span
is
not
even
created
right.
This
is
just
a
dummy
one.
Great.
A
But
but
maybe
like
here's,
the
thing
like
you
can
now
with
with
tracing
and
trace
sampling,
you
could
actually
sample
your
logs
yeah,
which
is
like
really
critical,
but
then
idiots
like
take
these
logs
and
they
turn
them
into.
What
do
you
call
it
like
not
accounting
logs
but
really
important
logs
that
you
have
to
keep?
I
forget
the
word.
Why
am
I
forgetting
this
word.
B
So
I
I
started
to
see
a
lot
of
people
in
the
community
asking
hey,
I'm
using
asp.net
and
esp.net
has
its
internal
logs?
Can
I
convert
those
logs
to
spend
events
and
attach
that
to
trace
and
also
leverage
the
twist
sampler
or
I
have
the?
I
have
the
span
events
in
some
instrumentation
library?
Can
I
convert
that
into
the
I
longer
logs.
A
That is an awesome feature to give people, because it's so expensive to store all the logs, and you don't want them all, right? And the more advanced your sampler gets, the more attractive log sampling becomes. But if we don't have a concept of audit logs, then we don't solve that other use case for logs, which is: hey, regardless of your sampler-schmampler, you have to save this one.
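The sampling-versus-audit tension being described can be reduced to a tiny decision rule. This is a hypothetical sketch, not anything specified:

```python
# Sketch: trace-correlated logs can follow the trace sampler's decision,
# but audit logs must always be kept, regardless of sampling.

def should_export_log(record, span_sampled):
    if record.get("audit"):        # audit logs bypass the sampler entirely
        return True
    if "trace_id" in record:       # trace-correlated logs follow the sampler
        return span_sampled
    return True                    # uncorrelated logs are exported as-is
```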
A
So
you
know
that
that's
why
it's
like
these
are
the
areas
where
open
telemetry
can
turn
around
and
be
like.
We
actually
gave
you
a
new
feature
that
you
didn't
have
before
here's
something
you
can
now
do
with
logs
that
you
couldn't
do
before.
We've
actually
moved
things
forward,
but
like
it
has
to
be
a
little
thought
more
thought
through
than
we
just
splat
the
logs
onto
the
traces
anyways
anyways,
it's
nine
o'clock.
We
should
we
should
run
but
good
talking.
Thank
you.
Bye.