From YouTube: 2022-04-13 meeting
C
Yeah, I was just in the office; I wasn't traveling. Last week... oh okay, we had some sort of... yeah, it doesn't matter. Everybody went in one day.
C
I'm good. I'm... I'm so frustrated about these versioning issues. I feel like it's consuming all of my brain space.
A
I know what you mean: it's kind of the whole foundation. That's why I started to, you know, try out Rush and pnpm. One thing is the build times, but that wasn't the actual issue, why I started investigating it; the problem actually was the versioning, and the hoisting we had to include, which caused versioning issues.
C
Right now I'm having problems with hoisting also. If I could turn it off, it would be okay, I think. But actually it's kind of good that I ran into this problem. We can talk about it when we get to the item on the agenda, but we actually introduced a change which makes it so that the core, the SDK, doesn't work with old versions of the API, which is obviously not good.
C
I guess we can probably just get started. Let me share my screen here.
C
All right, Martin, looks like you added the first item to the agenda here.
D
Yeah, hello. We just have a question for this group. On the client side we're currently working on sessions, trying to define an API around sessions, and part of that is trying to figure out, you know, how to send the data for sessions.
D
We are concerned about duplication. Obviously, the easiest solution would be to send the session data with every signal, as attributes on every signal, but we're concerned with the amount of, you know, overhead in the data sent over the wire.
D
So one of the options we have been talking about is: would it be possible to send it as resource attributes?
D
The challenge with that is that sessions can change, so that would mean we'd have to propose for resources to be mutable in some way. We wanted to get a feel from this group for how difficult that would be. I know we've been told that Daniel, I think you specifically, have looked into that, and so you would have some ideas about the challenges in the existing SDK.
C
Yeah, I mean, there's no technical limitation that prevents us from making resources mutable, from what I understand. It definitely comes from the spec: the spec is very clear that resources are not mutable, and I know that certain backends depend on that behavior.
C
They treat a different resource as a different, you know, process or service or whatever it is that you call it. That was kind of before we had the service name attribute, though, so it's kind of a weird question: how do you expect a backend to differentiate a service before there was a service name? It was basically a hash of the resource attributes, but now that we have that, it's a little different.
E
A couple of questions with this. It seems to me like you still technically want the modeling of the resource to be immutable, but you want it to be changeable, so that you could change the session on a resource without existing resources kind of accidentally getting that session. So it's kind of like context, where you would want to make a copy of the current resource but replace the session.
E
If that makes sense. So I'm just kind of wondering if the framing of this is that resources would still be immutable, but they would be changeable, I guess, with regard to a tracer provider. Because I think the current situation is that as soon as you initialize your tracer provider, the resource is kind of set in stone, and changes thereafter are a problem.
E
I mean, today it's not technically possible, but I think the current scheme is that as soon as you set a resource on a tracer provider, that's unchangeable for the duration of that tracer provider. But I think the proposal, I guess, would be to make the resource something that can be changed per tracer provider at runtime, at will, but not make the resource itself mutable: just make it possible to update a resource by making a copy of it, changing something, and then resetting that resource on the tracer provider. Yeah.
E
At time one you have resource one with session one, and at time three you have session two. You don't want to accidentally update that resource and have the one at time one get a reference to the other session. Yeah.
F
One thing that you want is... the way OTLP works, batches of data are associated with a set of resources.
F
You would have to create a whole new set of references, and you would have either a pointer or a reference-set ID, so that every new signal that gets created, a new span or what have you, would be associated with that reference ID or resource ID. That way, when all these things are coming into the export pipeline, they're able to be grouped by the set of resources they're associated with. Yeah?
C
I would say our current implementation is very far from that. Not an impossible task, but a large body of work to make it work like that, because currently the tracer provider has a reference to the resource, and the spans don't have individual resource references, I don't think.
C
When it's exported, you just take the resource from the tracer provider and use it. So, you know, the use case of having a different resource per span was never something that we considered when we designed it in the first place.
E
Because at one point in time, every span had a resource reference, and so did every metric point.
C
So you're saying rather than mutating the resource, you would just change the resource in the resource provider, yeah, or in the tracer or whatever.
F
Yeah, you would do like an atomic pointer swap in the tracer provider, or whatever is retaining the reference to the resource, but you would never mutate the resource object itself.
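The copy-and-swap approach described here could be sketched like this in TypeScript. This is a minimal sketch with simplified stand-in classes; `Resource` and `TracerProvider` below are not the real `@opentelemetry/*` implementations, and `merge`/`setResource` are illustrative names, not confirmed API:

```typescript
// Sketch of the copy-and-swap idea: the Resource value itself stays
// immutable, and "updating" it means building a copy with the changed
// attributes and swapping the provider's reference to the new copy.
// Simplified stand-ins, not the real @opentelemetry/* classes.
type Attributes = Record<string, string>;

class Resource {
  constructor(readonly attributes: Readonly<Attributes>) {}

  // Returns a NEW Resource; the original is never mutated.
  merge(updates: Attributes): Resource {
    return new Resource({ ...this.attributes, ...updates });
  }
}

class TracerProvider {
  constructor(private resource: Resource) {}

  getResource(): Resource {
    return this.resource;
  }

  // A single reference assignment: effectively the "atomic pointer swap".
  setResource(resource: Resource): void {
    this.resource = resource;
  }
}

const initial = new Resource({ "service.name": "demo", "session.id": "s1" });
const provider = new TracerProvider(initial);

// Anything holding the old reference keeps seeing the old session...
const before = provider.getResource();

// ...then a new session starts: copy, change, swap.
provider.setResource(provider.getResource().merge({ "session.id": "s2" }));
```

The key property is the one E asked for: data that captured the old reference still points at session one, while new data picks up session two.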
C
Yeah, it looks like the resource reference is also created at construction time of the tracer, too. So if you change the resource out on a tracer provider, any pre-existing tracers would continue to have the old resource.
F
Yeah, it could be done. There's a way to make it all happen further down in the export pipeline, I believe, if you're fine with saying things only get associated with the current set of resources when they end, because when spans end, that's often when they get handed down the export pipeline. But I'm kind of concerned that that's not actually what people mean when they try to update a resource.
C
I would think span start, because I think it would be confusing otherwise. If you're starting a session, for instance, and there's an API call to get a session identifier from the backend or whatever, and that span is then associated with the session, that would be weird, because the session would start while that span was running.
F
Yeah, so I think that's simple enough to do, since tracers retain a reference to a tracer provider. You know, every time they create something new, like a span, they can just request it: basically turn it into a function call to get resources, rather than a pointer that gets assigned.
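The "function call instead of an assigned pointer" idea might look roughly like this. Again, these are simplified stand-ins for the SDK types, and `getResource()`/`setResource()` are hypothetical accessor names for illustration:

```typescript
// Sketch: the tracer asks its provider for the current resource each
// time a span STARTS, instead of snapshotting the resource when the
// tracer is constructed. Pre-existing tracers then pick up a swapped
// resource too. Simplified stand-ins, not the real SDK types.
interface Resource { attributes: Record<string, string> }

class Provider {
  constructor(private resource: Resource) {}
  getResource(): Resource { return this.resource; }
  setResource(r: Resource): void { this.resource = r; }
}

class Tracer {
  // Holds a reference to the provider, NOT a snapshot of the resource.
  constructor(private provider: Provider) {}

  startSpan(name: string) {
    // Resolved at span start; the span keeps whatever was current then.
    return { name, resource: this.provider.getResource() };
  }
}

const provider = new Provider({ attributes: { "session.id": "s1" } });
const tracer = new Tracer(provider);           // created before the swap

const span1 = tracer.startSpan("before-swap");
provider.setResource({ attributes: { "session.id": "s2" } });
const span2 = tracer.startSpan("after-swap");  // same tracer, new resource
```

This is the span-start association C argues for above: the span's resource reflects the session that was active when the span began, not when it ended.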
C
When it's created, yeah. So I guess I go back to my initial response: I don't see it as a big technical challenge, but I think you'll get some pushback from the specification side. But as long as that's okay and understood, then yeah. If you're trying to get an idea of which languages this will be a problem for, I would not say that JavaScript is a blocker.
F
Yeah, we're trying to get all of our ducks in a row so that we can make a coherent proposal, because like you said, it's a big deal to make this proposal. But we're certainly coming up with a number of situations, on the client side at least, where this is the case: things you'd want to be resources, session ID, user ID, things like that, don't stay totally immutable.
C
I suppose within the browser talking about processes doesn't really make sense, but session IDs and stuff like that are something that could definitely change within the context of a single page load, right? There's no guarantee that the JS runtime is restarted or anything like that, and in a modern application it's probably closer to guaranteed that that won't happen. Yeah.
F
And then mobile apps, you know, like Swift and Android apps, as well as desktop applications, which people are starting to use this stuff for: then you really run into this, because you're talking about things that don't stop. They tend to not be stopped and reloaded so much as put to sleep and then reawakened at some random point in the future.
F
Where now the time zone is different, the location's different; a bunch of stuff has changed that you may have been using as a resource association. Yeah.
C
I
mean-
I
know
you
guys
have
have
talked
about
this-
a
lot
guys
and
girls,
but
is
this
possibly
more
of
a
data
model
problem?
We
we
have
span
resources
or
span
attributes
which
potentially
change
a
lot
with
every
request,
and
we
have
resources
and
resource
attributes
which
are
expected
to
basically
never
change.
C
But it sounds like you're talking about something in the middle: something that rarely changes, where you don't want to send it with every span because you'd like to conserve resources, but you also don't want it to be like a resource, where it never changes. Is this potentially something where the data model...
C
...needs an update? I mean, I know I said making resources mutable would be a big ask from the spec, and I know changing the data model this late in the game is also certainly a big ask, but is that something you guys have considered?
F
Well, currently we send resources, but at the data model level we don't really leverage the fact that resources are totally immutable. It's more that they're like an envelope for a batch of data: here's a batch of data, and then the resources are a set of attributes on that batch, is how I believe it currently works. So if you were changing resources all the time, if it was highly mutable, then that would fall apart a little bit.
F
That was my other question, because from talking to Matt, it sounded like some of the work that was in there was to allow multiple tracer providers that might have different resources to share the same exporter, or something like that, which I was a little surprised by. I was just curious if something like that was already going on in JS, where the export pipeline was just having to deal with the fact that not all the data coming into it would have the same resource reference.
E
While you can only have one global tracer provider, you can technically manage an independent one kind of yourself, off the books somewhere, and at least according to some spec conversations, and I think there are probably some issues in the spec repo that mention it, they should be able to share the same export pipeline. Because there was some discussion way back in those days about why we can't just put the resource reference on the exporter and be done with it, and...
E
There was a lot of talk, and the outcome was really that you had to staple the resource onto every span in order to make things work properly, which is kind of why.
F
Yeah, and so, I don't want to take up too much of the JS SIG meeting time on this, but that was one of the things we were investigating: if, for other reasons, the SDK pipeline already has to deal with stuff coming into the exporter that may have different sets of resources, then we aren't actually voicing some new architectural problem; that's already been foisted on you. Now, the biggest...
C
...problem that I can think of is that right now most of our exporters, I think all of them, depend on... our exporter interface takes a list of spans, and it assumes that all of those spans have the same resource. So when it constructs the OTLP envelope, it just takes the resource from the first element in the list and assumes that all of them have that. Right, that's how I figured it worked. Yeah: potentially an invalid assumption.
C
If this change is made... or you could just call the exporter twice; there are ways around this. But possibly the exporters would need to be updated to allow for the idea that multiple spans in an export request could have, you know, different resources. But again, that's not an enormous change. It's...
C
...something that should be done anyways, because, I don't know, in my opinion I won't call it a bug, because there's no way to fix it now, since it's part of the stable API, but I wish that we had caught it. An unfortunate decision, that our export interface doesn't match the protocol interface very closely.
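The exporter-side fix being discussed, grouping a batch by resource instead of assuming the whole batch shares the first span's resource, might look roughly like this. The span and envelope shapes below are simplified stand-ins, not the real OTLP exporter types:

```typescript
// Sketch: instead of taking the resource from the first span and
// assuming the whole batch shares it, group the batch by resource first
// and build one OTLP-style envelope per group.
// Simplified stand-ins, not the real exporter interface.
interface Resource { attributes: Record<string, string> }
interface ReadableSpan { name: string; resource: Resource }

function groupByResource(spans: ReadableSpan[]): Map<Resource, ReadableSpan[]> {
  const groups = new Map<Resource, ReadableSpan[]>();
  for (const span of spans) {
    // Grouping by reference identity: spans sharing the same Resource
    // object land in the same envelope.
    const group = groups.get(span.resource) ?? [];
    group.push(span);
    groups.set(span.resource, group);
  }
  return groups;
}

const r1: Resource = { attributes: { "session.id": "s1" } };
const r2: Resource = { attributes: { "session.id": "s2" } };
const batch: ReadableSpan[] = [
  { name: "a", resource: r1 },
  { name: "b", resource: r2 },
  { name: "c", resource: r1 },
];

// One envelope per distinct resource, mirroring OTLP's
// ResourceSpans -> spans nesting.
const envelopes: { resource: Resource; spans: ReadableSpan[] }[] = [];
groupByResource(batch).forEach((spans, resource) => {
  envelopes.push({ resource, spans });
});
```

This keeps the export interface taking a flat list of spans, so existing callers keep working; only the envelope construction changes.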
C
With metrics we tried to... we're learning.
F
Okay, thanks. Yeah, we're going to, you know, take all this to the spec, obviously, as part of making a full-fledged OTEP proposal for this. When we get to that stage, we'll be prototyping in a couple of languages, and JavaScript will definitely be one of them, since the browser is, you know, the place...
F
...where we want this, right. Yeah, okay. So that's why we wanted to check in with you all, get the lay of the land, and also not totally surprise you when we start coming back with maybe questions or help requests for the prototype we're building.
E
Yeah, on that note, I dropped a couple of spec issues from a long time ago that kind of talk about this weird reason why you have to staple resources onto spans. So if you find that this is pretty universal across SIGs, that would be why.
E
And not to draw this out any further, but I feel like one thing that would make this possibly an easier sell at the spec level, and for everybody else, and also possibly for JavaScript...
E
...so it's worth bringing up: there's this whole race condition in the beginning, or in kind of a process startup, while you're trying to resolve your resources, and the fact that they can't be changed at runtime really complicates this. It limits the options, anyways, and for that reason you kind of have to load your application code in a promise resolve for JavaScript, which is not awesome.
C
What you're talking about is async resources, which was something that, I guess, was a can that got kicked down the road, and then it got kicked too far, and now it's unfortunately with us forever. But I think changing the SDK node module to use synchronous resources instead would be possible once we have the concept of changeable resources, which would... yeah.
F
Yeah, I mean, I think there are other bugbears there, because there's a question of whether users want to have a batch of data going out without all those resources having been resolved, which is like a separate question. But anyways, this is awesome. I'm actually a little stoked to see that the architecture's not actually that far off from where we would need it, for what it's worth, at least for the session stuff we're trying to build.
F
We don't need to go as far, necessarily, as exposing updatable resources in the OTel API layer; we don't need to add a resource API, necessarily, if the interface is just something like a session manager.
F
You know, something that lets you start a session, end the session, and start a new session or whatever, and that thing has access to updating the resource at the SDK level. We don't necessarily have to go as far as handing this over to end users as a generic "just muck with the API" thing. Maybe we want that, but for what it's worth, I think if there was a simple way to set the resource at the trace level, like once a trace started within a service, then use the same resource for the entire trace, maybe something with the context. I'm not sure it's a big problem, but it's definitely something we should think about for implementing such a change.
F
I would love to have trace attributes in general, but that's a totally different concept, a side path. But yeah, for what it's worth, traces today totally have spans that have different resources, right? Distributed trace spans across the trace are coming out of different places, and they have different resources attached to them. So, right.
C
So, like, maybe a backend uses the resource to understand if a new instance is up, or to differentiate spans from different origins. So it might have some effect on this area.
F
Okay, I think we'll get out of your hair now with this sessions business. Okay.
C
Feel free to reach back out or come to more meetings if you have more questions.
C
I guess we'll move on to the next topic here. Thanks for coming; thanks for contributing. Yeah, speaking of instrumentation: the instrumentation stability guidelines merged. We've talked about this for the last couple of weeks. It merged mostly unchanged from what we talked about last week and the week before, although Tigran did create issues to address some of the concerns that were brought up, not just by arsenic but by others as well.
C
So if you're interested in the final state of that, and if you maintain instrumentation, you should be, then take a look at the merged state there.
C
Svetlana, are you on the call here? Yes, yes. I was just wondering what the current status of the security environment variable PR is: are we up to date, or are we still waiting on some of the changes from Tigran's clarifications?
C
Okay... oh, because it depends on the core package, right? Yep, for the environment. Okay, yeah, I got it. I was working on that this morning and it was depressing me, and that's what we're about to talk about next, but I will try to get that in as soon as possible.
C
Okay, sounds good. Okay, so that brings me to this topic here. This is actually the PR that she was just talking about, and in creating this PR, I ran into several versioning issues.
C
And we haven't run into this yet, but in creating this PR, a sort of weird dependency chain meant that an older API got installed, and that actually broke the PR, which it should not have done. The SDK should continue to work with API 1.0, and it's a simple type change, a simple thing to revert. It actually hasn't been released yet, so it's not a problem, but we, I do believe, need to find a way to prevent this from happening in the future.
C
I don't know if anyone has a different idea for, you know, working on that. We could also just have test packages like we do for the Node backwards-compatibility tests, but I think this is probably an issue that other people have run into as well, at least in the contrib repository. I think, Rano, you've run into this?
A
We have that situation in some of the contrib repos, but what we have done is separate the new-feature tests into a separate file and then run, you know, the files per the version tested. So we test all versions; test-all-versions allows all that configuration. So that specific case, if I understood you correctly, should not be a problem.
C
Yeah, I'm not super familiar with test-all-versions, but it sounds like Ronald is pretty confident that it could be worked around.
A
Yeah, you know, in a way. But I mean, we could essentially have a totally different test suite for each of the API versions, right? So if everything goes that way, then we could just have different test versions, which probably we won't ever have to do, but some tests probably require a more recent API version, and we could just run those.
A
I mean, we control all the test code, right? We can tell the test which API version it runs against. So it's a matter of how we implement the tests.
C
Okay. Well, in any case, I am going to have to revert the change that was made, which broke compatibility with 1.0, in order to get this PR finished. But it's the last change that should move that PR along. So I had been hoping to finish that before the call today, but I ran out of time; hopefully this afternoon. And then after that, I will probably work on a draft for trying test-all-versions for the API in the main repo.
E
Question about how this actually arises, and possibly whether or not it can actually just be solved by specifying slightly narrower version ranges. But first let me make sure I understand how this problem shows up. It seems like this caret ^1.0.0 could pick up a 1.1, but because resource-detector-aws has a ~1.0.x...
C
Yeah. The Lerna hoisting takes all of the specified version ranges and then uses the highest version in the intersection of all of the requested ranges, and resource-detector-aws depends on the tilde version, ~1.0.x, so it only accepts patch updates. So when the hoisted API is installed, it installs the 1.0 API, which should be fine, but it turned out it wasn't fine. So that was why we ran into the compilation issue.
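The range intersection being described can be shown with a toy resolver: a caret range alone resolves to the newest minor, but intersecting it with the tilde range from resource-detector-aws pins the hoisted install to 1.0.x. The resolver below is a deliberately simplified sketch, not what npm or Lerna actually run:

```typescript
// Toy illustration of hoisted range resolution: pick the highest
// available version that satisfies EVERY requested range.
// Handles only the "^x.y.z" and "~x.y.z" forms; a simplified sketch,
// not a real semver implementation.
type Version = [number, number, number];

const parse = (v: string): Version => v.split(".").map(Number) as Version;

const satisfies = (v: string, range: string): boolean => {
  const [maj, min, pat] = parse(v);
  const [rMaj, rMin, rPat] = parse(range.slice(1));
  if (range.startsWith("^")) {
    // ^x.y.z: same major, at least y.z
    return maj === rMaj && (min > rMin || (min === rMin && pat >= rPat));
  }
  // ~x.y.z: same major and minor, at least z (patch updates only)
  return maj === rMaj && min === rMin && pat >= rPat;
};

function resolveHoisted(available: string[], ranges: string[]): string | undefined {
  return [...available]
    .sort((a, b) => {
      // descending semver order
      const [x, y] = [parse(a), parse(b)];
      return y[0] - x[0] || y[1] - x[1] || y[2] - x[2];
    })
    .find(v => ranges.every(r => satisfies(v, r)));
}

const available = ["1.0.0", "1.0.4", "1.1.0"];
// A lone caret range happily takes the new minor...
const caretOnly = resolveHoisted(available, ["^1.0.0"]);           // "1.1.0"
// ...but intersected with a tilde range, the install is pinned to 1.0.x.
const withTilde = resolveHoisted(available, ["^1.0.0", "~1.0.0"]); // "1.0.4"
```

So a package compiled against 1.1-only symbols breaks as soon as any sibling in the tree narrows the shared, hoisted API to ~1.0.x.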
C
There was a change made in core which depends on the new 1.1 API. Not "core" as in the repository; the actual core package, which depends on the 1.1 API. But the peer dependency version range in core doesn't match that: it says it should be able to work with 1.0. But we don't test with both, or compile against both, during development.
E
Wouldn't the real fix be to specify that, you know, as soon as core uses a new method in API 1.1, it should depend on 1.1? Basically, it should not be able to be satisfied by 1.0.x, and you should at least get a warning from your package manager that things don't line up, and then you have to update your API. Yeah, and then you have to actually fix things.
C
So, that's why... I mean, it should work at runtime. That's why I'm saying we should test against each version of the API, because we don't know which version of the API our user has.
C
I guess it's good that we have Ted on the call today, because he was the one that wrote the original versioning document. But as far as I am aware, and the way that I interpreted that document, if you update your SDK, it should continue to work with old versions of the API, according to the specification.
C
So we cannot just say "you need to update your API"; we need to test against the old versions of it somehow. Test against, compile against, whatever.
C
So yeah, it is backwards compatible. But backwards compatibility for users who are calling the API is very easy; backwards compatibility for implementing packages, which implement interfaces exposed by the API, is significantly more...
C
...difficult, and I believe that was not ever guaranteed.
J
Yeah, I think this is more... I'm not sure if it was Tigran or Bogdan who talked about forward compatibility. So this is having an older API work with a newer SDK, so, like, newer code versions, or the other way around. So effectively, something can say: okay, I need at least 1.1, and if you're on 1.0 you can upgrade, and then you can use 1.1. But if you're on 1.1, you can't necessarily use 1.0. Does that make sense? That's the difference between forward and backward compatibility.
C
I'll tell you the specific change that was made that broke compatibility: we renamed the attributes type from SpanAttributes to just Attributes, so that it could be used everywhere.
C
You know, so that it can be used for metrics and logs and stuff like that as well. And in order to maintain backwards compatibility, we export it under both names, so old code that references SpanAttributes continues to work just fine. So that's fine; it's backwards compatible. But then, in the OpenTelemetry core package, somebody, I believe it was legendecas, changed the type reference to the new name.
C
So then, when it was installed with the old API, which isn't aware of that name, the compilation broke. So it needs to be updated to reference the old name again. End-user code can reference the new name just fine, but in the SDK we can't, because we need to work with old versions of the API.
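The rename-with-alias pattern being described could be sketched like this. The type names match the ones mentioned in the discussion (Attributes, SpanAttributes), but the snippet is a simplified illustration, not the actual `@opentelemetry/api` source:

```typescript
// Sketch of the rename-with-alias pattern. In API 1.1 the type gains a
// general name plus a deprecated alias, so user code written against
// either name keeps compiling. The SDK, however, must keep referencing
// the OLD name, because that is the only one API 1.0 exports.
// Simplified illustration, not the actual @opentelemetry/api source.

// --- api package, version 1.1 ---
type AttributeValue = string | number | boolean;
type Attributes = Record<string, AttributeValue>;

/** @deprecated use Attributes instead */
type SpanAttributes = Attributes;

// --- sdk code that must compile against API 1.0 AND 1.1 ---
// Referencing SpanAttributes works under both versions; referencing
// Attributes here would fail to compile when API 1.0 is installed.
function recordAttributes(attrs: SpanAttributes): number {
  return Object.keys(attrs).length;
}

const count = recordAttributes({ "http.method": "GET", "http.status_code": 200 });
```

Since a type alias is purely compile-time, the alias costs nothing at runtime; the only constraint is which name the implementing packages are allowed to spell.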
F
I think the expectation... I'm looking at the upgrade-path doc that I wrote forever ago, which I realized I don't think actually got copied into the spec; I should fix that. But I posted a link in the chat to a doc that described, in theory...
F
...what we were thinking upgrade procedures would look like for end users. But I think implicit in there was the idea that if the SDK supported the latest version of the API, and the API didn't have backwards-compatibility issues, there wouldn't be any reason why older versions of the API wouldn't work with newer versions of the SDK.
C
Super duper. Yeah, so with that, we're all good there. It's just that our SDK, the current unreleased version, broke that promise, so we just have to fix it. Okay, yeah. And we have to find a way to test against that in the future, because we weren't testing against it, and we very nearly made the mistake of releasing an SDK that would have broken compatibility with the old API.
F
Yeah. Another useful test, and I might go around to maintainers and suggest this: one thing that would cause real hell in our ecosystem would be, you know, basically different libraries depending on different versions of the API, creating a dependency conflict, like a transitive dependency conflict, with each other.
C
Easy to do in JavaScript. And then it turns out, if that happens, multiple versions of the API are just installed, and they are... yeah, it's a nightmare, is the short of the situation, particularly in JavaScript, and I'm sure in other languages as well. Yeah, JavaScript has a concept of peer dependencies to get around this, which essentially just defers the decision to, I guess, what OpenTelemetry calls the operator: the end-user application installs a particular version of the API, and then all of the packages declare...
C
..."I will work with any of these versions of the API, but you have to install it." So it's not a dependency that's automatically installed; that's the way JavaScript kind of gets around it. But peer dependencies are handled differently across different versions of the tooling, and JavaScript is a famously fractured ecosystem, so different build tools handle it differently, and different versions of npm handle it differently.
C
The concept of optional peer dependencies is a fairly new one as well, and in recent versions of npm, peer dependencies are automatically installed, which previously they were not. So supporting multiple versions of anything in JavaScript is a total... But this...
F
...doesn't help. It's at least a little bit squishier in a dynamic language than in strictly typed runtimes, where you literally can't load up multiple versions of the same thing. But yeah, the two things we kind of want to avoid in our ecosystem are transitive dependency conflicts, right, where I can't, basically...
F
...where I can't upgrade my SDK to the latest version and get that security hotfix that I want, because some library or code that I don't control is dependent upon an older version of the API that the new SDK doesn't work with. That's the problem that we have in our ecosystem.
J
Just to, you know... based on the statement here, saying that the version of the SDK needs to work with the previous one, but it's also going to be released with the new one: that really sounds like you're putting in stone that we must support forward compatibility, which I don't think is really a viable option.
J
Which is fine. So if you go to, like, version 1.1 of the API, then anything that used earlier versions would continue to work; that's the backward compatibility I'm talking about here. Whereas if you go forward to version 1.1 of the SDK and then say that needs to work with 1.0 of the API, that's forward compatibility: saying that API 1.0 needs to work with newer versions of the SDK. And I don't think we want to play that game.
F
If that new version of the SDK doesn't support the old versions of the API that those other libraries are using, then you're now stuck, right? You can't upgrade the SDK to use the library that wants the new API, and you can't stick with an old version of the SDK, because it doesn't support the new API. So I think the only solution is to have some flexibility somewhere, and I think the place to have it is to just ensure that newer versions of the API aren't mutating things; they're adding things.
J
Yeah, yeah, and I agree with that bit, because that's saying that the newer API is backward compatible. So yeah, exactly; that's the only way, I think.
J
But what we just said earlier was that we also must be forward compatible. So if we have a newer version of the SDK, it must work with the older version of the API; that's the game I don't think we want to play. I think we want to say: well, we have a newer version of the SDK, which got released in tandem with the newer version of the API.
F
Maybe you're making a subtle distinction. That's just, like, literally saying that the dependency version listed for all those libraries, you should be able to override that with the latest, so that...
F
...they compile with a new version of the API and they'll still work, because you didn't cause anything that would break their usage.
J
It comes down to... what I think Daniel's saying is we have different packages using different upgrade paths, where they're using the tilde or the caret, right? And I think we need to sort of say: well, we're always going to be backward compatible, so therefore it is safe for people to always use the caret.
F
Exactly. In an ideal world, you also support the reality where someone has, like, pinned a library to some old cranky version, but you know, I think it's easier to override stuff in a dependency list than it is to tell someone they have to actually make a code change and rewrite their instrumentation. That's the nasty...
J
...This is the newer version of the SDK, so in this particular case it's easy, because the name is exported, so we just need to go back and use the older name. But if it was a new function that didn't exist in the older API, that's the forward compatibility that I'm talking about. It's like, yes...
C
Which, I mean, we did: we added an optional parameter to add a schema URL to getTracer, and because it's an optional parameter, it's okay; it still works, and that upgrade was fine and worked completely fine. We just accidentally referenced the new name of the type when we should have continued referencing the old name.
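The getTracer change mentioned here illustrates why adding a trailing optional parameter is a non-breaking API addition: every old call site stays valid. A simplified sketch of the signature, not the exact `@opentelemetry/api` surface:

```typescript
// Sketch: adding a trailing OPTIONAL parameter keeps every existing
// call site valid, so the new API stays backward compatible with code
// written against the old signature.
// Simplified signatures, not the actual @opentelemetry/api surface.
interface Tracer {
  name: string;
  version?: string;
  schemaUrl?: string;
}

// old shape: getTracer(name, version?)
// new shape: getTracer(name, version?, schemaUrl?)  <- still accepts old calls
function getTracer(name: string, version?: string, schemaUrl?: string): Tracer {
  return { name, version, schemaUrl };
}

const oldStyle = getTracer("my-lib", "1.2.3"); // old-era call site, still fine
const newStyle = getTracer("my-lib", "1.2.3", "https://opentelemetry.io/schemas/1.9.0");
```

The contrast with the type rename is the point: an additive, optional change needs nothing from callers, while changing which names the SDK references ties it to the newer API.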
F
Yeah — and I don't know how hard it is to create a test suite that just checks for transitive dependency conflicts, or any of these kinds of things, but it might be helpful to have something like that if you don't have it already, because it can be hard to guess whether these things work or not. That's the unfortunate reality: in almost every language, it's actually really hard to know whether or not you've broken backwards compatibility.
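One cheap form of that check — a sketch, not an existing OpenTelemetry tool — is to walk the resolved dependency tree (for example, parsed from `package-lock.json`) and flag any package installed at more than one version:

```javascript
// Sketch of a duplicate-install check over a resolved dependency tree.
// `tree` mimics the nested shape you'd get from parsing a lockfile;
// this is an illustration, not an existing OpenTelemetry test suite.
function findDuplicates(tree) {
  const versions = new Map(); // package name -> Set of installed versions
  const walk = (deps) => {
    for (const [name, info] of Object.entries(deps || {})) {
      if (!versions.has(name)) versions.set(name, new Set());
      versions.get(name).add(info.version);
      walk(info.dependencies); // recurse into nested (un-hoisted) installs
    }
  };
  walk(tree.dependencies);
  return [...versions.entries()]
    .filter(([, vs]) => vs.size > 1)
    .map(([name, vs]) => ({ name, versions: [...vs].sort() }));
}

// Example: two copies of the API resolved at different versions.
const tree = {
  dependencies: {
    "@opentelemetry/api": { version: "1.1.0" },
    "some-lib": {
      version: "2.0.0",
      dependencies: {
        "@opentelemetry/api": { version: "1.0.4" },
      },
    },
  },
};

console.log(findDuplicates(tree));
// [ { name: '@opentelemetry/api', versions: [ '1.0.4', '1.1.0' ] } ]
```

A CI job that fails when `@opentelemetry/api` shows up more than once would catch exactly the class of version-conflict bug being discussed.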
C
Yeah, we're running out of time. This is obviously an important topic, but I do want to make sure: if anyone absolutely needs to cover anything else, now is the time before the meeting ends.
C
K
C
He's on vacation right now; I have taken on this PR. I've made some cleanup changes on a local branch, but I haven't pushed them yet. He'll be back tomorrow, though, so I'll push my cleanup changes — and this is mostly ready for review. He'll be back tomorrow, and yeah, we'll continue working on this.
K
Okay, yeah, thanks. That was basically my question — and it won't use the newer protos, right?
I
So, from what I read in the spec, we're saying that an old SDK cannot work with a new API, right? But it is possible the new API will be introduced into the application with an old SDK. For automatically instrumented libraries we control it with the peer dependency. But if, for example, Redis wants to instrument the package internally — not use an instrumentation library — then they can depend on any API version that they want, and we can't control it.
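The peer-dependency mechanism mentioned here looks roughly like this in an instrumentation package's `package.json` (the package name and version numbers are illustrative):

```json
{
  "name": "my-instrumentation-library",
  "peerDependencies": {
    "@opentelemetry/api": "^1.0.0"
  }
}
```

With a peer dependency, the application — not the instrumentation — supplies the single copy of the API, so the resolver can surface a conflict instead of silently installing a second copy. A library that instruments itself with a regular `dependencies` entry bypasses that control, which is the gap being raised.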
F
Our thinking was that the SDK is only installed once — it's installed by the application owner, right? The libraries don't depend on it. So it should be possible, when the application owner is building their application, if they pull in a library that needs a new version of the SDK, then the application owner must upgrade to a newer version of the SDK. Which is why the new versions of the SDK have to work with the old APIs, and also why we want to be very slow and wary about deprecating SDK plug-in interfaces. In other words, we want to make it as easy as possible for end users to always just be on the latest version of the SDK, and not ever feel the need to be pinned at some old version of it.
I
F
Bingo. So that's why it's important to make sure that end users can roll forward to the latest version of the SDK without trouble, because we'll get people stuck if we don't do something like that.
I
Yeah, but it means that every package that depends on the API makes the user use a specific version of the SDK to use it, right?
F
Well, but hopefully — that's why it's so important to keep the API backwards compatible, so that the latest version of the API — sorry, the latest version of the SDK — is guaranteed to work with all those old versions of the API. As long as that works — as long as the latest SDK supports all the older APIs that might be installed in that person's application.
F
Yeah, I mean, that's because it's a cross-cutting concern. The moment anyone anywhere in your whole application stack decides they want something from the latest version of the API, then that application's going to have to bump to the latest version of the SDK. So as long as that upgrade path is not onerous — meaning that you're not breaking support with old APIs, including old plug-ins —
F
— then it's not a big deal to do that, and in fact you want people doing that for security reasons. This is the new world where we really care about security: part of ensuring that, as software developers, is ensuring that people can continue to consume new versions of your stuff without getting stuck somewhere.
I
I'm afraid that it might be different people: someone is responsible for everything that is OpenTelemetry in the application, and some other team member just wants to upgrade something, and he doesn't know that it will affect OpenTelemetry in some way.
F
That's a new finding for me. This is one of the reasons why backwards compatibility is so important, and yeah, I wish we could emphasize it harder. It's also one of the reasons why we have the API and the SDK so decoupled from each other, right — because it minimizes —
F
But if we can put some testing in place, or just generally ensure that upgrading to the latest version of the SDK is a smooth process for people — to the point that they would be encouraged, in their dependency file, to list —
F
— the version as, you know, just pull in the latest up to the next major version bump, or whatever. That's what we want to encourage people to do. And the hope is that if two teammates do something out of sync with each other — well, they're teammates, right, so they can at least talk to each other. But yeah, at the end of the day, when you're compiling an application out of a bunch of dependencies, you are forced to resolve all those dependencies.
C
F
Yeah, sounds good — which might be good advice to post somewhere. My hope is that when our APIs stabilize — when metrics and logs stabilize — we're going to encourage library developers to start shipping first-party, native instrumentation. Creating a guide for those authors would probably be helpful, and advice like this is probably the kind of stuff that you want to put in it.
C
Which right now is the latest, with a caret range. But then, you know, that's just the latest API, and it causes the problem that Amir was just talking about: now anybody that uses this new version of Redis that I just released — that I'm really excited about — they all have to update their SDKs, which, as the Redis author, is not something that I was intending, right? Maybe I'm not even using those new APIs.
C
I just used what the default was, yeah. And we need to make the guidance very clear: if you're not using new APIs that were released in 1.1, then you should depend on 1.0 if possible — yeah, 1.0 and higher.
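In `package.json` terms, that guidance would look something like this (illustrative — the version numbers stand in for "the earliest API release whose features you actually use"):

```json
{
  "dependencies": {
    "@opentelemetry/api": "^1.0.0"
  }
}
```

Declaring `^1.0.0` instead of `^1.1.0` lets the package resolve against any 1.x API already present in the application, so pulling in the library doesn't force anyone to upgrade their SDK.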
C
Extremely difficult. Probably what we will want to do — and I had actually written this in my notes this morning — is go annotate, in the JSDoc comments, every single method and property on the API with which version introduced it.
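As a sketch of what that annotation could look like — the `@since` values and the function body below are made up for illustration, not the real `@opentelemetry/api` history:

```javascript
// Sketch of per-member version annotations in JSDoc. The @since values
// are illustrative, not the real @opentelemetry/api release history.

/**
 * Returns a tracer for the named instrumentation scope.
 * @param {string} name - Name of the instrumentation scope.
 * @param {string} [version] - Version of the instrumentation scope.
 * @param {TracerOptions} [options] - Tracer options.
 * @since 1.0.0
 */
function getTracer(name, version, options) {
  return { name, version, schemaUrl: options && options.schemaUrl };
}

/**
 * @typedef {Object} TracerOptions
 * @property {string} [schemaUrl] - Schema URL to associate with emitted
 *   telemetry. Annotating this separately lets authors see that using it
 *   raises their minimum API version.
 * @since 1.1.0
 */
```

An instrumentation author can then scan the docs for the highest `@since` among the members they actually call, and declare that as their minimum API version.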
C
That would be helpful, because I think that when you tell users, depend on the earliest version of the API — you know, if you're not using new features, then don't depend on that later API — the immediate question is, well, which features are the new features, and how am I supposed to know that? Am I supposed to read your entire —
C
I think that is going to be a huge headache, particularly for first-party instrumenters — the early adopters. This is going to be a huge headache for them, and I don't think most of them are likely to look in the repo for a guide on how to do it, just because there's already a well-established way of doing it, yeah.
F
But that's also why it's important — this isn't a new problem that OpenTelemetry is inventing, right? It's very normal for you to depend on a library that depends on gRPC, and then you update that library and now that library is depending on a new version of gRPC, and that shouldn't be a big deal, because then you just go, okay, I have to upgrade my overall gRPC dependency to this new version — this library forced me to do it.
F
However, anyone who's worked with gRPC for a long period of time knows that this is hell, because they make incompatible version changes on a regular basis, and I have been in multiple code bases where I have gotten stuck for this reason: one library wants one version of gRPC and another library wants another version of gRPC, and it's not like they did something wrong by depending on those versions of gRPC.
F
Right — and you can get around this in some languages in some ways, but I guess what I'm saying is it's not the library's responsibility to sort that out. They should be able to take a dependency on the stuff you offer. It's just that we want to be different from the gRPC team, in the sense that we want to recognize that that would create a bad situation for us — even more so, because we have this centralized SDK thing that we want it all to pipe into.
F
So our problem is worse than their problem, in the sense that these things are going to interact with each other. But as long as the solution is just "we'll upgrade your SDK" — as long as we can just tell that to anyone who comes in who's like, "I have this problem, I pulled in Redis and wanted the new version" — the fix should be able to just be, "we'll update your SDK." Yeah.
C
F
It can be a little hard to know whether you've broken that stuff, right, because you're not going to have a compiler or something just tell you that you did it. But that kind of SDK compatibility, I think, is super key to us having a happy user base, because we can't do anything about the API skew — that's just a reality of a cross-cutting concern like instrumentation; that just comes with the territory. So, yeah.
F
Hopefully it's feasible to continuously keep a version of the SDK that's generally backwards compatible.
F
The other part of it to watch out for is SDK plugins. That's where, in the past, I often got stuck with frameworks, so that's the other part to think about: I want to bump up to the latest version of Rails or Vue or something like that, but I can't move forward on my framework, because I depend on these plugins that are kind of old, and the framework doesn't support those plug-ins anymore.
F
Yeah, and the thing we recommend in the versioning doc is: if you do need to change how the plugin interface works — you need to break it — take the approach of, instead of mutating that existing interface, creating a second one that's the new one, and then having a (hopefully not too inefficient) bridge under the hood.
F
That bridge supports both interfaces for, ideally, some long period of time, so that there's a window where the new plug-in interface is out and available to plug-in developers, but the old one still works — and there's room for those plug-in developers to migrate to the new plug-in interface before you eventually deprecate and kill off the old one.
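A minimal sketch of that bridge pattern — the `OldExporter`/`NewExporter` names are invented for illustration and are not OpenTelemetry's actual plug-in types. The host consumes only the new interface and wraps any old-style plug-in in an adapter:

```javascript
// Sketch of the "second interface plus bridge" pattern. OldExporter and
// NewExporter are invented names, not OpenTelemetry's real plug-in types.

// Old plug-in interface: export(spans) returns a boolean.
class OldExporter {
  export(spans) {
    this.received = spans;
    return true;
  }
}

// New plug-in interface: exportBatch(spans, done) reports a result object.
class NewExporter {
  exportBatch(spans, done) {
    done({ ok: true, count: spans.length });
  }
}

// Bridge: makes an old-style plug-in usable wherever the host expects
// the new interface, so both keep working during the deprecation window.
class OldExporterBridge {
  constructor(oldExporter) {
    this.inner = oldExporter;
  }
  exportBatch(spans, done) {
    const ok = this.inner.export(spans); // delegate to the old method
    done({ ok, count: spans.length });
  }
}

// The host only ever talks to the new interface:
function runExport(exporter, spans, done) {
  exporter.exportBatch(spans, done);
}

runExport(new OldExporterBridge(new OldExporter()), ["a", "b"], (result) =>
  console.log(result) // { ok: true, count: 2 }
);
```

Only when the old interface is finally removed does the bridge go away — and by then plug-in authors have had the whole window to migrate.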
F
If you need to — and if you don't need to, don't ever kill it — but that's the solution on the SDK plug-in side, which is the thing I wish more frameworks did, instead of coming out with a new version of their framework and all-new plugin interfaces that instantly break the entire ecosystem — because of course nothing supports those new interfaces; you just released them yesterday.
I
F
— you know, for some time. And to be clear, we've already done this, right? OpenTracing was actually the 1.0 interface and OpenTelemetry is the 2.0 interface, and the SDK currently supports both of them. So we've already done that jump once. But yeah, let's not get excited about a 2.0 API anytime soon.
J
Yeah, it would be 1.99. So I guess to summarize: in an ideal world, we're saying we want everyone to use caret and have it pinned based on their minimum level, and therefore we could, in theory, release a 1.1 SDK that depended on a 1.1 API and everything would work. That's the ideal world. The reality is, we have authors out there that are pinning to specific versions or using tilde.
J
That would actually block that from happening. So I guess, Daniel, as part of your identifying which version introduced what, we probably want to make that clear, saying: we recommend — we are guaranteeing — backward compatibility, so therefore, whenever you're consuming, you should use caret. Yeah, and then going forward, we can bring the SDK up, because you're trying to test a 1.1 SDK with a 1.0 API.
J
C
Yeah, and eventually, at some point, we may want to say the SDK has a minimum version of the API, right? We don't necessarily want to be supporting version 1.0.0 of the API 25 years from now — or at least testing against it every time we run the test suite — but for now there aren't enough versions to worry about that.
F
Yeah, but I mean, you know, Microsoft is planning to put OpenTelemetry into things like Office and Windows, and those are software packages that have a 10-year support window.
F
So, you know, old instrumentation never dies — that's one truism. You could probably do a Maven search for how much ancient Log4j stuff is still out there. It's definitely —
C
Yeah, okay. Well, I think everybody's kind of on the same page right now. There are obviously some drawbacks, but I think for now we agree that we have to test against the old versions. That's the takeaway, yeah.
F
And I think, by coming up with good solutions here, we can pay it forward to the JS community. One thing I think we can talk about to the JS community in the future is this kind of compatibility support and version management, because we have an extreme case of this problem as a cross-cutting concern.
F
But what we're actually doing here, I believe, is best practices: if you're an open source library author giving people your stuff, you should care about these kinds of concerns. And so if we can come up with a coherent way of describing how we do that here, I think that's a benefit for the JS community at large to model off of.
I
H
Fine, yeah, it should work just fine — and even —
C
— if you call the old version of the API with this schema URL — so if you used the new property right now, it would actually, because it's JavaScript, just be fine.
H
C
But we don't — the tests don't call the API, we call the SDK. So that's not a problem yet.
C
Okay, so see you next week. Yeah, see you next week — we'll probably talk about this then too. So, all right, don't be surprised to see this come back up.
G
I'm in Dallas, Texas. Dallas.