From YouTube: 2022-03-30 meeting
A
B
So how was it, how was the trip, yeah? It's pretty cool. I really like Porto, and I've been there once before, like 10 days, years ago, and had lots of good memories from it. So it was nice to be back as well, and it didn't disappoint at all.
A
B
B
C
C
No, no. I guess if it makes you feel better, I am completely lost, and it's my fault: this versioning system, I came up with all of it, so I feel responsible for causing these headaches for everyone. I mean, we need to solve it. At this point we can't do anything else until it's done; it's blocking everything. Yep.
B
I feel like we have to either make some hard decisions or, you know, the solution might not be conventional, and then I might want to bounce it off someone else before I kind of suggest it openly.
C
Well, so right now... I kind of summed it up in a PR I opened this morning to try to update the API and SDK versions in all the experimental packages, and Florina mentioned that you have to remove the WIP packages, since they pull in the 1.0.x stuff.
C
So the OTLP exporter update that Mark is working on is blocked by this dependency issue, because he can't compile, because the dependencies don't match. Removing the WIP package names is blocked by the update that he's working on, and fixing the dependencies is blocked by the WIP packages. So.
D
C
A circular block; it's a deadlock. Yes, this is a classic deadlock. And seven minutes ago he said "I guess one PR to rule them all is needed", and I think I agree. So Mark, what you're going to have to do, since your PR is the most complex one, is update the API and SDK packages in the experimental in your PR, and remove the WIP from the names. All right, sounds good.
C
I'm sorry to tell you this, but I think it's required. If anyone can think of a better solution than that, please speak up now.
C
C
What triggered this, I think, was the WIP packages: trying to hold packages and not release them by, you know, renaming them and making them private. They depend on old dependencies, and they didn't move along with the rest of the repo.
C
So definitely we cannot do that again; that's a lesson learned. I think I would advocate for going back to a single Lerna monorepo with independent versioning, and then we need to keep all of the packages in sync: all the SDK and core packages need to have the same version, and all the experimental packages need to have the same version. So we'll have to create tooling, I guess, or scripts, to update the versions in all of the correct packages.
C
C
C
I think that's the long-term fix, but we can't do that until... we need to fix this first. So we need to get through our current problem, and then we can talk about the long-term fix, probably next week.
C
C
Amir, you had asked... or I guess, Rauno, you had asked in Slack earlier about the versioning mechanics. Do you want to talk about that?
B
So yeah, and this is also versioning issues, but as far as I know it's not connected to what's happening with the experimental and stable packages in the core repo; it's contrib stuff. And by the mechanics, what I mean is that whenever we have a package updated that...
B
And for me it doesn't make sense. It basically means that whenever a core package is updated, everything basically runs in sync. We do have, like...
B
We are publishing different packages, but very often it happens that you have to update all of them in one go, basically, and keep them in sync. But basically that doesn't make sense to me with the current stability guarantees that the OTel spec requires us to provide, which is, for the API version, to only take into account the consumers or the callers, and not the implementers.
B
So, well, it started with the build breaking because some of the APIs weren't compatible, basically because we have a dependency on an older API, but one of the real dependencies was implementing, or was requiring, a newer API to build against. So, like, the trace SDK requires 1.1, for example, but the contrib package was built and dev-depended on 1.0, so the build broke.
A
B
So I guess that's something we strive for, but it's not the case right now. For example, the old trace SDK package does not support the old API anymore.
C
A
So, but I guess that's the meaning of a minor version bump, right? That you can still use an old version and it will work.
C
B
And this is actually where the ball got rolling: that invalidated the dev dependency. So we have to update the development dependency, and now we have to go back and update, in turn, all the other core dependencies, because they aren't aware... because they are capped, version-wise, by the SDK they support.
B
A
B
B
Right, it currently starts with a caret... for who?
A
B
Yeah, that's probably one way that can happen. However, this time it happened the other way around: we have a caret on the SDK, and that means that we lost support for an older API, and in turn that invalidated everything. But yeah, basically that happens both ways.
A
B
A
B
C
Yeah, hold on, before we do that: there was a PR here, the PR which... let me put it in the chat.
C
C
But when that was happening, it was causing CI failures only on Node 16, because of the difference in peer dependency handling: Node 16 automatically installs peer dependencies, or npm 8, I guess, automatically installs peer dependencies.
A
I'm not into the details. By the way, we can also take it offline, if we have other things on the agenda and we don't want to run over.
C
Yeah, so this is all a complex issue, that PR, and he opened a follow-up issue here to discuss it. So maybe we should discuss it offline on the issue, because I think he has some context that we don't want to miss, too. Okay.
C
Oh, I'm not sharing my screen. I'm sorry.
C
Okay, so we got through that. Instrumentation stability guidelines: I don't know if anyone else has seen this before, but there is a PR open in the specification which defines what it means for instrumentations to be stable.
C
This is probably, I would expect, at least interesting to you, Rauno, since you do a lot of our contrib work, and probably also you, Amir, since you have a bunch of contrib packages as well. But the short version is that unstable instrumentations can do whatever they want.
C
Once they're declared stable, there are two different types. The first type does not have a schema URL, and can never make a breaking change to the instrumentation that it's outputting, so it can never change the name of an attribute or remove an attribute.
A
C
I encourage you guys to read this in your own time and make sure that it makes sense to you, and that there's nothing that concerns you. Since the two of you do most of our contrib management, I think your opinion here is important.
C
So yeah, so hold on. I also asked that question down...
A
C
A
C
Complex. I think, instead of having a spec just for Fastify, it would be a spec for HTTP frameworks in general; like, they would try to generalize it. But we also did talk about...
C
Somebody else brought it up, and, yeah, here we go. So I suggested having, like, an extension namespace where you were allowed to do custom things, and they said, you know, "interesting suggestion; if there's demand, we can talk about this." And, you know, I replied here: there's definitely demand. People will definitely want to have custom attributes, as long as we properly namespace them.
C
I think it's not been added to this PR, but I will add another comment here after the meeting to make it clear that we want that, and then I'll send it to you guys in Slack, so you can either reply or thumbs-up or whatever.
A
C
C
Yeah, so that was essentially what I brought up. You guys all already know this: we allow the people that contribute a contrib package a lot of ownership over those packages, and I don't want to say "oh yeah, you can't do this" if they want to add some attribute.
C
Okay, there is a new metrics PR open. This is a relatively minor fix, but it needs review.
C
So just please review that. And there's a handful of metrics to-do topics; most of these are blocked on existing PRs, so they're not things that we can exactly pick up right now. One exception is the OTLP transformer for the new types, so I'm working on that.
C
Updating the OTLP metrics exporters is what Mark is working on, and he has been working on that for about a week. He's blocked on the versioning issues we were just talking about, but hopefully he'll be unblocked by the end of the day. There is a draft PR to handle multiple async callback registration, which depends on the shared meter state PR, which is this one.
C
Meter identity also depends on that one. And we don't have any work done on the aggregation temporality controls, for which there's a tracking issue but no PR. I don't believe it's blocked on anything, so if somebody wants to volunteer for that, great; if not, then it seems like legendecas is banging out these metrics issues left and right, so if nobody takes it, he probably will.
H
So, one comment regarding OTLP metrics exporters: I assumed the OTLP metrics exporter will introduce... there will be a transformer, right?
C
Yeah, we'll eventually use the transformer. Right now we're just trying to get everything current and updated to the point where we can remove the WIP packages and stuff like that. But I think using the OTLP metrics transformer is the next step, which will remove a major headache, because currently the metrics exporters depend on the trace exporters, which I'm not sure how that even happened to begin with. But they do, and it is contributing to our versioning headaches at the moment.
H
H
Moving to the OTLP transformer: doesn't the OTLP exporter actually have to be a metric reader as well? Yeah. Because currently the Prometheus exporter, which has been updated to the new SDK, actually implements the metric reader interface, or extends the metric reader. So I wonder if the current PR should also make it a metric reader, or do we have any information there?
E
C
C
Okay, I guess before we move on to discussing dropping support for older Node.js versions, I wanted to ask Andy about this Express PR.
B
C
Okay, I guess fixing the build is probably depending on an experimental release first, isn't it?
B
No, not really. I mean, we would have that done... it depends on me being unsure how to fix it, actually. Like, one easy way would be to just update every core and API dependency to the latest, and we would be golden, which I haven't wanted to do yet. But I think it's increasingly clear that there is no other way out. But I would like to discuss that with you, perhaps, to bounce it around a bit and then decide what to do.
B
That requires an update, and that again molds into some other issues, which require some other packages to be updated, and so on. It's totally solvable, as you mentioned. Removing hoisting will also make the issue smaller, but will increase the build times by a factor of two or so, I think: it's ten minutes right now, but it's 25 after removing hoisting, for example. Which we could do, but it's a lot.
B
On the issue: really sorry about keeping your hand hanging, but I mean, you don't need any further input for this anymore, and I will merge it ASAP once we get the versioning clear.
C
J
It's fine, I get it. I'm chilling as long as everything else is resolved. You know, we're not blocked by it or anything, just trying to help out. Okay.
C
I put "discuss dropping support for older Node versions" here, but I think actually I would recommend that we push this discussion until after our current issues are resolved.
C
Unless you guys really want to try to tackle this now. I can tell you what I discussed with Ted: the API version we should try to keep on one-dot-something for as long as possible, including supporting old Node versions.
C
The API should be as stable as we can reasonably make it. The SDK packages we can version; we can have major version revs as long as we don't do them too often, and in those major version revs we are sort of expected to drop old runtime versions that we no longer want to support.
C
So I talked to Ted about the Node release lifecycle. It kind of drops a major version once a year; so each year, you know, 12 is dropped, and then 14 is dropped, and then 16 is dropped. And he said that revving the major version of the SDK once per year should be no problem, as long as we don't do it like once a month, or four times a year, or anything like that.
C
So I think if we want to drop those old versions, we can consider a major version rev of the SDK, but we should probably do that along with, like, the metrics release or something like that. And if there are other headaches or deprecations we want to make, now is probably the time to consider those, so that when we do the major version rev we can drop them.
C
I think the biggest pain is that Mocha doesn't support old versions of Node, which means we can't update our...
C
We can't update our dev dependencies and our testing infrastructure, which also was blocking the webpack update. So we can't update the browser tests, because the newer version of the browser tests requires the newer version of Mocha, and the newer version of Mocha doesn't support the old versions of Node.
C
But the bug fixes and stuff like that are not being backported to these old versions of our testing infrastructure. And it's only dev dependencies, so it's not the end of the world, but it's something that I don't want to get too bad: it's going to be a lot of work already when we do finally update them all, and the longer we push it, the more work it will be.
I
Okay, so I was just gonna say: thumbs up on not supporting old versions of Node indefinitely, because of the issues that you're mentioning. I'm just kind of curious, generally: I find, in the real world, users do not track super well with Node end of life; like, people will run an end-of-life Node for longer than they should.
I
But I don't want... yeah, like, I think there's a balance here. You don't want to enable that forever, but I think you also maybe do not want to be too aggressive on how quickly you deprecate.
I
So I was kind of thinking about whether we should just adopt a formal policy, and just kind of throw something out there. I think the policy that would make sense to me would be, like, Node EOL plus a year, or something. I don't know if that is too lax, but that would give users a year, I guess, after Node end of life.
C
Yeah, I think end of life plus a year is probably pretty reasonable. I was actually looking for what Amazon's policy here is, because, you know, for example, if Amazon...
C
"Phase one start, phase two start"; I don't even know what that means. If Amazon, for example, keeps a runtime for a year past when it's end of life, then we probably want to keep it for a year past Amazon, but I don't know what their policy is. Phase one: no longer applies security patches. Phase two: you can no longer create or update functions that use that runtime.
C
B
C
C
Six months beyond that, yeah. If we do a year and a half, or, yeah, 18 months after the official end of life, then we will not support it anymore.
I
C
And we're still supporting that, so I think at least Node 8 makes sense to drop; I think that's the one blocking, like, Mocha and stuff like that. I think Mocha currently supports Node 10, but a year from now they'll probably support 12; you know, people move on. For now, I would recommend probably 18 months, just so that we can make sure we support all the way through Amazon's support cycle, because we don't want people that have old Lambda functions to suddenly not be able to use OpenTelemetry.
B
I mean, if we drop support, they can still use the old libraries, right? So in that sense... usually, very seldom are the old applications that use the old libraries updated, and that's usually the reason why an old Node is used, right? It's because they don't want to update it.
A
C
...that we state in the policy, say, Amazon's support. I just think that... it looks like Amazon is about 10 months, and I suggest that our policy is at least that long. So I would say 12 months or 18 months would be fine for me.
A
A
So even if we drop support for Node 8, then we still need to support it for another year from now, right?
B
B
Basically, I don't know, I feel like it's not a big deal even if we would do something as dramatic as cut the release when a Node version is end-of-lifed, and say, like, okay, this is when we basically plan to drop support, and not do any updates in a year, but...
A
C
G
A
C
C
C
C
Okay, the last... the only thing that I had here was PRs that have been open for more than a week, that we talked about last week. One is the changelog PR, which is mine; it's relatively simple, but it's kind of a policy change, so I was hoping that we could get more than just a couple of reviews on it before we merge it, just to make sure everybody's happy with it. And this meter shared state, I believe... oops, what did I do?
C
I believe Mark reviewed this, and I did, but it would be nice to get some additional eyeballs on this too. Again, I know that metrics PRs can be fairly complex, but, you know, if we can't get any more reviews on it, I will merge it soon anyways. But it would be nice to get a few more.
C
That's it! That's all that I had for you guys today. Anything else that anybody wants to discuss before we drop off?
C
Yeah, I have time. Or, I know it's late for both you and Rauno: would you prefer to do it in the morning tomorrow, my morning, your afternoon, or would you rather do it now?
C
Yeah, that's fine for me. Okay! Anyone that wants to stick around is welcome to, but I guess we can consider the meeting officially over for now.
C
Okay, I don't remember exactly where we left off, so...
B
Maybe I can start with a question, if that's fine.
A
B
C
No. So if you update your API, the API would potentially have methods that have not been implemented by your SDK. If we add some new method to the interface, then the SDK won't understand it; you know, it won't have implemented it. So we definitely... I think the answer to that is just no: if you update your API, you must update your SDK. The other direction...
C
I think the answer would be yes: you should be able to update your SDK and use old versions of the API.
A
Okay, three packages should depend on the least supported version of the API, because we list the latest version, but it should probably be 1.0.0 everywhere.
C
Right. So the most recent API that we released added some new interface; I don't remember exactly what it was. But if a contrib package doesn't use it, then it should support the lowest API version that it can. So if it's only using the basic tracing methods, it should support 1.0 and higher APIs, in order to give the end user as much flexibility as possible to pick their API version.
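As a concrete sketch of that guideline (the package name here is invented for illustration), a contrib package that only uses basic tracing APIs would declare the widest peer range it actually supports, while pinning its dev dependency at the lowest version of that range so it compiles against the oldest API it claims to work with:

```json
{
  "name": "@opentelemetry/instrumentation-example",
  "peerDependencies": {
    "@opentelemetry/api": "^1.0.0"
  },
  "devDependencies": {
    "@opentelemetry/api": "1.0.0"
  }
}
```

The loose `^1.0.0` peer range is what gives end users the flexibility to pick their API version; whether the dev dependency should sit at the bottom or the top of that range is exactly the tension discussed in the rest of this meeting.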
C
C
Wait, I'm trying to find this PR. I didn't see this PR until after it was merged; I just put it back in the chat. But the CI was broken only on Node 16, because Node 16 is installed with npm version 8 by default, which has different peer dependency handling.
F
C
We could relax this and change... there's a flag to use legacy peer dependency handling in npm 8, which we could do in our CI. It was only broken in our CI; it was not broken for end users.
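The flag being referred to is npm's `legacy-peer-deps` option, which makes npm 7+ skip automatic peer dependency installation the way npm 6 did. A CI-only opt-in could look like the following (this is a sketch of the idea, not the project's actual CI configuration):

```shell
# Either pass the flag per-invocation in the CI job...
npm ci --legacy-peer-deps

# ...or set it in an .npmrc used by CI:
# legacy-peer-deps=true
```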
B
C
No. So the legacy peer deps would have solved this issue for us, because it would have installed the dev dependency version.
C
...dependency version. But for end users, we want them to be able to specify their versions. The issue for us was primarily caused by hoisting, as far as I know, because when it hoists the API dependency, it tries to find one that matches everything; it was installing the old API, and then the tests would fail because they were expecting methods that don't exist on the old API.
A
C
B
C
C
C
Yeah, it could be 1.0.0. Either 1.0.1 or 1.0.2 is completely broken and unusable, but it could be 1.0.0, you're right, because they would just get the latest one anyway.
L
C
Technically... so in Node 16, it installs peer dependencies by default. Technically, this 1.0.4 that was getting installed does satisfy the range, the old range; it's just not the one that you would expect.
B
One thing that might affect what version of the API is installed is the fact that the `latest` tag is still 1.0.4 on the API; the next one is 1.1. So npm will not actually install 1.1 unless the dev dependency, for example, or any other dependency, specifically asks for it.
C
B
E
E
C
I mean, it seems like a bug in npm. It probably looks at peer dependencies, looks at the range, installs the latest, then sees that the dev dependency doesn't match and fails, when it should look at this range and that range and find whatever version matches, regardless of tag.
B
Yeah, that's clear. This kind of complex trying-to-satisfy-everything that we would think happens is not happening at all, from my tests. All the information is there, and it could potentially find out which versions satisfy everything, but it doesn't do that; you're right.
C
C
Well, it doesn't fail during the compile step. It fails during the install step, because there's a peer dependency consistency check in npm 8, and that was failing.
C
...before this PR was merged. Right, so what npm is doing is it looks at this range, and it looks at the latest, which, if we go to the API package here, the latest is 1.0.4, so it installs that.
F
C
That's an exporter. Oh god, I was looking at the highlighted thing on the left; I was like, what is going on? That's also an exporter. Let's see. So if we look at the core package: if we look at this one, it's the same range, which we all agree is correct. Dev dependency: ~1.1.0. That's the same ~1.1.0, and the full range.
B
B
C
B
C
B
Another thing is the timing: when was this merged? Were the other versions already bumped or not? That's another question. Okay, so we could not actually... core. We couldn't update the core dependency.
C
C
C
These 1.0.1s are all, I think, correct, because if we look at the core package.json, you'll see 1.0.1, and API 1.1.
A
C
We should try to make a PR that reverts that peer dependency change to 1.0.0, because that's what's causing problems in contrib, and it's only three packages for some reason. So...
A
A
D
A
It's not implementing anything from the API, right? It's just using the types. We can see what it's importing.
C
B
Now, I mean, if the peer dependency is a caret, then the dev dependency could also be a caret, right? Oh, it is, sorry.
B
B
Yeah, but okay, but let's do it: 1.2, caret 1.1.
B
F
F
B
A
A
A
C
B
B
B
That supports them, yeah, you know, in a way. But that's how we usually do it, or, as I understand it, right now we do it by not tagging it as `latest` but having it as `next`. So we can update all the core packages first and require 1.1, without actually releasing the 1.1 API as `latest`.
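The `next`-tag flow described here relies on npm dist-tags: a publish tagged `next` is installable by exact version or by the tag, but a bare `npm install` keeps resolving `latest`. A sketch of the sequence (package name and version numbers taken from the discussion, not from an actual release):

```shell
# 1. Publish the new API under the `next` dist-tag; `latest` stays at 1.0.4.
npm publish --tag next

# 2. Core packages can now depend on and install the new version
#    explicitly, while plain installs still resolve `latest`:
npm install @opentelemetry/api@next

# 3. Once the core packages are updated and released, promote it:
npm dist-tag add @opentelemetry/api@1.1.0 latest
```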
B
C
There were good reasons, but the reasons were mostly about not accidentally introducing breaking changes, and things like that. I think we can trust ourselves not to do that.
B
True, but then... I feel like, and I'm not saying it's not the best option we have, but I feel like the different packages, logically thinking, should be able to depend on different versions, and the looser version is semantically more correct, perhaps. And we would avoid issues and create a strict, or simple-to-follow, rule that probably makes it clearer for our users, but we wouldn't necessarily solve the issue we are having.
A
B
But we can essentially do the same thing right now, today, by publishing everything at once, right? Wouldn't that work as well?
A
C
C
C
M
G
A
Yeah, how long is experimental supposed to live, like...?
C
I mean, I don't think everything will ever be stable. Once metrics are stable, we'll have logs, and once logs are stable, we will want, like, a friendly... I want to have that Node SDK package working. Ideally, end users should only ever install two packages: they should install the API and the SDK, and everything works, and then obviously whatever instrumentations they want to install. But installing, like, five different core packages with weird names is just a bad experience.
A
I think nothing here... none of the options is really a good one. You just have to pick the one that seems most reasonable.
C
I think, for now, we don't have to move the API back to core right away. I think we merge core and experimental back into a single Lerna repo, back into a single, you know, whatever, and get through this release. And then the next time we release the API, we will try to do it with the `next` tag: then we update the core repo, then we tag the API, and we'll see how painful that process is. And if it also has problems, we'll consider moving the API back at that point. That sounds...
A
B
Yeah, two questions. First: the trace SDK peer dependency versions, are we trying to get them corrected for the next release, or should I do all the updates on contrib before that's fixed?
B
Awesome. But that means that we will have a broken build on contrib until we do that, yeah, right. Okay, second question: the experimental ones, the packages right now... okay, there is, like, the metrics stuff, but there are also exporters and...
B
One of the reasons we still have them experimental now, or back to experimental, is the JSON exporter, the JSON format, right?
B
Yes. Would there be a possibility that we actually... sorry, I would suggest that, but... to actually release that as a 1.0 too, but without the JSON exporter? Like, is that required to be there? We know that it's going to be added at some point, but right now it isn't there. I guess it's not spec compliant, but it's stable as it is, right? It's without some features, but it's stable; the parts that are there are stable. Could that be considered stable, but for now...?
C
No, they're not, because the JSON exporter could export different field names. Like that one field that was recently renamed: the instrumentation library is renamed to instrumentation scope. So if I was using the JSON exporter, I would expect a major version bump on that change. True.
B
A
C
C
It's brutal. It's a legacy of a previous maintainer. I don't...
C
...want to get into it. This has been a headache for me for two years, and I'm extremely excited to get rid of it.
A
Do you know when the JSON exporter is supposed to be stable in the spec? Yeah, like...
A
Stable. So it means we can release it as stable once we verify that the de facto implementation is working as expected, right?
A
F
Experimental, okay, but...
C
D
D
C
I swear there was a proposal somewhere; I just obviously can't find it.
A
What is going on... but this one, this one is also old, but this is what we want, right?
C
It was about changing field names, because the field names were... oh, we're encroaching on a different SIG meeting. We all share the same rooms, so we need to get out of this room.
B
Okay, cool. It was pleasant. Thanks, guys.
D
D
N
All right, so we were able to make it to the meeting. I don't know what happened, but it seems that the calendar changed the link again; I'm not sure. But anyway: do we have Noah and Raj?
M
I was not able to ping or contact Noah over Teams. I don't know; he might be trying to join the other link, I can see.
M
N
Noah, hello, how's it going? So, before we dive into that, I think just a quick update on what we did recently. So I think the examples now are completely done; we are done with that. I detected a bit of instability on the tests on the CI paths.
N
I think I'm going to set up something to run them, to see which ones we should prioritize, but we have some trouble on that. And I have a PR on the SDK, because the defaults were not working for HTTP protobuf for OTLP, but I think that is going to be something quick that is fixed. And I think we can... so we have time to discuss and go over the stuff.
N
I think we can switch to talking about the proxy and the dependencies that we had mentioned in the past, and better use the time of Noah being with us.
N
All right then, let me just give a quick recap of where this comes into play. So, since quite some time ago, we had heard about the wrapper; Greg had a prototype with that, and that deals with the dependencies, and with the traditional way of dealing with dependencies injected by a profiler.
N
But the part that, I would say, we are missing in the picture... and perhaps, since there are a lot of folks here that were not part of that discussion, it is worth repeating what the wrapper is, how it is implemented, and how it works. But I think the main thing, at least in my mind, is that we need some proxy to the SDK, because we want to use the OpenTelemetry SDK.
N
You know, it's not like we are implementing a tracer where we have control of the type, so that we can just wrap it. We need to somehow proxy this communication of the wrapper type with the OpenTelemetry SDK.
N
So that's my thinking. But perhaps, as I said, for the benefit of the rest of the folks here that didn't see the demo from Greg, I think about almost a year ago: a brief explanation about the general idea, and then we move to the proxy to the SDK. Sounds good.
N
All right. I think I'll delegate to you and Raj to go through that and explain it to folks.
N
Yeah, I... I think, you know... okay.
K
I mean, I'm not sure I saw Greg's demo either. I was in some vague sense aware that he was creating a wrapper, but I don't know exactly how he constructed it, or what the invariants were, or things like that. So I'm probably not a good person to explain it; I mean, if anything, I would love having someone tell me what he did.
N
I'd say that he implemented the ideas that we discussed, for the DiagnosticSource actually, because it was a smaller surface than the Activity itself, sure. And he basically used reflection, with the thing falling back: does this version have such a method or such a property?
N
If yes, I use that; if not, I use a backing property to implement it, so the code that's using the DiagnosticSource can deal with a stable... not semantics, but the stable interface of the object. Okay, right. So that was the demo that Greg did. I think this was the idea that we discussed quite some time ago, and...
H
N
It's the traditional way of doing this kind of thing, right. So I think the question then becomes: when Raj started bringing in the startup hook, we also went down the path of "hey, we are going to change what we're going to do here", because initially our plan was to kind of have our own tracer, but in the end, you know, we need to use the OTel SDK that provides the tracer, so that code will not be using our wrapper type directly.
N
So
what
comes
to
my
mind
is
okay,
if
you
want
to
make
this
work
with
this
kind
of
wrapper
that
deals
with
all
these
versions,
we
need
somehow
to
communicate
that
with
the
sdk.
N
A
N
To
proxy
this
conversation
with
the
sdk,
if
it
was
just
us
doing
a
wrapper
type
that
we
do
here
with
the
source
code,
I
think
we
understand
how
that
happens.
You
know
is
this
next
step
about
how
we
could
possibly
leverage
the
sdk
and
use
the
sdk
with
this
wrapper
to
to
avoid
all
the
dependency
issues
that
we
are
facing
when
trying
to
use
something
like
this
startup
hook.
M
Okay. One other thing I also wanted to call out: the issue is not only with DiagnosticSource. For example, if a user brings an old version or a new version of the OpenTelemetry SDK, and the instrumentation is going to bring something different than what the user has, we have the problem there as well. And we need to take a dependency on our logging libraries too, so we have these issues everywhere.
M
K
Okay, so let's see. I mean, again, I don't want to claim that I have fully thought this through, that I have a design in my head, and that all you need to do is take this design and implement it and everything will be glorious. I have fragments of a design; I have thoughts about it, but I'm sure it would take further fleshing out. There might be issues that need to be accounted for before you get something that finally works.
K
You know, what would happen at build time is that NuGet would recognize: oh, you have this dependency with a certain version, and you have another path to the same kind of dependency, but maybe with a different version — and it would do, at build time, some arbitration, some heuristic that says: okay, well, what version should we actually load, given that these two different dependencies both depend on System.Diagnostics.DiagnosticSource, but of different versions?
K
And I don't even recall what all the logic there is. I think it's something like: pick whichever version is higher, plus something about whichever dependency chain is closer having more preference. But regardless, you don't have the benefit of that, because this is not happening at build time — you're injecting at run time.
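The build-time arbitration being described can be sketched as follows. This is a deliberately simplified model — NuGet's actual resolution rules are more involved than this — but it illustrates the point being made: a single version per simple assembly name is chosen before the app ever runs, which is exactly the step a runtime injector does not get.

```python
def resolve(candidates):
    """Pick one version from competing references.

    candidates: list of (depth_in_dependency_graph, version_tuple).
    Roughly: "nearest wins" first, then prefer the higher version
    among references at that nearest depth. A simplification of the
    real NuGet rules, for illustration only.
    """
    nearest = min(depth for depth, _ in candidates)
    return max(ver for depth, ver in candidates if depth == nearest)

# App -> DiagnosticSource 4.7 directly; App -> SomeLib -> DiagnosticSource 6.0.
# The direct (nearer) reference wins at build time.
picked = resolve([(1, (4, 7, 0)), (2, (6, 0, 0))])
```

At run time there is no such arbitration pass: whatever assembly got bound first is what everyone downstream sees.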
N
Yeah, just to make this very clear: actually, because of these difficulties, we—
A
N
We are kind of opting for: hey, let's try to force build time. But that limits the scenarios, and Raj came to us and talked about the startup hook, which doesn't need anything at build time, and of course we have interest in that. But on the other hand, we perhaps limit ourselves to build time if the solution with the startup hook is too hard to get working, right.
K
Right, right. So, yeah, the startup hook helps you load some things, but it's certainly not a panacea, in the sense that it doesn't let you load anything you might like, and in particular it doesn't let you upgrade versions of assemblies that the app has already taken a dependency on.
K
You know, I was talking this over somewhat with Jan, and I apologize if there was confusion before.
K
I was under the mistaken impression that Application Insights was already doing some other things to upgrade dependencies, and that always made me somewhat nervous; but I was under the impression they were doing it and it was working okay, and I was like: all right, well, if they've proven it works okay, then I guess I can put my nervousness to the side. I was wrong — they weren't doing that. So basically all my nervousness came back, which is really to say:
K
I don't view it solely as a technical obstacle that we can't upgrade the dependencies. I view it as: for the sake of reliability and good backward compatibility of the auto-instrumentation, even if we had a way that would just upgrade your dependencies right now, we shouldn't use it, because it introduces a lot of risk into an auto-instrumentation engine — you put it in there and, all of a sudden:
K
The app starts acting differently than it did before, because you have changed the version of the dependencies the app was working with. So the strategy that I'm advocating is that we would not change the versions of dependencies that the app is working with, and instead we find ways to accommodate them.
K
But, like you said, Paulo, if it wound up being overly difficult, I still think you always have the option to say: hey, there's only a range of flexibility that we're willing to offer, and beyond this it's just too difficult for us, it's too onerous. So at some point you could still kick it back to the customer and say: hey, you're just outside of what we're capable of supporting.
K
If you want to be in this auto-instrumentation game, we need at least this much from you: you've got to at least go do this one thing and you'll make our lives a lot easier. And it's totally up to you guys to decide how much effort you put in to avoid customers needing to lift a finger to help you. So, in terms of avoiding conflicts with their dependencies at run time, I mean, the main—
K
—scope: for the rest of the application. So, for example, you can load assemblies in there, and if the app code says, oh, load such-and-such assembly or such-and-such type, then — as long as they're not explicitly doing it against your AssemblyLoadContext, which presumably they wouldn't, because you'd never leak the object to them to begin with — they just won't recognize that it's there. So that's one.
K
The other thing you still have to be careful of is that, even within your private AssemblyLoadContext, the default context will take precedence if you tell it to load something that has the same simple assembly name. So if you say, I want to load DiagnosticSource in my private context, and DiagnosticSource is already loaded in the default one, that's the version you're going to get — it will just fall back. That's my recollection.
K
I think the exact dependency-loading rules are documented now, but I believe that is what happens. And so what you can also do, to really isolate yourself and avoid even those sorts of fallback—
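The fallback behavior recalled above can be modeled roughly like this. Python dictionaries stand in for load contexts here; the real .NET AssemblyLoadContext resolution is more nuanced than this sketch, and the assembly names are examples, not real package contents:

```python
class LoadContext:
    """Toy model of an assembly load context keyed by simple assembly name."""
    def __init__(self, assemblies, fallback=None):
        self._assemblies = dict(assemblies)   # simple name -> version string
        self._fallback = fallback             # e.g. the default context

    def load(self, simple_name):
        # Models the behavior described in the discussion: if the default
        # context already holds an assembly with this simple name, that copy
        # shadows the one staged in the private context.
        if self._fallback and simple_name in self._fallback._assemblies:
            return self._fallback._assemblies[simple_name]
        return self._assemblies.get(simple_name)

# The app's default context already bound an old DiagnosticSource.
default = LoadContext({"System.Diagnostics.DiagnosticSource": "4.7.0"})

# Our private context stages a newer copy plus our own assemblies.
private = LoadContext(
    {"System.Diagnostics.DiagnosticSource": "6.0.0",
     "Vendor.AutoInstrumentation": "1.0.0"},
    fallback=default,
)
```

In this model, asking the private context for DiagnosticSource still yields the default context's 4.7.0, while assemblies unknown to the default context resolve privately — the "fall back" being described.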
N
Yep. The question — I see that resolving conflicts — if I recall correctly, in the beginning of .NET, around 2000, we had a lot of tools doing similar things.
N
But I worry about things like the context: how, for instance, things are going to end up seeing each other and sharing the same context, you know.
N
K
Then that gets to, let's say, the second part of this chat: now you've created isolation, but then you say, but I didn't want complete isolation — I actually did want to share some stuff. And so now, yeah, now is where you start building either proxies or bridges or something that connects the little isolated island that you've just gone and built for yourself to the rest of the application.
K
And one thing — I remember talking about this with Greg in the past, and I was asking him this question. For activities, there were sort of two pretty obvious places where the action of the profiler leaks right back into the application.
K
One is if you were to change the default id format. If you go far enough back in time, the default id format for activities was the hierarchical id, whereas all of the OpenTelemetry stuff is based on the W3C standard.
K
K
And potentially, if the logic of the app didn't properly handle that — or maybe even the app works okay, but now it actually changes the telemetry that the app logs via some other channel: ILogger will log your ids, or the ASP.NET error pages or something might put those correlation ids in. So it's something where a user could be like:
K
Oh, all of a sudden this page changes when the auto-instrumentation is enabled; or, all of a sudden, the content that gets logged into my independent logging system — those ids — changes when the auto-instrumentation is on. And, you know, at the time I think Greg said: yes, that's a good thing, we want that.
K
I guess I would question that decision. If the goal is to be highly reliable and to not impact the behavior of the existing app, but then at the same time you are deliberately changing the behavior of the app, there's some amount of conflict there. So I might suggest that maybe that is not the right behavior, and that instead you would actually say: no, I'm not going to change that default state.
K
K
...that has the id format that I want, as my OpenTelemetry auto-instrumentation. And then, I guess, the same thing, but to a slightly lesser degree, again with trying to share the chain: of course, if OpenTelemetry is pushing its own links into that chain, and the whole chain is visible to the app, again we're having the OpenTelemetry auto-instrumentation change the state that is visible to the app when it sticks links into that chain. And so again, it might be:
K
The answer is: let's not try to share the same chain. Let's just observe whenever the app injects something — let's observe the DiagnosticSource callbacks that occur when activities get created, and ignore the state of the app's chain. Let's just build our own little private chain off to the side that has what we want. It does raise maybe a fuzzier question of:
K
If the app was using a new enough version of DiagnosticSource, would you still want a totally isolated chain? Or at some point would you say: okay, at least they agree on the id format, so if they agree on the id format I'm just gonna let it all be one chain; but if they don't agree on the id format, I'm gonna keep my stuff over here, let the app do its thing over there, and I'm not going to mix them.
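The "cut" being weighed here — share the app's Activity chain only when both sides agree on the id format — can be put in sketch form. The version threshold below is invented purely for illustration; it is not an actual documented boundary:

```python
# Hypothetical cutoff: versions at or above this are assumed to default to
# the W3C id format and so can share one chain with our instrumentation.
W3C_SINCE = (5, 0)

def chain_strategy(app_ds_version):
    """Decide whether to join the app's Activity chain or keep a private one."""
    if app_ds_version >= W3C_SINCE:
        return "share-app-chain"   # id formats agree: one mixed chain
    return "private-chain"         # only observe the app; keep our own chain
```

The point of the sketch is only that the decision reduces to a single version comparison once a cut is chosen.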
K
I'll be honest, I don't feel like I know what the right answer there is. I mean, I do see some advantages of allowing them to mix, and of allowing the auto-instrumentation spans to be embedded into the chain that the app would otherwise see if it didn't have auto-instrumentation enabled. That might be one where you have to take your best guess.
K
N
Let me pause. It seems to me — and I invite the other folks to also give their opinion — that we then have to have kind of a version where we make the cut. It could be the version with the id format, but there will be some version where we make the cut and say: hey, from here on, we do observe.
A
N
Below that, we don't participate, or we collect minimal information, because it's too cumbersome to coalesce these things into a single state. In that case, I think it's reasonable to have a version where we say: hey, below this it's fine — we ignore it, or capture partial—
A
N
—information; and above some version: okay, here we integrate all these things. This seems reasonable to me, because I think, for us—
N
We have to decide how much we want to invest in this path. Because the thing is, there was somebody who commented on the issue opened on the runtime repo that, okay, building is fine: add a NuGet package with the proper reference and let the NuGet package resolution take care of it.
N
It's not a silver bullet, because there will be compatibility breaks and people will have to change the app, but at least it is a kind of, let's say, reasonable path. But we lose all those that could just use something like the startup hook, right.
O
K
I do expect that to be significant. My understanding of that business is that, even though it might sound simple for the user — oh, just do this one thing — well, there go a whole bunch of users that were not willing to just do that one thing, and you as the business might be like: well, I just left a lot of money on the table by adopting that strategy.
N
Yeah, I actually don't know the answer to that. I think I have a bias that many will be willing to build. But, for instance, you, Raj — and I think Chris also mentioned this in the past — have a lot of cases where people can't rebuild: they just have the DLLs, the bits are there, and they—
P
Q
N
Yeah, we could repack, but then we need a tool to do this repacking, right. Yeah.
M
We have the same approach with Application Insights currently: we use an open-source tool, ILRepack, to get that done. So we repack the entire Application Insights SDK and bring that into the auto-instrumentation space.
Q
I don't remember, Paulo, but I know, one year ago or something like that, I was asking whether it would not be a better solution to do IL rewriting at build time, instead of using the profiler.
Q
Also because then you can still use the profiler; and still, I see that we have a lot of issues connected to that where build time — doing it at build time — could help us. So basically we could explore that path, if the instrumentation could simply be a tool that, you know, just instruments the application.
Q
L
I think there are other complexities there. Let's say somebody is using something like SharePoint to run their intranet. At that point, say they run some tool that updates their SharePoint install.
L
Now, how does that affect the support contract that they have with Microsoft for their SharePoint instance? Because it's technically no longer the same code.
Q
Q
L
Q
N
So I just want to be sure: Raj, you said App Insights has a tool that does the repackaging, so in theory you could do an upgrade of the NuGet packages that were built with the application?
M
No, it is not that; we are not going to touch the application. So, for example, we have the OpenTelemetry SDK, which has a lot of libraries, so we cannot do a side-by-side of those libraries. The plan is to take the OpenTelemetry SDK and make it a single library, so that even if the customer brings his own OpenTelemetry SDK, we can load ours, even in the default context.
M
So that's what we have. But if the SDK supports us — I did some research last week — if the SDK supports us with some small changes to their filter pattern, where we could change the values via reflection, we can coexist and work. We can even take the customer configuration, bring in the auto-instrumentation configuration, mix and match them together, and get it working.
N
I see. So, in a sense, I think this tool is a kind of shorter path, right.
L
But there is still a gotcha with the IL repacking, especially with the vendor library hooks that we added. When you run a tool like ILRepack, it changes the namespaces; it changes the names of things to keep them distinct. And so, when you have this separately built DLL for all of your vendor code, it's not going to be able to see the IL-repacked stuff, unless you expose some other thing that can bridge that gap.
P
N
N
I think, then, we get back to the same discussion we've been having about how to do this proxy.
M
One thing Noah called out when he started: there are a few scenarios we need to document, like, hey, this is the maximum the auto-instrumentation can support. If you cannot work within that, at that point we need to ask them — we have a plan for that already — take our NuGet package, use it as a build-time dependency, and then it's going to work in all other scenarios.
M
We can try to auto-attach, but there are several scenarios, even for our Azure Web App, where we won't be able to ask the customers to just go and build or rebuild things; it won't work that way — it's like the SharePoint case, we cannot do something like that. So we can try our best to support it; still, there will be a few areas left out at this point, with any approach we take — for example, with the startup hook plus additional dependencies.
M
If we do that, we will cover a little more of the customers. So if we look at the left-out space in that kind of scenario, it will be very minimal, and for that very minimal part we also have a solution: use a build-time dependency by providing a NuGet package, and so on.
N
So let me just — what you're saying is: we could get very good coverage going down that approach, but there will be a few cases, and for those few cases where it isn't possible, then we fall back to: hey, unfortunately, you need to build again. Yeah, right.
K
And you should be able to — like, if you use ILRepack — I mean, ILRepack you can practically think of as: what would it look like if I just went to the GitHub repo for the runtime, grabbed the source for the latest version, and then said, this is no longer the Microsoft.Extensions library—
K
—this is Paulo.Extensions.Logging, and then you compiled it and stuck it in your thing, and it would work. With, of course, the caveat that now the types in there are completely disassociated from the original types: you can't just pass one around interchangeably as if it were the other.
N
N
Let me be very clear here, trying to clarify exactly what ILRepack is doing in this case. Suppose I have a project here; it had an old version of the OpenTelemetry SDK, bringing in DiagnosticSource, also old.
A
N
We could do an ILRepack to use a new version of the OpenTelemetry SDK that brings a new System.Diagnostics.DiagnosticSource. That is expected to work — unless there are breaking changes in the APIs and things like that — but the upgrade of the version itself should work.
M
You cannot repack DiagnosticSource, because the custom DiagnosticSource has to be in the context of the app; only then will we have the activities from it captured. If you repack it, it won't load in the app's context, and we will not have the activities from the app captured. So whatever repacking we are speaking of is: apart from DiagnosticSource, whatever dependencies we have, we are going to repack those.
M
So we do not repack DiagnosticSource — not that one.
K
K
You could ILRepack anything you wanted, in the same way that you could go to the runtime source code, fork it for yourself, run the C# compiler on it, and get yourself a new assembly. So, if you wanted to, you could take the source code of System.Private.CoreLib, run that through the C# compiler, come up with, you know, Paulo.Private.CoreLib, and you could get the runtime to load it.
K
K
What functionality can I expect from this library, and what interactions will it have with other libraries? And that's, I think, maybe what you're beginning to point out, Raj: you're saying we could load "Paulo DiagnosticSource", but you can't then expect Paulo DiagnosticSource to be fully interoperable with the original DiagnosticSource. The app will still have a reference to the original DiagnosticSource, and if the app creates an activity using that DiagnosticSource, or creates a diagnostic listener with the original DiagnosticSource assembly—
K
—then a new ActivityListener from Paulo DiagnosticSource is not going to recognize that any of those old activities are being created, because, as far as it's concerned, this new ActivityListener only listens to Paulo DiagnosticSource activities, and those are not Paulo DiagnosticSource activities — they're original DiagnosticSource activities. Is that kind of what you were getting at, Raj? That is good, okay. So I think of it as: yes, you can—
K
—ILRepack anything you want, but then you need to consider: what interaction do I expect my new IL-repacked version to have with the code that is using the original version? And the answer, by default, is that there will be no interaction whatsoever — they are just two completely isolated islands of code and state. If you want some interaction to exist, you have to do something explicit to make that interaction happen.
K
So, for example, if we continue with our Paulo DiagnosticSource library, and you say: boy, I really wish I had some activities here in Paulo DiagnosticSource with the same content as the ones being created with the original DiagnosticSource — in order to get that, you'd probably have to write a little piece of code, maybe using reflection, that registers an ActivityListener or DiagnosticListener with the original DiagnosticSource listener types, and hooks up delegates on those original types, not the new ones. And then, any time you get the callback that says the app created an original activity—
K
K
I mean, that's probably a little messier specifically when it comes to activities, because not only do you maybe want the original content of the activity, you also want ongoing updates to be mirrored from one to the other, and the code of DiagnosticSource doesn't make that easy for you. You could build something off to the side that continually tried to poll the old one and replicate it onto the new one; that sounds pretty messy.
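The listener-bridge idea being discussed — including the messy polling for ongoing updates — can be sketched as two isolated "islands" with an explicit bridge. These are Python stand-ins invented for the sketch, not the real DiagnosticSource or ActivityListener API:

```python
class ActivitySource:
    """Toy stand-in for one island's activity machinery."""
    def __init__(self):
        self.listeners = []    # callbacks invoked on activity start
        self.activities = []

    def start(self, name):
        act = {"name": name, "tags": {}}
        self.activities.append(act)
        for callback in self.listeners:
            callback(act)
        return act

original = ActivitySource()   # the island the app references
repacked = ActivitySource()   # our isolated, renamed island

mirrored = {}   # id(original activity) -> its copy on the repacked island

def bridge(act):
    # Explicit bridge: replay each creation from the original island
    # into the repacked one (the reflection-registered listener above).
    mirrored[id(act)] = repacked.start(act["name"])

original.listeners.append(bridge)

def sync():
    # The messy part: poll-style replication of later tag updates,
    # since creation callbacks alone don't carry subsequent mutations.
    for act in original.activities:
        mirrored[id(act)]["tags"].update(act["tags"])
```

By default the two islands never interact; only the explicitly registered `bridge` plus the periodic `sync` pass makes the repacked side see the app's activities.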
K
K
A
N
M
Okay. Also, earlier I provided a demo of that — the proxy, the smaller proxy which I created: an activity created in the lower version of DiagnosticSource was ported to the higher version, OpenTelemetry captured it, and we had a console exporter in the demo to show the output.
M
So that's work in progress. We kept it on hold; based on the further decisions we make, we will consider whether we need to proceed that way or not, but that proposal is already there in our repo.
N
N
Going back a bit to think about where we are.
N
First, one thing — and I don't want this to sound very dramatic, because it is not; it's just the reality of the thing — we've been on this project for a long time and we've not been able to release, because of what is basically a technical challenge. And I have to say that we need to make a decision on the path we are going to take, and we need to release. I've never been on a project that I was not able to release.
N
N
We are going to really target the beta, at least to get us some information and the perspective of people trying to use it; we are probably going to find a bunch of issues. But I think we also need to pursue and investigate the path we want for supporting this — I call it the DevOps scenario; I don't know a name for it — we need to pursue this path too.
N
N
You know, I don't think we need forks or branches or anything to do this, but I think we need to set out: okay, what are the steps and milestones we're going to take to experiment and confirm that this is a viable path? I think Raj has been the one initiating and pursuing this path.
N
We are kind of resigning ourselves to the build-time thing, and even with build time there are challenges; it's not a silver bullet that fixes everything — we still have some issues with dependencies. But I think the next step is to put together a plan for this process: we know how to have the islands that preserve the isolation, but we need to decide that we are going to have these proxies that will be the bridges — and which version.
N
I think we need to be, let's say, not too generous there: pick a fairly high version and say, hey, this is where we will integrate with all the stuff, and set up a path for that work. Raj, once more, I think this is the interest that you brought up, so I think the SIG is happy to work with you on it, but I think this is something that you should be driving, you know.
N
N
Let's try to put together a small plan — it doesn't need to be anything super detailed; one page, two pages — saying the steps and the things we're going to do, and start to work on that, and we participate and collaborate on it.
M
K
I mean, fine by me. You know, I kind of have the easy ride here, because I sort of sit to the side and basically just say it's up to you guys how much progress you make or which path you go down, and I'm just here to provide, you know, technical support. But yeah.
N
That sounds fine to me. I think, actually, to be fair, we know that sometimes these questions that are not on the typical .NET developer path are hard to answer, so having direct contact with people with your kind of knowledge is really helpful.
N
All right. So I think the next step, Raj, is for you, when you have a chance, to set up a small doc — and perhaps at the next meeting, or whenever you can, we discuss this path for the SIG. Right now we focus on the beta, and the beta requires the build-time approach and other stuff; it's not going to be a smooth path, but for now it will be the path that we are looking to follow, because, once more, we need to get this project out.
M
So I think that we should go with the beta as per the schedule. I think next month we have the plan to do the beta; it should go as per the plan, and we should not stop or do anything to that. Whatever work we are planning to take on should come after that. In parallel, I also feel that we should try to bring metrics and log support to our auto-instrumentation. Currently we are doing mostly traces, so we know the challenges with those.
M
Unless and until we touch the other areas, we don't know the real challenges in them; everything is from a theoretical perspective. So I was trying to do some research on how to bring in a metrics exporter. There are no big challenges, because it also works through diagnostics — it uses DiagnosticSource — but we never explored what a logging exporter is going to look like.
M
So even if I'm going to create a document that sorts out the challenges between the tracer and the metrics, it may not cover logs at this point, because I've never played with logs in auto-instrumentation myself; we might need to consider a design for that before understanding — like, coming up with — the challenges in it.
N
Yeah, I would say that, for us, very likely, if we get tracing working, then our next priority is actually metrics.
N
You know, before logs. But I think first we need the official release of the SDK — not that we must have it, but since we are behind on the other things, it is very natural that we wait for the official release of metrics and, once we have that, work on top of it. And then we have a really good space for stuff like using the profiler when the user wants to get latency for specific functions, and things like that.
N
Besides just collecting the runtime metrics, you know. So we have some good stuff to do regarding metrics, but logs I haven't even thought about yet, because we are far behind on that.
M
L
N
L
Quick question first, Zach: I know Datadog has its own log handling within the auto-instrumentation. I don't know if you've had much chance to work on any of that, to have some background knowledge with it for when we get there.
R
Yeah. The logs that we do — we actually went through quite a few different iterations of how we do it, and now we only do it via bytecode instrumentation, which aligns with what this project is doing. And that seems to work: we basically just instrument the various logging libraries, like Serilog and log4net.
R
R
So if we're doing rewriting — which I'm not sure is what we're going to be pursuing here — it seems to largely work: we basically just create new scopes and add the correlation information from the activity. And I feel like most of the time that's already there — well, it should be available at least — so at most, maybe, we would have to actually add that onto the log message itself; but that's really the only interaction we do.
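The log-correlation step described here — wrapping log emission so each record carries the active trace context — can be sketched like this. Field names follow common convention but are illustrative only, not any particular library's API:

```python
import contextvars

# Holds the currently active span's context, if any (set by instrumentation).
current_span = contextvars.ContextVar("current_span", default=None)

def emit(message, sink):
    """Emit a log record, enriching it with trace/span ids when available."""
    record = {"message": message}
    span = current_span.get()
    if span is not None:          # enrich only while a trace is active
        record["trace_id"] = span["trace_id"]
        record["span_id"] = span["span_id"]
    sink.append(record)

sink = []
emit("cold start", sink)                                    # no active span
current_span.set({"trace_id": "abc123", "span_id": "0042"})
emit("handled request", sink)                               # correlated
```

The enrichment is additive: records outside any trace are unchanged, which is why injecting it via rewriting or scopes is low-risk for the rest of the app's logging.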
R
R
I imagine that would not be necessary here if we just add an exporter through the configuration; that kind of covers all the stuff we do with logs. So, possibly, rewriting if we actually do need to add the correlation information to the log object just in time, and then just adding the exporter somehow, like during the OTel SDK setup.
N
I have a random question regarding the logging from the SDK itself, but I don't want to jump to that before asking: do folks have other questions on the topic?
N
Okay, very quickly then: when I was trying out the samples, I had a problem with the defaults of the exporter.
N
I used what I typically use: I go to dotnet-trace to enable the capture of the EventSource. But that was on Windows, where I have PerfView, so I did the traditional thing — dotnet-trace with this provider, converted it, and opened the .nettrace. But then I was trying this on the Mac, and there is no way for me to visualize the trace — the log events — from dotnet-trace; I have to export it with something else.
N
K
K
No, at the moment — I mean, it's been a feature that has—
K
—been a feature that has loosely been kicked around; it's never been prioritized all that highly. If someone wanted to do it, I think we'd be glad to take the PR, or sooner or later it might percolate up high enough in the team's own priority list that we would do it — I think it's a good thing. At the moment, the only integrations we have for visualization on a non-Windows platform are the export to the speedscope format and the export to the—
K
—I think it's called the event trace format — that Chromium uses for its profiling.
N
Yeah, I tried the Chromium format. I don't know if it's because I didn't capture any stacks; I just captured the—
K
Yeah, that's the issue. My understanding is that both of those — speedscope and the Chromium one — are all about showing you CPU-sampling kinds of information, things that have stacks and either samples or durations. And because your events probably didn't have stacks and didn't have durations, you were like, "I want to see text," and they're like, "well, that's not what we do — we don't show text, we show stack stuff." So that's—
K
My guess is that you converted it and it was like: well, we're showing you everything we know how to show you — and, unfortunately, the intersection between that and what you wanted is zero. So you get zero, and you're like, oh well, yeah.
K
N
A
Yeah, yeah — and I'm not—
A
K
N
Yeah, this reminds me actually of something that unfortunately didn't get traction, but I hope in the future we get more traction on it: the OpenTelemetry Collector has an EventPipe listener that we could use for this kind of thing, and since the Collector funnels logs and metrics, we could use that to capture all this stuff and do the same. But unfortunately we don't have people to work on that right now.
L
K
Yeah, I think it probably would. So, yes: dotnet-monitor has a logs REST endpoint that you can hit — you could use curl or whatever else, or a browser. And so, yes, I think if you set up dotnet-monitor — and your data was coming specifically from ILogger, because that's the one they listen to — it will render it for you in text.
K
N
In this case, though, all the logging from the SDK is EventSource.
N
N
That's what I did, and since I have PerfView on Windows it was pretty easy, you know. But I was like: oh, why can't I just drop this—
N
K
O
N
Makes sense, makes sense. All right. This was just because I needed it for that quick investigation, and I was surprised it wasn't readily there; but it was actually the first time I needed it, and I needed it just for a small number of things. So it's fine — you can survive without it.
N
All right. Thanks, Noah, for dropping by and talking to us. Folks, anything else you want to mention?
N
S
N
Okay. Do you have listed what the problematic cases are there, or comments?