From YouTube: 2020-10-14 .NET Auto-Instrumentation SIG
A
Okay, the first thing that I want to mention is regarding the pace for reviews and merges. Since I started, I was trying to wait for everyone to have a chance to look, but I think the things we have open right now are relatively non-controversial, so we're going to try to move faster. We already have one review from Chris, and it's based on code from Greg. So, unless it's an architectural decision that we need to make, I think we should try to move a little bit faster: get one or two approvals and then merge the changes. That way we don't take that much time, and we start to really move things around.
A
Yeah, no, that's fine. I think it's just so that everybody's on the same page about how things are going to move. And, to be fair, this first review is kind of just a skeleton; the thing is empty. The meat starts to come in the next ones, so I will merge that one.
A
Afterwards, I'll start on the dynamic loader. I'll do just the loader for the assembly as one separate review, and then we move on to the invoker, in that sequence, so we can review the parts. I thought that Zach made great progress on the reorg of the repo, and I'm waiting on that; when it's merged, I'll do a pull and then continue on top of the reorg.
E
Yeah, that sounds good. I just had to address some feedback, which I got to yesterday, so I'm hoping that it looks good now and is up to date.
A
Yeah, sounds good, sounds good to me. And I think, in that case, we can move to the discussion of the GA that Eric put up, and the info in the doc.
A
Do you want to open the doc, Eric, and guide us from there, or should I open it here and scroll? You can just go ahead and open it. Okay.
B
Tell me if this makes sense, the way I'm laying the information out, and then maybe we can also talk a little bit about planning for any sort of beta release that we want to do. So, starting off with this first section here, the instrumentation of libraries: I had been thinking that I wanted to delineate things between what we're supporting explicitly in the auto-instrumentation agent, meaning things that require bytecode injection, versus things that are implemented or supported by the SDK, or ActivitySource-supported. That was my thinking, but I realized that I need to understand more along those lines, in terms of how we're going to be utilizing the .NET SDK, and how we're going to be picking up information from libraries that natively support ActivitySource.
A
So, let's first separate things here: we need to be able to pick up what comes from the new ActivitySource.
A
One interesting thing is that the OpenTelemetry .NET SDK itself has a small difference from OTel in other languages, in the sense that, in OTel, if you add a library that is manually instrumented, it automatically shows up. But the way that .NET did it, you need to explicitly say: I'm going to be collecting source X.
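As a sketch of that difference (the source name "MyCompany.MyLibrary" and the console exporter are illustrative stand-ins, not anything from this project): with the OpenTelemetry .NET SDK, activities from a manually instrumented library are dropped unless the host explicitly subscribes to that source:

```csharp
using OpenTelemetry;
using OpenTelemetry.Trace;

// The SDK only listens to the ActivitySources it was told about, so a
// manually instrumented library produces nothing until its source
// name is listed here explicitly.
using var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .AddSource("MyCompany.MyLibrary")   // without this line: no spans
    .AddConsoleExporter()
    .Build();
```

Recent SDK versions also accept wildcards in AddSource (for example "MyCompany.*"), which matches the globbing pattern mentioned later in the discussion.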
A
No, just that we should be able to capture ActivitySource. We have to try to treat the legacy Activity the same way in that regard, but the approach will be: we need some way to list the names of the ActivitySources we are going to be showing to the user, if we follow the same approach that was done for OpenTelemetry .NET.
D
That sounds fine, but there is one consideration that we should have. I think it can also be up to vendors to help with this list, but I think we should at least support the following scenario. So, again, this is from the perspective of a vendor that builds on top of open source, right?
D
Because we have these two big scenarios, right? One is that somebody just takes the OpenTelemetry tracer, sets up their own backend, and does it completely independently; only huge companies will do this. And then there are vendors who build on top of this, and at the end of the day we're all doing well here because we will build on top of this. For a customer of a vendor, really, it's like:
D
They just want things to work. They have an application, and this application might already be running in production. They just drop the tracer in and they expect things to work. So it should definitely be supported that all activity sources are listened to, and that can be done right in the activity listener: we can just always say, yes, I'm listening. So I strongly feel that it should be supported. Which one should be the default behavior, I feel less strongly about.
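The "always say yes, I'm listening" behavior can be sketched with the ActivityListener API from System.Diagnostics; the callback bodies here are illustrative, since a real agent would route into its own sampler and exporter pipeline:

```csharp
using System;
using System.Diagnostics;

var listener = new ActivityListener
{
    // Subscribe to every ActivitySource, regardless of its name.
    ShouldListenTo = _ => true,
    // Record everything; a real agent would apply its sampler here.
    Sample = (ref ActivityCreationOptions<ActivityContext> _) =>
        ActivitySamplingResult.AllDataAndRecorded,
    // Hand finished activities to whatever export path exists.
    ActivityStopped = activity => Console.WriteLine(activity.DisplayName)
};
ActivitySource.AddActivityListener(listener);
```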
A
Yeah. So, basically, if I recall correctly, in OpenTelemetry .NET the option there was that you have to be explicit, but you can put in some kind of... I don't remember if it's a regular expression or a globbing pattern, but you have a way to say: hey, I want to listen to every activity source that exists in the project. But in principle you have to list them.
D
I think so. The plan that I had with this whole activity thing (and we can always modify it; I'm just sharing what I had in my mind so far, and we can always tweak it) is this.
D
So, once the wrapper is finished (and I'm not so happy with the progress I'm making, just because of other commitments that I have, but some progress is being made), I'll be building the activity listener, right? I'll be loosely basing that on the prototype that I shared earlier, which we already have and which actually works; it's just not using the wrapper. And then we'll have a version that works, and for that I will use the default...
D
...that makes the most sense for our vendor, the one that I'm bringing up. Then we'll have a review, and people will say: hey, this does not make sense for other vendors, or it does not make sense for OpenTelemetry; please modify it, add additional configuration or an additional injection point. And then we'll go through a review and we will work on it: either I will, or other people will contribute, or whatever is the right way there.
D
Yeah. So, essentially, the whole activity project has three parts that I outlined on the GitHub issue, or arguably four parts. First, we solve the versioning problem by creating the shim around Activity; that's what we call the wrapper.
D
Then we create a component that is the activity listener. That is essentially a background thread that collects all activities, listens to all the right activity sources, and invokes the right exporter. That's the second part, and that needs to be done.
D
Third, we take the existing Datadog library that is currently used by all the integrations and modify it so that, instead of making spans, it makes activities. That should probably be behind a feature flag, so that we can test it and mitigate risk there. And the fourth step follows, because that will allow us to start dogfooding this, testing it, and potentially having it in production with customers who agree.
D
But it will not be performant enough to call it done, because now that we have switched inside the library, we will have the performance overhead of essentially going through this existing Datadog library and creating two objects where often no object is required. So, as the last step, we will need to go into every single integration that exists and modify it so that, instead of using the existing Datadog library, it uses this wrapper directly to create activities.
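A rough sketch of what the feature flag in the third step could look like; the flag name, the source name, and the legacy span scope below are hypothetical stand-ins, not the actual Datadog library API:

```csharp
using System;
using System.Diagnostics;

public static class TraceFactory
{
    // Hypothetical flag; a real agent would read its own configuration.
    private static readonly bool UseActivities =
        Environment.GetEnvironmentVariable("TRACER_USE_ACTIVITIES") == "1";

    private static readonly ActivitySource Source = new("Vendor.Tracer");

    // Integrations call one entry point; the flag decides whether the
    // legacy span path or the new Activity path runs underneath.
    public static IDisposable StartOperation(string name) =>
        UseActivities
            ? Source.StartActivity(name) ?? (IDisposable)NullScope.Instance
            : new LegacySpanScope(name);

    // StartActivity returns null when nothing listens; dispose safely.
    private sealed class NullScope : IDisposable
    {
        public static readonly NullScope Instance = new();
        public void Dispose() { }
    }

    // Stand-in for the pre-existing span implementation.
    private sealed class LegacySpanScope : IDisposable
    {
        public LegacySpanScope(string name) { /* open a legacy span */ }
        public void Dispose() { /* close the legacy span */ }
    }
}
```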
E
One thing that's still unclear to me is: what do we do if the application brings in the OpenTelemetry SDK? Since we will end up basically double-listening to activities.
D
That's actually a very good point. I think from the application standpoint it's not a problem, because we can even tenfold-listen to activities. The problem is with the end-user scenario where the customer had some exporter, and now it sends the telemetry twice to some place. So the problem is more with the end-user experience than with the actual application.
D
So I think the strategies there may be vendor-specific. Customers who control the application can make the choice to remove the SDK. But what happens if you have an existing application and the customer doesn't want to modify it? What do you do? What I would recommend that we as Datadog do (and I think Microsoft has exactly the same problem, because they have exactly the same thing with Application Insights already, and their plan is the same; in fact, I recommend it because I think this plan makes sense) is the following.
D
They will create a special bytecode-instrumentation integration for Application Insights, and if Application Insights is running, they will essentially turn it off using that integration. They know one place in Application Insights where you need to make a small bytecode-level change to stop the Application Insights pipeline in the right place, so that data will instead flow only through the tracer. And that has the advantage that, if you turn off the tracer, it goes back to flowing through Application Insights. So I would recommend that we consider something like this. If anything, we should definitely support this approach.
D
Yeah, that's another issue; that's when the actual application is affected. That shouldn't be a problem, because we never reference the OpenTelemetry SDK from the tracer, and the only conflict is Activity, the DiagnosticSource in the library. So that shouldn't happen.
A
Yeah, that shouldn't happen. And in the case that the SDK itself is loaded, we have options, because we can actually identify that it's loaded. So we have options on what we want to do. We could just log something like: hey, we have the OpenTelemetry SDK here, so we are not going to do our part; we are going to generate the activities, but we are not going to do the exporting. We do have some options there.
B
So, help me understand a little bit more in terms of the SDK. I know we're saying we're not taking any sort of hard dependency on it, but are we utilizing it in any way, and how does that impact things? For the exporters that have been developed for the SDK: does that mean that, if we're doing things separately, we need a different exporter for the auto-instrumentation agent?
D
I would prefer (but this is up to us to decide) to be pragmatic rather than creating a policy, where pragmatic means looking at every respective component and deciding whether it makes sense to copy the code, either exactly one-to-one (in that case we could just do it at build time somehow) or with minimal modifications.
D
If we figure out that we can have the same interface, then it should be possible to reuse it. Maybe we need a slightly different interface because of this discussion that we had about the background thread versus asynchronous operation. In that case, what I would do (again, being pragmatic) is this.
D
I would take one exporter that is common and re-implement it with the minimum modifications that are required for our architecture, just one as an example. Then, once it's implemented and checked in, start the conversation with the SDK group to say: hey guys, we needed to make these particular modifications for these particular reasons; let's understand together whether that's actually an overall improvement, in which case you modify your interface, or whether we have really good reasons for having slightly different architectures.
D
I would just prefer to pick one exporter and actually implement it for our architecture, and then have the conversation based on an actual change, rather than keeping it theoretical.
A
Yeah, I would love to be able to use the code from the exporters that is already in the .NET OpenTelemetry repo, but since we have a type that's wrapping the actual Activity...
A
...the case where we don't have a real Activity, where we are using our stub, makes it kind of complicated to use their code directly. So my point of view is: we should try to reuse them as much as we can, but at this point I'm not 100% sure that we will be able to.
D
Yes. And, again, when we get to implementing the first one, whoever does it, we should be thoughtful about this; potentially, with some minimum modifications to their source...
D
...we can make it, if not identical, then very, very similar. Just as a concrete example: if you have some kind of exporter, and it has some code that tags an activity with a certain tag name, you could wrap that in a local method called getActivityId or whatever, and then have, say, a superclass that is shared, and you just implement these methods; or, if that is not performant enough, you just have code that is slightly different.
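That "local method plus shared superclass" idea could be sketched like this; all of the names here are illustrative, not a real OpenTelemetry or vendor API:

```csharp
using System.Diagnostics;

// Shared exporter logic is written once against small accessor hooks.
public abstract class ExporterBase<T>
{
    protected abstract string GetDisplayName(T span);
    protected abstract string GetTraceId(T span);

    // The common serialization path never touches a concrete span type.
    public void Export(T span) =>
        Send(GetTraceId(span), GetDisplayName(span));

    private void Send(string traceId, string name) { /* wire format here */ }
}

// The SDK build binds the hooks to the real Activity type...
public sealed class ActivityExporter : ExporterBase<Activity>
{
    protected override string GetDisplayName(Activity a) => a.DisplayName;
    protected override string GetTraceId(Activity a) => a.TraceId.ToString();
}

// ...while the auto-instrumentation build would bind them to its stub type.
```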
G
So, Eric, at the very least, even assuming that we're able to reuse 100% of the code, there would still likely be changes for vendor-specific exporters, because I suspect that those exporters all take some sort of NuGet dependency on the SDK itself. In that case you may be able to reuse the code, but you wouldn't necessarily be able to reuse the NuGet package.
D
And I think we shouldn't be prescriptive to vendors in this group: if a vendor really wants to take a dependency on the SDK, we shouldn't forbid it, right? But to me it's very, very critical that a vendor who doesn't want that can avoid it, so all the shared components shouldn't take a dependency on the SDK.
A
Yeah, the problem case for reuse, really, is when we fail to load System.Diagnostics.DiagnosticSource, you know.
D
Another point: once our wrapper is finished, we will measure performance. If the performance is abysmal, the whole wrapper approach will not work. If performance is very good, then the SDK may consider actually using it as well.
B
Great. A quick question along the lines of the wrapper stuff, in the scenario you were explaining just a minute ago: can you ping me in the chat with the issue that that's all in, so I can review it later?
D
I'm talking about the issue there; it's slightly... I know, I owe you that. I haven't done the call target thing; as I said last week, I was on call, so I haven't.
B
And I did want to clarify one other thing you had said, in terms of that last step, step four: that we'd probably need to go back and essentially do some performance optimizations, and that we'd have to do that for every library that we're instrumenting.
B
So I'm just wondering... actually, I guess two questions. Is that something we anticipate being a lot of work? And the second question: do we need all of those libraries updated for our GA, or are we comfortable with just a subset, say, the most popular ones?
D
Whether it's a lot of work, I think Zach would know better, but I feel that it is necessary, for two reasons; or maybe only one is actually valid for GA. It's a performance issue and it's an architecture issue.
D
I know that I wouldn't be comfortable running on top of this by default until we do it, for performance reasons. And even when we evaluate the performance of the whole feature, there will be a couple of evaluations. One will be a micro-benchmark early on, once the wrapper is finished; but for the other, we must compare the overall performance of what we have today versus what we will get.
D
If what we have today has significantly better performance, then we can't ship this thing, and for that comparison we have to actually migrate. But how big an issue that is, guys, I don't know; it's just my opinion.
A
Actually, perhaps David can clarify this, because now I remember one thing. It's orthogonal to the ActivitySource itself, but one thing that Noah mentioned some time ago was switching from the usage of reflection to a kind of stub that we could call.
D
No, no, this is different. I mean, we have to do this at run time, because we don't know at compile time which one will be loaded; we have to regenerate the stubs every time a new DiagnosticSource is loaded.
H
I can't remember the name of the method now, but basically you can create a delegate that calls it. It will actually use jitted code, and you don't have to pay any of the performance cost of reflection; you just have to do the reflection on the startup path, then create the delegate, and then you're just running through normal code at that point.
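The runtime method being reached for here is likely MethodInfo.CreateDelegate (an assumption; the transcript leaves the name unsaid). A minimal sketch of paying the reflection cost once at startup and then calling through a cached, jitted delegate:

```csharp
using System;

public static class StringLength
{
    // Reflection happens once, at startup: look up the Length getter and
    // bind it to a strongly typed open-instance delegate. Every later
    // call goes through jitted code, with no per-call reflection cost.
    private static readonly Func<string, int> GetLength =
        (Func<string, int>)typeof(string)
            .GetProperty("Length")!
            .GetMethod!
            .CreateDelegate(typeof(Func<string, int>));

    public static int Of(string s) => GetLength(s);
}
```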
A
No, because the code that we have, the instrumentations themselves... oh.
D
Oh, sorry, let me clarify. So, no: the activity stub (I call it the activity stub, by the way) contains the stub APIs for creating activities, collecting activities, and querying activity tags. So it contains all the APIs we need for the entire thing, both for collecting and for sending.
D
So both the instrumentation part and the collection part take a dependency on the stub library, and the stub library dynamically loads the right DiagnosticSource assembly.
A
Yeah, but, for instance, let's say you are instrumenting Redis right now. I think that when you're instrumenting Redis, you pick up the object and then you use reflection on top of that to do all the stuff, and I remember Noah mentioning that there was a way for us to get away from having to do that reflection, at least at compile time.
D
Yes, that one; sorry, I misunderstood. You mean getting data out of the object that belongs to Redis.
D
That is being addressed as part of the call target changes that we're doing, and there is a whole big effort to make this really fast, based on some magic that I'm not really expert enough to discuss; but we create proxies, we create proxy classes. So, basically, the plan for us for now is not to address that area before call target instrumentation is done, and as a part of that we will do the following.
D
When you declare your new integration, you will go and say: hey, these are the parameters and these are the types. And we will generate stub types, and we will do magic, like creating those cached delegates to get at the right private data, and we will do this in the engine of the tracing.
A
Okay. Sorry for bringing that up; I think I took a detour to a part that's farther down the road. So, sorry about that; let's get back on track, Eric.
G
Yeah, so I think there was another angle to the question; please correct me if I'm wrong, but it feels like there was a question about whether we have to make updates to all of the existing bytecode instrumentation before we can move on to something else, or whether there is a way to do it in a more iterative fashion: for example, have everything disabled and then tackle one library at a time, just to throw something out there.
A
Yeah, I was going to say that. Prior to GA, as Greg mentioned before, I think having a wrapper that allows us to use all the instrumentation that already exists would be great, but for GA we need to get rid of that. For us to really start to play, to be able to find issues, debug, and do stuff, it would be great to have the adapter that does that; but I'm not comfortable saying GA with that, you know.
D
I agree. I think it's something where, if we get rid of it step by step, absolutely, that will mitigate risk; but if we leave it for later, then it will never get done. It will be an architectural legacy that we drag along forever.
B
Then I think I don't need to... So, I had been thinking that we would identify the most popular libraries, like, for example, the most popular data stores, and then go through and determine: okay, these are the ones we'd be supporting out of the box at GA, and we'll add support for this other stuff later. But if we're talking about basically supporting, for GA, all the ones that Datadog currently supports right now, then we don't really need to...
B
...make any sort of decisions around which ones will be supported or not. I guess it would be nice to have that list here, so that, as part of our GA plan, we can say: this is the list of the things we're going to support.
A
I think that we could trim it down to a set that we consider must-have. Perhaps for a vendor to say that they are recommending their customers to move might require something else; but, from the perspective of OpenTelemetry, I would say that if we have a must-have set that allows most users to get value, we should perhaps go for that.
A
Yeah, I just really want to call GA when I can do that for my customers; but, on the other hand, I don't want to be blocked by, let's say, one instrumentation that is kind of a corner case. Let's say... I do have some, but I don't have many, users of WCF instrumentation. So, in that sense, I can live with OpenTelemetry saying "hey, GA" while we still don't have WCF.
A
So I'm trying to see if there is kind of an 80/20 case, where we can say: hey, these instrumentations here cover a good chunk of what people need. Then I'll have perhaps a handful of customers that I can't move, but the vast majority I can move, you know.
D
There is also a way to add some data-driven information to this. Once we have converted a few of these integrations, we will measure performance. If we realize that the performance difference between removing this intermediate layer and not removing it is negligible, then I think what you say makes a lot of sense. If we realize that the difference is very non-negligible, then, again, it will be up to us.
A
But, thinking about the instrumentations themselves: perhaps I'm being too optimistic (we are devs; we are usually very optimistic about everything), but I have a feeling that, if all that previous work is done to our satisfaction, converting a specific instrumentation is not a huge amount of work. I think the harder work is what we're starting now; we're starting with the hardest part.
G
And then I also suspect that, if one of the vendors has instrumentation that doesn't exist in OpenTelemetry, it's probably not going to be too big of a deal to implement something similar.
A
Yeah, we do plan to bring in some of the stuff that we have, and as soon as we feel that the repo is ready for that, we'll do it.
B
Yeah, I agree; it seems to make sense to me to take the approach of making some measurements around performance and then making the decision based on the further information we're able to glean from that. So that sounds good to me.
G
Also, I suspect that once we have a certain subset of instrumentation in place, we could do a beta release.
A
Yeah, I'm always in favor of having people who can use and test it before production do so as soon as possible.
D
I think that, in this particular case, a beta would be a good thing, also as a kind of signal to the community that we're making progress. In terms of testing, one thing to consider is that you guys are running this in production all the time, right? I mean Splunk.
A
Of our customers, the ones that I work closely with (I don't work closely with all of them), some have, let's say, staging environments where they typically deploy what we are trying out with them. So, for instance, if I have a beta, I would be willing to push it to them and say: hey, can you take a look at this in your staging environment?
D
So I think a beta is a great idea. It would be a special kind of signal to the community. I think at some point folks from New Relic might also want to try it. Right now you have your other tracer, but at some point, when you see fit (it could be when we call it beta), you might want to have some customers try it and see how it goes. That would be great feedback, especially because it might give us some comparison of one versus the other, and we could identify what needs to be moved across from what you guys have. In terms of testing, that's why I keep bringing up customers: because, for us, it's being tested all the time.
A
Yeah, sounds good to me, and these sound like good conversations about the criteria. If we are close on that, I don't have anything else on my part to talk about. I don't know if anyone else has anything.
A
So, one thing I just want to mention. I don't want to address this now, as we have much bigger fish to fry before we get to it, but I have not been following closely what's happening with .NET and AOT. I keep hearing a lot of people saying that the future of .NET is going to be AOT, and that gets me a little bit concerned about the future, because then what happens to our approach, since we are using the profiler? As I said, we have bigger fish to fry.
A
This is down the road, but if anyone is aware of anything and wants to bring it to this group, I think it would be beneficial for everyone. As I said, what I keep hearing is kind of gossip through the window, people saying: hey, the future of .NET is going to be AOT. But the last time I checked, it was basically about the client side...
A
...and not the server side that we are mostly focused on. There is RUM and other things too, but not most of the stuff that we are focused on. And if that really becomes a reality for .NET, I think we would then need a solution regarding our profiling way of doing things.
D
This is a point for David: what are you guys thinking? What's your long-term vision?
H
So, that's a good question, and this is an area that's rapidly evolving, even inside Microsoft. At a high level, there are two pushes right now. There's the CoreRT push, which is an evolution of .NET Native; and, on a personal level, I wouldn't really ever expect CoreRT, which is literally "compile everything down to native code", to become the norm.
H
I could be wrong, but my feeling is that it's kind of been out for a while and it's not really getting that much adoption. And in the web services world there's not as much of an advantage to full native compilation, compared to Blazor or client apps or whatever, because startup is not that much of an issue. The thing that we probably should have a talk about, profiler-wise (I know the concerns, but I don't have a full-fledged plan)...
H
...is this push for what we call "single file" inside the runtime team, which basically combines two things. The first step is that you take all of your managed libraries and you IL-link them together: you merge them all into one big assembly, and then you do tree shaking, which means you go through, find dead code, and eliminate the dead code paths. Then, after that's done, you package it all up.
H
So you put the runtime and this single managed DLL together and package it up into a single file, and now you have an application...
H
...that is conceptually still a runtime plus managed code, that has a JIT and is not ahead-of-time compiled, but is smaller, leaner, and a single file that you can just place anywhere, and it will run self-contained. If anything changes in the paradigm of how the runtime works, that's probably the more likely thing to win out. It's still not clear to me how popular that will be.
H
It's kind of coming from this angle of competing with Go and other frameworks, and we have a small, vocal contingent of customers who say that they absolutely need a single file they can pass around to be able to use .NET Core; but there are a lot of people who are really happy with the existing scenario. And, yes, this does target server as well. So the problem that profilers will run into is specifically the tree-shaking part.
H
If you have this app, and you merge all your DLLs and then do the tree shaking to get rid of dead code: now, when you go to inject your instrumentation code, your managed code, it's going to start calling into code that may be missing, because it's been removed. Right now, you guys assume that everything in System.Private.CoreLib is up for grabs: you can call it...
H
...you can add references, you can do whatever. But that won't necessarily be true after single file becomes a thing. Like I said, I don't have a full plan for exactly what we're going to do; we've run into similar issues in the past, so there are always options, and I'd love feedback if you guys have concerns. One option is just to have a way to specify: I need these APIs. So, the OpenTelemetry agent needs...
H
...HttpClient, needs ActivitySource, needs whatever: don't tree-shake those. A second option might be to just tell people who are running single file to pass a command-line argument that skips the tree-shaking part where we get rid of code. So: go ahead and package it up into a single file, but don't do the dead-code elimination, because if you're running under a profiler we need to be able to access all this code.
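The first option resembles the root descriptor files the IL trimmer already understands; a hedged sketch (the preserved types are examples of what an agent might need, not an official list):

```xml
<!-- ILLink descriptor, passed to the build via a TrimmerRootDescriptor item.
     Listed types (and all their members) are kept out of tree shaking. -->
<linker>
  <assembly fullname="System.Diagnostics.DiagnosticSource">
    <type fullname="System.Diagnostics.ActivitySource" preserve="all" />
    <type fullname="System.Diagnostics.ActivityListener" preserve="all" />
  </assembly>
  <assembly fullname="System.Net.Http">
    <type fullname="System.Net.Http.HttpClient" preserve="all" />
  </assembly>
</linker>
```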
H
So those are probably the two main paths as it stands right now. Just to back up a step: single file is not really a thing yet. We shipped experimental support, only on Linux and only on the amd64 architecture, in 5.0; in 6.0 the plan is to have it across more architectures.
H
Even internally at Microsoft we don't know exactly how many architectures; I think the people working on the feature want it to be everywhere, but I don't know whether that's realistic or feasible. So, do you have questions or concerns that come out of what I just said?
A
We should take advantage not only of Microsoft being present in this discussion, but also of the other vendors and people participating in the project, so we can have some solution when that time comes, if it comes. And I think the team, the people at Microsoft, are already aware of these use cases and scenarios, because there are real applications that are built on top of this. But, yeah, we would like to be involved and to plan for the future if necessary.
D
I think also, one of the benefits of us organizing as a community here is that, if and when this becomes a thing, I think from the .NET perspective we can determine the ask and work together to identify a set of APIs
D
that, you know, need to be available after tree shaking for monitoring to work. And if and when it does become a thing, we probably will go through some effort to review our entire code base to see whether we can slightly trim down.
D
We actually already have done this on the Datadog side to reduce a lot of dependencies, but we may or may not be able to be even more aggressive in reducing framework dependencies too. And if and when it becomes a thing, we could do that, reduce dependencies within the framework, and then work with the framework team to, you know, say: hey, for customers who want monitoring to be feasible, either make it optional, or even the default, or whatever is the right approach there, the tree shaker leaves some APIs.
E
David, I also have a question (feel free not to answer right now), but I'm curious if you know of any Roslyn plans to allow source generators to modify code, or if that's even a possibility going forward. Because right now, with the new source generators, you can add code, but you can't modify existing stuff, and that would also be, like, an AOT-ish way of basically adding instrumentation.
H
Yeah, I haven't heard anything. That doesn't mean it's never gonna happen, but it hasn't reached the point where I've heard anything. I would have to talk to the Roslyn team to see if it's, you know, an idea for the future, but there's certainly no active work on it that I know of. My impression would be no, that it's never going to happen, but that's probably not worth very much, since I'm not super close to the Roslyn team, so, yeah.
G
With that being said, though, I mean, there are already aspect-oriented programming libraries out there that do similar things, that aren't related to the new source generator stuff. Not that it's something I'm suggesting we should pursue, but there are things that exist.
A
All right, I think it was a very productive meeting. I have another meeting, so I'm gonna be dropping out, if you guys want to keep going.
D
I have an architectural question for Chris, and maybe David, that would be relevant, if you guys have a few minutes to run over. I spent some time looking at some, you know, forward-looking, future things that can be done, doing some research, and I was looking at profiling. If you have a few minutes, I have some questions about how New Relic approached it, and generally for David.
H
C
D
Let me describe my question. I was thinking: if we wanted to do profiling in the future, there are two, or three, general feature spaces. There is memory, CPU generally, and wall clock, which is like method-level tracing, where you say a particular span called these particular methods.
B
D
D
There are generally two approaches: one is to use ETW, and one is using the actual profiling API, right? And as I am kind of comparing these, I think the thing is, for ETW, the CPU probes that actually let me collect stack traces, if I understand correctly (and I'm just starting to look at it, so please correct me if I'm wrong), are initiated by the OS rather than by the runtime. So the question is: how would they even approach this on Linux?
C
So, and the second question is: how does New Relic approach it?
H
So this is a big thing where there is not one easy answer. You're right that on Windows, when you're running ETW, all the stack collection is done by the OS. And what happens is the runtime emits a series of events called rundown events. So basically, when you turn off your session, it will spit out a bunch of things saying: here's all the jitted code
H
I know about, here's all the metadata I know about. And then, when you see a stack trace, you can either use native symbols, or you can use the managed rundown events to map those back to call frames.
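The rundown mechanism can be illustrated with a toy resolver (a Python sketch with invented addresses and names; the real ETW event format is more involved): samples arrive as raw instruction pointers, and the address-range table emitted at rundown is what maps them back to managed methods.

```python
# Toy symbol resolution: raw IPs plus a rundown-style (start, size, name) table.
# Addresses and method names here are invented for illustration.
RUNDOWN = [  # emitted when the session is stopped: jitted code ranges
    (0x1000, 0x200, "MyApp.Program.Main"),
    (0x2000, 0x100, "System.Net.Http.HttpClient.Send"),
]

def resolve(ip, rundown):
    """Map a raw instruction pointer to a method name, if it falls in a jitted range."""
    for start, size, name in rundown:
        if start <= ip < start + size:
            return name
    return "<unknown>"

stack = [0x2040, 0x1010]  # one sampled stack: just raw pointers
frames = [resolve(ip, RUNDOWN) for ip in stack]
```

Without the rundown table, the sampled stack stays an opaque list of addresses, which is why the rundown at session end matters.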
Now, once you go to Linux, that's all out the window. So in the 2.0 timeframe we made a tool called perfcollect, which is basically a script
H
you can get, that will try to approximate the same thing as ETW by using various Linux tools: LTTng for eventing, and then another tool called perf for doing
H
the call stacks. It basically simulates the same thing, but on Linux it uses perf to collect native call stacks, and then at the end it collects rundown events, but also does some other scraping to find ready-to-run image symbols, and then parses it all together.
Is this an open-source tool that I can look at?
Yeah, just go to aka.ms/perfcollect.
H
So now you have ETW on Windows and you have perfcollect on Linux, but perfcollect has a couple of caveats. It's meant as a way to collect traces for a user diagnosing a perf problem. It's not really meant as an automatable tool.
H
So the third option we have... oh, why not? What is the problem there? It's just a script, a shell script that calls out to a bunch of different Linux things. So if you wanted to automate it, there are no extension points for it; you'd have to call all the tools yourself. And in order to have the tools, you need sudo, to get perf installed and the LTTng kernel module installed.
H
D
H
That's a good question. I think that's true, but I would have to double-check. Okay, so then the other thing that happens is that we've launched this thing called EventPipe. So, all throughout .NET desktop
H
we used ETW, and now we have a concept called EventPipe, which we technically launched in 2.1, but it's not really feature-complete until 3.0. EventPipe is basically ETW, except we move it all up into the runtime, and so we just emit all the events ourselves. And it has the concept of a stack-sampling profiler, so you can tell it: start a trace, start sampling, and then every however-many milliseconds it will just send you call stacks for every thread.
H
Can you tweak the frequency over time? Yeah, yeah. But the problem is, it's only managed call stacks, because EventPipe sits inside the runtime, and the runtime has no idea about the native threads that are happening outside of it, and it also doesn't sample even the runtime's own native threads. It just does managed, which may be all you need.
D
H
Events. So basically, you start a trace, and it sends out stacks. The stacks will just be an array of IP addresses, just raw pointers. And so, as it goes along, you might not have the information, and the rundown at the end is what will actually give you the whole picture, so you won't necessarily be able to construct the stacks in real time. And just to confuse the waters even more, there are two approaches for using that. So that's eventing.
H
D
Apologies for interrupting, just a question about the eventing stuff before we move on. So the feature for this stack-sampling profiling starts in 3.0?
H
C
Core, yes. So, if I understood you correctly about supported versions:
D
if we were to say that on Windows it's supported by all the things that we want to support, like what we as a community support, starting with 4.5, and then we go the ETW route, and then on Linux we only support starting from 3.0,
D
then we would avoid the difficulty with, like, setting up sudo and whatnot.
D
Is there an example that shows the usage of these sampling profilers somewhere?
H
G
H
It was... there are all sorts of reasons you shouldn't use it. You can't use multiple tools: there was no concept of multiple sessions, there was just one EventPipe session at a time. So if one tool is trying to use EventPipe, you can't start a different session; you can just join the existing session. And then there were memory leaks, there were bugs, there was...
H
H
To be in-proc? No, there are endpoints where you can open an IPC connection and then just get the events over IPC. We have various tools, like dotnet-trace and dotnet-monitor. dotnet-trace will open a stream and then write it to a file, so you can say: here are the events I want, and then it will pipe them to a file, just like doing an ETW collection. And then there's... so we just have
H
a collection of tools that will interact with it. It could also be in-proc. The way you'll do it in-proc is the same whether you're in-proc or out-of-proc: you open up an IPC connection to the EventPipe, and there's nothing preventing you from opening an IPC connection to your own process.
C
H
H
So that's the eventing. But the other thing you can do is in-process stack sampling through ICorProfiler, and there's the way we've done it on Windows, but it turns out that that's not feasible cross-platform, for a variety of reasons.
H
But that was never the best solution, because, like, what if you happen to pause a thread that's in the middle of a native heap allocation? Now it's holding a lock, and anytime you try to do a native allocation, it's an instant deadlock. There are all these tricky gotchas. And it only works on Windows, because there's no SuspendThread at all on Linux or Mac or BSD or anything.
H
You know, there are pthread APIs, but the pthread APIs don't work the same way. Someone reached out to me once about trying to simulate SuspendThread by sending the thread a signal, then registering a signal handler and blocking in the signal handler.
H
Yeah, and I don't think it worked out; I never heard back about whether it worked out. I was kind of working with them to say, like: I don't know, I'm kind of scared of this approach. And then you also run into issues where we have various helper frames in the runtime, and those helper frames may not actually use, like, proper Linux unwind information, and so then libunwind won't handle them right.
H
So, getting native stack frames... long story short is, if you want to do in-proc profiler-based sampling, there's not a super good story for native call stacks, but you can do managed call stacks. In 3.1, I think, I added a profiler API, SuspendRuntime and ResumeRuntime, and what it will do is just pause all the managed threads where they are, but not do a GC,
H
not do anything else. So you call SuspendRuntime, and it will park all the managed threads in a good known spot, so you can just walk them. Then you can use the thread-enumeration API to go through all the managed threads, and then you can call DoStackSnapshot on every managed thread, and then you'll get all the managed call stacks.
H
And then you can call ResumeRuntime to resume the runtime. What you basically end up doing is the same thing that EventPipe does, which is: you get just managed call stacks, at whatever frequency you want, but you're doing it yourself instead of having to open up an EventPipe session.
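The suspend, enumerate, snapshot, resume loop described here is native ICorProfiler code in practice. As a runnable stand-in only, this Python sketch does the analogous thing with `sys._current_frames()`: one sampling tick that snapshots the stack of every live thread. The thread name and the `worker` function are invented for the example.

```python
import sys
import threading
import time
import traceback

def snapshot_all_threads():
    """Take one 'stack snapshot' of every live thread, like one sampling tick."""
    stacks = {}
    for thread_id, frame in sys._current_frames().items():
        # Frames are listed outermost first, innermost last.
        stacks[thread_id] = [f.name for f in traceback.extract_stack(frame)]
    return stacks

def worker():
    time.sleep(0.5)  # park the worker so the sampler can catch it here

t = threading.Thread(target=worker, name="worker")
t.start()
time.sleep(0.1)  # give the worker time to enter sleep()
sample = snapshot_all_threads()
t.join()

# The worker's snapshot ends in its target function:
worker_stack = sample[t.ident]
```

A real sampler would run `snapshot_all_threads` on a timer and ship the stacks somewhere; here one tick is enough to show the shape of the data.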
C
Well, so EventPipe does almost exactly the same thing.
H
H
Now, like everything, there are pathological cases. You know, you can have an app where it's doing something pathological and suspension might take a while. But just to give you a number: as long as you keep the sample times not super frequent, if you're just doing sampling, say, every 100 milliseconds, it should add maybe five percent overhead to the app, you know, three to five
D
percent. Two questions about the overhead. Right now I'm just reasoning about this; at least for OpenTelemetry it's all kind of theoretical, and I'll digest it all, and then we internally will discuss when we want to work on this, right? But one thing that is in my mind when I'm thinking about this is: essentially, this should be, like,
D
H
C
Absolutely, absolutely, and we
D
have, like... that's why you notice, in all these conversations that we have: you guys talk about features, we talk about performance. Because we actually do a lot; we have a lot more overhead, but we have concrete plans on how to drive it down as we continue pushing things into the OpenTelemetry thing. Also, the call-target instrumentation will drive things down a lot, yeah.
G
C
D
H
So, yes. You know, at the end of the day it's up to you to decide what's feasible. You can always make that number lower by sampling less, but then, of course, that's the trade-off: the less you sample, the less accurate your data is. So if you sample once every 10 seconds, now you add zero percent overhead, but then the data may not be worth anything if you only sample once every 10 seconds.
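Those numbers fit a simple back-of-the-envelope model (rough arithmetic only, not a measurement; the 4 ms pause cost below is an assumed figure): the overhead is roughly the pause time divided by the sampling interval, so stretching the interval drives it toward zero.

```python
def overhead_pct(pause_ms, interval_ms):
    """Rough model: fraction of wall time the process spends paused for sampling."""
    return 100.0 * pause_ms / (pause_ms + interval_ms)

# An assumed ~4 ms suspend-and-walk every 100 ms lands in the "three to five percent" range:
at_100ms = overhead_pct(4, 100)
# The same pause once every 10 seconds is negligible:
at_10s = overhead_pct(4, 10_000)
```

The model also shows the flip side stated here: a 10-second interval costs almost nothing precisely because it collects almost nothing.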
D
The API is flexible; at runtime I can play tricks where I sample, like, every 100 milliseconds for a few seconds and then go back to sampling less often.
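A schedule like that can be sketched as a tiny state machine (a hypothetical policy in Python, not any shipping agent's behavior): sample sparsely by default, burst at 100 ms for a bounded number of ticks when something interesting happens, then fall back.

```python
class AdaptiveSampler:
    """Toy sampling schedule: slow by default, short 100 ms bursts on a trigger."""

    def __init__(self, slow_ms=10_000, fast_ms=100, burst_ticks=30):
        self.slow_ms = slow_ms
        self.fast_ms = fast_ms
        self.burst_ticks = burst_ticks
        self.remaining_burst = 0

    def trigger(self):
        """Something interesting happened: burst for the next N samples."""
        self.remaining_burst = self.burst_ticks

    def next_interval_ms(self):
        """Interval to wait before the next sample."""
        if self.remaining_burst > 0:
            self.remaining_burst -= 1
            return self.fast_ms
        return self.slow_ms

s = AdaptiveSampler(burst_ticks=3)
before = s.next_interval_ms()                       # idle: slow interval
s.trigger()
burst = [s.next_interval_ms() for _ in range(4)]    # three fast ticks, then slow again
```

The bounded burst is what keeps the average overhead close to the slow-path number.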
C
H
H
On the EventPipe side, you can control the frequency of the sampling, but that's about it; you're basically saying: sample all the threads every however-many milliseconds. Okay, but I can frequently change this parameter? Yeah. Okay. You reminded me that I have a sample somewhere.
D
Yeah, so I already started looking, to test. I looked at the ICorProfiler-based thing first, because I was oblivious to the problems that you mentioned about Linux.
D
I didn't realize that. But then I looked at the whole synchronization thing, and I got scared about the synchronization hazards that you also mentioned. So it seems like that wasn't addressed before version three, when you added the suspension API? Because really, if we want to support this, it would be starting with 4.5.
C
H
So there's not a good answer for before 3.0, except to say that, with 2.1... so 2.2 is out of support; you know, 1.0, 2.0, and 2.2 are all out of support.
D
OpenTelemetry supports the full framework starting with 4.5, yeah, and Core starting with 2.1.
H
Okay, so 2.1 is going to go out of support three years after we released it, which I think is this year, in 2021, yeah. But if customers are running it, they will support it. So there's not a good answer; there's just not a really good answer. How does New Relic do it?
G
Yes, so I can talk about that a bit. For us, the simplest thing to do, as far as our profiling was concerned, was: we're using the profiling APIs, and so we've got it working for both .NET Framework and .NET Core, and .NET Core on Linux, and
G
G
So, no. I believe we'll support the profiling for older .NET Core versions on Windows, but when we detect that we're running on Linux, we just won't do it unless you're running at least 3.1.
G
D
On Windows, you essentially use what David described? Well, I may be confusing you, sorry, I was making notes, and
G
G
Leveraging the profiling APIs to be able to suspend a thread and do the stack walk.
D
You don't care about native frames, right?
D
Okay. And if I wanted to, like... how, why did you decide to do it this way rather than the eventing and all this stuff?
G
G
So that's good to hear, but my experience with EventPipe was actually using it to try to extract garbage-collection information from the runtime, and that's where we've run into many problems prior to 3.0. And, in fact, we still have some quirks with it, even in 3.1, especially if we're disposing our event listener (not event source, but event listener) that's trying to listen for those events.
D
So you still don't recommend it for usage, or...?
G
I'd say it's definitely worth playing around with, seeing if it makes sense, but it's not available for .NET Framework. So it's just a question of: do you want to have two different solutions for doing this profiling, or... well, the profiling APIs work best, because the difference between supporting Windows and Linux is really small.
D
D
For... I mean, if it was EventPipe, then for the full framework it would be ETW on Windows.
H
Yes, but sort of. Yeah, that's true. For real time... I think you can start a delayed session without being superuser, but for real-time listening you definitely have to, and you may even need it for
D
You could do it in the same process as the... I think you can listen to ETW events emitted in the same process without that, but then I'm not sure exactly what happens with the CPU probes. Yeah, I'm not sure either.
D
I see. And so, if you use the profiler APIs, then you just get stack pointers, right? And then where do you get the information to actually construct a proper managed stack, with all the names and such?
H
The API is called DoStackSnapshot, and you pass in a StackSnapshotCallback, and it will actually pass you a FunctionID, just like everything else in ICorProfiler. So basically the workflow, if you're using the raw ICorProfiler interfaces, is: you call SuspendRuntime, you enumerate the managed threads, and then for each thread you call DoStackSnapshot, and then that will call you back.
H
You could, you know, go further if you wanted line numbers, but that is gonna be more tricky. Line numbers are more difficult, because the runtime doesn't even know line numbers. Once you start talking about line numbers, you're talking about managed PDBs; literally, the runtime has no concept of line numbers.
H
All that stuff is stored in the PDB output by the C# compiler. So now you're talking about... you'd have to open up the PDB, and none of that's automated. There are other tools to open up PDBs and do that sort of thing, but not through the profiler API. Do you guys do this, Chris, or...?
G
Yeah, we don't capture any line numbers. But Greg Allen just posted a link in the chat that I think will be of interest to you, and it's basically... this is probably how we're actually using the profiling APIs. All right, I've been kind of quiet, I've just been listening along, and it's a great discussion, but yeah, that's basically where we are: on Linux, we're suspending the runtime and then we're resuming the runtime.
B
D
No, I'm just kind of thinking about this. I see. And you guys at New Relic: what is your customer recommendation? Run it all the time, run it on demand, or run it when you're investigating an issue? What's your approach?
G
We would say that, okay, you can manually run this thread profiler and it'll give you information about what the call stacks are looking like, and so you can get an idea of where you might want to add some custom instrumentation to get some better visibility.
G
G
G
H
Sometimes I work at a low enough level that I'm, you know, actually maybe not the expert on the end-user scenario. But kind of the way I think about it is: I don't even see a need for constant stack sampling all the time. So it would be interesting to collect stacks on demand for, like, unhandled exceptions, or maybe, you know, some sort of trigger cases where, in response to an event, you're collecting stacks for that event, and then maybe once it reaches a threshold, you say: okay.
H
Some node somewhere is no longer responding to requests; now let's turn on stack sampling. But even if it were, let's say, zero percent overhead, no overhead whatsoever, and you could turn on stack sampling: what would you do with just stack sampling, constantly, for every single node, every single process? What is the benefit? Or would you just be inundated with data you don't use?
D
Yeah, yeah, I mean, that's a good question. I don't have a good answer yet; I'm just starting to think about it. But do you guys know, like, roughly, the order of magnitude of the overhead? Because you guys really have an actual thing that works for customers. What's the order of magnitude of the overhead that you cause?
B
D
Yeah, yeah, but, like, your features: you can either do CPU profiling, or you can also do this method-level thing, where you associate the profiling information with spans, right? Well, you don't do this right now.
G
G
D
Because what I was thinking, when I'm just generally thinking about profiling, right: traditionally, profiling gives you a flame graph, where you have, like, call stacks to identify a place in code. But in a production scenario, you really want to put it in the context of what was the entry point that caused this stack to occur. Because, you know, you might say: oh, this method is eating up a lot of CPU, and then you're starting to look at it.
D
But it may be very relevant that when this method is called from a particular entry point in your service, it does eat a lot of CPU, and from another entry point it does not eat a lot of CPU, and that could be very relevant. And so this whole, like, stack collection needs to somehow also collect the current span, in one way or another.
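That idea, tagging each sample with the active entry-point span, can be sketched like this (a Python toy; the span names, methods, and counts are all invented): aggregating per span shows a method that is hot under one endpoint but not another, which a plain flame graph would blur together.

```python
from collections import Counter

# Each sample: (entry_point_span, leaf_method). All values invented for illustration.
SAMPLES = [
    ("GET /search", "Regex.Match"),
    ("GET /search", "Regex.Match"),
    ("GET /search", "Regex.Match"),
    ("GET /health", "Regex.Match"),
    ("GET /health", "Json.Serialize"),
]

def hot_methods_by_entry_point(samples):
    """Aggregate leaf-frame counts per entry-point span."""
    agg = {}
    for span, method in samples:
        agg.setdefault(span, Counter())[method] += 1
    return agg

agg = hot_methods_by_entry_point(SAMPLES)
# Regex.Match dominates /search but is an also-ran under /health:
search_top = agg["GET /search"].most_common(1)[0]
```

In a real agent, the span would be read from the ambient trace context at sample time; the aggregation step is the same.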
G
Yeah, we're not collecting the actual stack trace, but, like I said, depending on how much instrumentation is in place for that type of framework, you might see your entry-point span, and then, like, in the case of ASP.NET Core,
G
there might be another span for, kind of, the middleware that's executing, and then perhaps another span for, like, your Web API controller, and then you might have another span for some sort of external call, whether it's a database or an HttpClient call. And those are kind of, like, your big hit points, where you're most likely to be spending a bunch of time, and then within that context...
G
So all of those things would generate spans, and from there, if you have the timings of those spans and the parameters that were associated with that request, you could then put together an idea of: this request is causing problems, but this other request isn't, even though it's the same controller.
D
It's similar for us right now as well. I'm just thinking, you know, as I'm looking at profiling, before even, like, doing anything, I'm trying to kind of have a big picture of it. And it seems to me that if we were to approach profiling at any point in the future, then whatever architecture we pick, this feature that I described should be eventually possible. Maybe not in any of the initial versions, but eventually it should be possible to relate profiling information and spans.
H
I have to run now, and I'm happy to talk about this with anybody, you know, that wants to. So if you want to, like, if you want to even book another meeting sometime this week or next week, like, separate from the OpenTelemetry one, I'm more than happy to show up. I just have to run right now.
D
Of course, yes, thank you, I really appreciate your time. Actually, me too. So what I'll do is, I'll collect what we discussed into a bunch of, like, coherent bullet points, and I will spend a couple of days to just play around with the pointers that you gave me, with the tools, and then I will let you know, if you guys want to know about it.
G
Yeah, Greg, I'm curious about what you find out and run into. And then there was one other thing that I've seen, that was actually done in some logging libraries. I want to say I saw the code in Serilog and in NLog, but they actually used some of the managed debugging libraries to try to get stack-trace information, and it even attempts to get line-number information, all from managed code, within the context of somebody writing something out to a log, yeah. So it's actually...
H
C
But I really do have to run now. Thank you, David, Chris. Any tools that I should look at?
G
Yeah, I want to say that I saw it in Serilog. They've got this concept of enrichers, and all an enricher is, is a plug-in to Serilog that allows you to add data to a log event. They have a line-number enricher, and I want to say they've got a file-name enricher, and they warn that using those can cause a performance hit.
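Serilog's enrichers have a rough analogue in Python's logging filters. As an illustration of why per-event caller enrichment is costly (this is a toy, not Serilog's implementation), this filter walks the stack on every log call to attach the caller's function name:

```python
import inspect
import logging

class CallerEnricher(logging.Filter):
    """Toy 'enricher': attach the calling function's name to every record.
    Walking the stack per log event is exactly the cost such enrichers warn about."""

    def filter(self, record):
        # Skip this filter's own frame, then skip logging-internal frames.
        for frame_info in inspect.stack()[1:]:
            if "logging" not in frame_info.filename:
                record.caller_name = frame_info.function
                break
        return True

records = []
logger = logging.getLogger("enricher_demo")
logger.addFilter(CallerEnricher())
handler = logging.Handler()
handler.emit = records.append  # capture records instead of writing them out
logger.addHandler(handler)

def handle_request():
    logger.warning("slow response")

handle_request()
```

With line-number or file-name variants the walk additionally has to consult source metadata, which is why those are flagged as a performance hit.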
G
D
Yeah, yeah, it makes sense, yeah. I think one of the considerations between all these approaches, if we even do this, right, is: if we take the approach of using the ICorProfiler API, then everything is native code, and if we use ETW and EventPipe and whatnot, then things can be managed code, which is always kind of nice to work with. But yeah, cool, anyway. I also have to run.
D
Thank you very much. I'll go through this, and we can have a conversation, like: what did you find out, what did I find out. Sorry, I've been the one speaking this whole time.