From YouTube: 2020-10-28 .NET Auto-Instrumentation SIG
A
It's like, sometimes I'm the organizer of some Zoom meetings, and for some reason I somehow don't see that pop-up to allow the guests into the meeting. I don't see it; I keep doing the stuff that I'm doing, that thing is in the corner of the screen, and the person is waiting. You know.
A
When I joined Splunk it was a red zone, so everything goes, but...
A
By the way, talking about this kind of tool, there is a conversation that somehow they are going to move this thing to Slack instead of Gitter, but that remains to be seen.
A
All right, so I think it's time to get started. I kind of let this fall off my radar last week, and now I'm back trying to merge the reorg. I can merge the reorg, but the EasyCLA check is not allowing it, and I'm suspecting that's related to the history that I'm trying to keep.
D
Is it the CLA stuff?
A
No, it's not the CLA stuff; it's the Datadog stuff. I don't know. Perhaps Datadog had some people that contributed to the repo who are not members of the Cloud Native foundation.
D
Yes. So if you're merging changes that only happened since we created the repo, then everything should be fine; but if it has people who, you know, historically have contributed, then, yeah, then we need to squash it somehow.
D
So what I've done in the past: I was essentially creating a bunch of temporary branches inside of otel, and I was doing squash merges. I was creating a PR from one branch to another, approving it myself, and making it a squash PR when necessary. That way I was selectively cherry-picking, then squashing, and then selectively getting rid of specific people.
D
But then I realized it's too much; that was the very initial import, and I just squashed everything. But maybe, if you have more history than just the recent changes, you can create a temporary branch, pick only the stuff with the old history into that branch, squash that, and then put the new changes on top of it. Then you have a sub-branch with the old stuff squashed and the new stuff having the right history, and you can merge from there, in turn.
A
Yeah, so I have to figure out a way to squash the old stuff, and then we go from the top. Okay, so I'm going to follow up on that. Actually, I pinged a bunch of people from OpenTelemetry in that regard, but I think nobody has had the same problem as us. It seems that I need to squash the changes.
D
Yeah, it might be. But what did you do? You took the current state of the repo and you merged it into a... like, you created a... what exactly did you do?
A
Right, no. I fetched everything first, before fetching a sub-branch; then I checked out our master, and I did a git checkout and a git merge on a specific SHA just before the reorg, the last one. So what I'm thinking happened is that, if I recall correctly, when you do the merge this way, it's going to pull in the whole history, you know.
A
So I think the whole history is probably there. The bot doesn't say whether it's people without a signed CLA, but I think just the size, the number of commits in that history, is too big for the bot.
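A quick way to see what a CLA bot is facing is to list the distinct author emails in the commit range being merged. A minimal sketch in a throwaway repo; in the real repo, the base would be the last commit before the imported history, and the emails here are obviously placeholders.

```shell
set -e
# Throwaway repo: one commit by alice (the "existing" history),
# one commit by bob (the range a CLA bot would evaluate).
tmp=$(mktemp -d); cd "$tmp"; git init -q demo; cd demo
git config user.name Alice; git config user.email alice@example.com
echo 1 > a; git add a; git commit -qm base
base=$(git rev-parse HEAD)
git config user.name Bob; git config user.email bob@example.com
echo 2 > b; git add b; git commit -qm change

# Distinct author emails in the range under review:
git log --format='%ae' "$base"..HEAD | sort -u
```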
D
Or actually the other way around: I created two branches. I created a sub-branch of master, which is a copy of master. Then I added the Datadog repo as a remote, and I pulled from that remote into a new branch. Now I have a branch, but it's completely disjoint. I did it at that time from the Datadog master, but now you would need to do it not simply from the Datadog master but up to the same commit as... actually, no, now you can do it, yeah.
D
You have this sub-branch, right. Then you have a completely disjoint branch where you just pulled from Datadog, and then you create a PR from that Datadog branch into your master branch. Not into the actual master, but into the one you just created as a branch from master, right. And that PR will ask you to put this flag that says merge unrelated histories; that's essentially what they did before.
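The "merge unrelated histories" flag mentioned here is git's `--allow-unrelated-histories`. A self-contained sketch of the shape of that merge, with a disjoint donor repo standing in for the Datadog remote; all the names are placeholders.

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
# Two repos with completely disjoint histories.
git init -q ours
(cd ours;  git config user.email us@example.com;   git config user.name Us
           echo ours  > f; git add f; git commit -qm "our history")
git init -q donor
(cd donor; git config user.email them@example.com; git config user.name Them
           echo donor > g; git add g; git commit -qm "donor history")
cd ours
git checkout -q -b import        # branch cut from master, as described
git remote add donor ../donor
git fetch -q donor HEAD          # donor tip lands in FETCH_HEAD
# A plain merge refuses disjoint histories; the flag below overrides that.
git merge -q --allow-unrelated-histories -m "Import donor history" FETCH_HEAD
ls                               # both f and g are now present
```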
D
So hopefully it will see the stuff that was already merged previously and only recognize the new changes, and the new changes have only people who are in Cloud Native. So that's what I would try first; if I were to do this, I would go this route. It's similar to what I described in this document, where we then decided not to put it in the repo. But you remember I had this little document about how to import, so there's still a PR open for it.
A
Yeah, okay, I'll try that path. So yesterday... no, Monday, I had a chat with Greg, also about the vendoring.
A
The vendoring is basically ready, but I've been kind of holding off the merge because of this thing. I want to do that merge only after we do the reorg, because the feature branch that we are working on, Greg, we should also move to the new organization. So if people want, I can put a PR out so people can see it, but I'm not planning to merge it, because I just want to merge after the reorg, you know.
A
Yeah, that's my assumption too; that's why I didn't put the PR out for that, because I want to resolve that one first. Then we can already look at the kind of final state that it should be.
D
So let's do the Datadog master, whenever it's completed, with the master feature branch, and then we can do the PR from, yeah, your branch to the feature branch.
A
Yes, yes, all right. Those were the two things that I wanted to talk about today. Unfortunately, I think we hoped to be going a bit faster, but we have a bunch of other commitments; I had a conference last week. But yeah, those were the two points that I wanted to mention for today.
A
Oh, and by the way, I took a look at the document about the changes to the instrumentation; looks pretty good to me. I left just a couple of comments. The doc... I think Tony wrote it, right? The CallTarget one. Yes, yes.
D
Comments, okay. So what I wanted to show you guys is this. Over the weekend... you remember last time we chatted about the exporter, and that we can kind of start thinking about its shape, so that we turn a theoretical discussion into a practical one, right? I think, Chris, you mentioned that. So I took this prototype that I mentioned several times.
D
It was created for exporting activities, but not around the wrapper that we're talking about; rather around, you know, referencing activities directly. So I took the prototype, which was sort of working end to end, and I just spent some time on the weekend refactoring it into the repo and changing it to use the...
D
...the stub. Like the stubbing, the reflection wrapper. And because the reflection wrapper is not yet working, I actually couldn't run it to test it; it's just more of a careful refactoring of stuff that used to work. So, the reason I'm mentioning it: I wanted to point you guys to it, so that if you feel there is value in it, you can start looking at it. I previously pointed you to this obscure feature branch in the Datadog repo that was never updated.
D
Hopefully this one is, like, current and whatnot. So in the feature branch, activity-centric tracing, in sources, I have created a bunch of stuff. The whole activity reflection wrapper, wasn't this in a library? So I created one folder here with all the files that I'm building into separate different projects.
D
Just so we have some shared code that is not necessarily a public API. And then here I refreshed all this code, and it's not actually a lot, but it does... I can't open it in Visual Studio right now, because I'm on a different laptop. But if you look here, there is all the stuff that I was talking about. If you start reading, you can actually start at the public ActivityCollector, and from there it's relatively trivial.
D
There
is
a
thing
that
sets
up
a
a
listener
through
a
stub
and
starts
a
background
loop
and
then,
in
the
background
loop.
There
is
a.
D
...that stores activities, and yeah. I don't want to go through the source right now, but you can look at it; it builds. So it's up to the new shape, but it doesn't run, because a lot of the stubbed APIs that it uses throw "not implemented" for now.
D
So it collects activities in a slightly different way than the current implementation. It's lock-free and doesn't use long arrays, so it will be much nicer to memory. The only fancy thing that it does... I don't know whether other vendors are affected by this; Datadog is. Right now we cannot export just spans; we must export complete traces. So, because of that... let me go here.
D
This shaped the public API here, the collector... sorry, the exporter, the exporter interface.
D
So essentially, we cannot export activities just as a collection; we must export traces. A trace here is a local thing: essentially the local root span with all its sub-spans. Not the complete trace, but the local part of a trace, right, the non-re-entrant part. And so, in order to account for this, the interface needed to be slightly different.
D
The value is: if a vendor chooses to play this game... say you decided that a particular trace is not interesting and you sampled it out, but then some sub-span of it actually encountered an error or some other user interaction. Then suddenly this span became interesting.
D
Now you probably want to keep it. Right now, if you simply sample things out, you will have the span but not the trace, at least not even the local trace. Of course, in a distributed setup you might have lost more stuff, but if you are collecting complete traces, then, upon realizing that you now need this span, you can suddenly keep the entire trace.
D
Very efficiently, because it's all cross-linked. Whether or not it's interesting for all vendors, I don't know. The reason we needed this functionality for Datadog is, among other reasons, that it's just the way our backend works: it needs to see a trace as a whole. In order to do this fast, I needed to build it directly into the engine. You could have done it within the exporter, bucketing and waiting until something is completed, but then it becomes super slow: lots of lookups and whatnot.
D
So I built it directly into the engine, which drove the interface for the activity exporter to be a little bit like that. Other than that, there is this single loop, no async I/O, and all these things that I mentioned earlier. There is a whole bunch of comments explaining why and where.
D
Take a look if you're interested. If not, then what will happen is: once the stubbing is ready and the whole reflection wrappers are completed, I'll hook it all up, and then we will be able to start testing it end to end, and then, of course, we'll definitely need reviews and whatnot.
A
We particularly don't need the traces to come together, but this reminds me of something that was discussed some time ago in the OpenTelemetry spec: the exact case that you described, about deferring the sampling decision, and that kind of needs to keep all this stuff alive, let's say, until the decision is made, you know. So I think there is a relation to that, but, to be fair, I don't know the status of that.
D
Yeah, the constraints. So, in terms of performance, of course you want to make the sampling decision as early as possible, but as late as necessary, right. Essentially, if you decide that you don't want to send all traces to the collector... because in the common case you sample things out: say you have a very high request rate, so you sample out a high percentage of them.
D
Some of them you were thinking you'd sample out, but then an error occurred and you want to keep them; but that happens rarely, right. So you don't want to suddenly move the entire sampling into the agent, out of process, because then you're serializing everything and deserializing everything. So doing it this way, where sampling happens at the end of the local pipeline, is more efficient. And, of course, you would rightfully bring up...
D
...that if you do it like this, then you might keep the local part of the trace, but the remote parts are still independent; you cannot rescue them. True. I'm just trying to be pragmatic. It's like: suddenly I received an error code from some dependency that I invoked, and this is part of a large distributed transaction, where I cannot suddenly make the entire thing be kept unless I do the sampling at the very end, at the backend, which is not scalable, right. But at least I keep the complete local execution.
D
But, to be honest, right now it's a nice side effect of the fact that at Datadog we simply have no choice other than keeping the entire trace together. So that's what drove this particular shape. The constraint here that I have: we can certainly change the names of the API and of the methods, and the shape, but the constraint is that we must be able to send a complete trace, and it must be fast.
D
So, because of that, we need to group things into traces relatively early. Right now I'm still using some bucketing, just because I wanted to time-box the effort. But there is a place here, in internals, called the trace cache, and there's a long comment about how this can be done faster.
D
You can subclass Activity, that's a Reflection.Emit thing, add a bucket there, and store traces in there. It would be much faster, but this should work for now, so yeah.
F
One question that I have regarding this: I can understand why the decision was made to implement it this way, at least for now, with this tracer. But I'm curious about how an equivalent thing could be done with the SDK itself, because I feel like this is something that would likely come up in that use case as well. So I'm wondering if there's something that we can contribute there, or that we can leverage from what they have, because I know that there are different samplers that can be set up, and so it might be interesting to pose some of these questions to that SIG as well.
D
Yes, so in theory... so here's what happens; basically, the description of the fast way says...
D
Activity, of course, can have a custom property. In the SDK it's easier, because there we can rely on version five; here we cannot. So here I'm trying to stick a special variable into Activity that is a strongly typed container, from which I can do very fast lookups instead of going through that dictionary.
D
So it could also be in the custom property, but I want to do it even faster: I will subclass Activity, and every time I create an activity using auto-instrumentation, I will not actually be creating an Activity instance; I will be creating an instance of the subclass. And then, whenever I need to associate additional information with an activity, I will use a ConditionalWeakTable, which is just a slightly slower way of adding a private field. The SDK can do that too.
D
The collection can happen in a similar way. Essentially, the collector, the one that I implemented, has two modes. If trace collection is turned off, it simply takes every span that is completed, puts it into this very fast collection, and then, when a certain threshold is reached, it hands the collection over to the exporter.
D
If trace collection is on, it doesn't put spans in the collection; it only puts root spans in the collection, which essentially means it has one object per local trace. So it only does that, and then it does the same thing, and it has some logic to figure out whether a particular span is a root span or not.
F
Yeah, I'm mostly asking the question because I'd like to not reinvent the wheel as much as possible with regard to certain things that people may expect from OpenTelemetry. This way, people don't necessarily have to re-implement different exporters, re-implement different samplers, or any other extension points that are necessary. Because I think AWS, for example, looks like they had to build some custom propagation logic in order to support their needs.
D
I think you're right. I should come to that meeting, bring it up, and at least start the discussion. I think there are two things that we should split, because the decisions there should be independent. One is the whole grouping-into-traces question, and the other is the asynchronous-versus-synchronous behavior inside an exporter. They are both relevant, but we could make independent decisions on those fronts.
D
It seems to me like a no-brainer improvement, clear value, versus whether or not grouping into traces is something that they want to go after, and to what extent it needs to be architected this way or some other way. That would be an interesting discussion; I don't know what the outcome would be.
D
But yeah, I'll try to make it next week and mention it to them.
F
Yeah. At the very least, what you have here should be enough to allow us to begin measuring the performance of the approach that we're taking, as far as listening to the data goes.
D
It will be a little hard, because the stub is not ready, right. Once the reflection wrappers are ready, absolutely, yes. I expect a relatively significant improvement when this is also done, so this will need to go into the stub. Right now I'm using just buckets to look things up to group into traces, but with this technique it should be faster, because right now it's like a global concurrent dictionary.
D
That's it; I just wanted to let you guys know that this happened. The whole reason I started doing this: essentially, most of the API is not used by any of this, so we don't need to wrap it all, and that kind of helps us. We don't wrap all the Activity APIs; we only wrap the two Activity APIs that we actually need, and then we can add more if we ever need them.
E
Okay, so I had one last thing that I stuck on the agenda at the end there. I just wanted to spend a few minutes talking about our plans for doing some sort of a beta release, and see about maybe coordinating that in the same way they do it in the .NET SDK SIG, where they actually have, you know, beta milestones, with the issues associated with them. And whether that's something we could get going with, and kind of identify what we think at this point is a reasonable target date for a first beta release.
A
I don't have a good feeling for that at this moment, because we still have a lot of work to do before that. Personally, I'm a bit unhappy about that, but I actually don't have a suggestion to come up with, because I think we are still struggling a bit to get momentum on the work that we need to do. Greg has been doing the prototype.
E
Yeah, I mean, I'm fine with that, and I want us to be realistic about where we're at and everything. So, you know, I'm totally fine with us not having a date right now, at this point. But does it still make sense, at least, to create a beta milestone?
E
Then, you know, identify the issues, and at least come up with what we know is going to go into that first beta. At least we can organize around that a little bit, in terms of: okay, we know these things need to make it for the beta, other things we can push out a little bit, and at least kind of start organizing the work like that.
A
That sounds really good to me. You know, it becomes a forcing factor for us to eventually get to be able to say: hey, I think by X date we are going to be able to do stuff, you know.
E
Okay, yeah. And we certainly don't have to... you know, if we create a milestone in the repository, we don't have to put a due date on it to start off with anyway, so yeah.
E
Okay, I will see if I can do that. I've got to see whether I've got the right permissions in that repository or not, but yeah, I'll check on that and ping you.
D
Yeah, I feel the same about the progress. But, for example, right now I'm working on this in a very time-boxed way; there are other things I need to work on as well. So I will certainly continue making progress, but it will be, you know, not as much as I would love to. It is what it is.
D
I have more profiling questions, if David feels like it, but nothing on this.
D
But, because I know that it's not central to the conversation, I want to make sure that nobody else has anything on the current ongoing work; and if not, then I'll happily ask those questions. Yeah.
D
Okay. So, like we already chatted about over the last couple of weeks, and I shared the notes with everybody, this is just a continuation of thinking about profiling. I mean, New Relic already has a profiler, so if this ever became interesting for this group, we can always reuse some of those thoughts. But my thinking is basically: can profiling happen...
D
Like, ten years ago people used to believe this kind of distributed tracing cannot happen in production non-stop, because it's too expensive.
D
So the thinking that I'm following is: now we think the same about profiling, but does it really need to be that way? Can this be done so fast that it can be on all the time? And so I was looking at ETW, I was looking at EventPipe, and I was looking at the profiling API, and so far I'm not entirely convinced that EventPipe can be on all the time.
D
Yes, I wasn't precise; you're right. I meant that for profiling you need stack sampling, and I think one of the reasons for it... and David, I'm guessing here, so one of my questions is: is it because the stack sampler inside of EventPipe tries to do a really good job about always having good stacks, including native stacks?
D
Oh yeah, so does it stop threads one by one, does it stop all threads, or how does it do it?
G
And the reason is that there's a lot of trickiness, there are a lot of caveats when we switch to Linux. There are various things, like the managed IL stub helpers.
D
Interesting. So what if we did the following... actually, two questions. One is about managed stack walking using the DoStackSnapshot profiler callback, or whatever that profiling API call is. If I pick one thread, so I have one collection thread, then I'll just pick a thread, and, rather than suspending the entire runtime, I will loop through all the threads.
D
So whatever the next one is, I'll pick it and say: walk the stack for me. I actually haven't tried it, but according to the docs: if it starts in managed code, if it's already in managed, everything is good; if it's in native, then, depending on what type of native, it may or may not be good. But even if it's in the bad type of native, it still won't blow up; it will simply say: I cannot walk the stack.
G
Not always. There are lots of tricks here, lots of things that can trip you up. And it's also only tested on Windows; so if you're talking about Windows, then it works that way.
G
...what that thread is doing. So if that thread is in native code and it happens to be holding the heap lock, like it's doing a native allocation and you just catch it in the wrong place and it's holding the heap lock, now you've suspended that thread, and if you do any native allocation on your thread, you're going to deadlock. And the same thing with the loader lock.
G
So if it happens to be inside the loader lock, and then you do anything that triggers a DLL load, then you're going to deadlock again. And so there are various ways around that. What we've told people to do in the past is basically: you can spin up a canary thread, tell the canary thread to call DoStackSnapshot, and then wait.
G
You know, like ten milliseconds, whatever some reasonable amount of time. And then, if it doesn't complete within that reasonable amount of time, you say: okay, you're probably dead; and then you can kill the canary thread, unsuspend the target thread, and try again later. That's pretty much what we've told people to do. It's inherently unsafe, because you have no idea what the thread is doing, but given those couple of things, that works. But then the issue is on Linux.
G
There is no suspend-thread API; all the debugger stuff on Linux is locked down. So you have to have pthread access, and pthreads have really special dependencies; it's actually something that our team struggles a lot with.
G
But it's untested, so we don't... you know, it's not that it won't work. Hold on. So basically, instead of calling suspend-thread, like you would on Windows, what you can do on Linux is register an interrupt handler and then send an interrupt to the thread you want to walk.
G
Yeah, right, you inject it, and then it will call your interrupt handler, and then you can block in the interrupt handler until you're done walking the thread. And so how do you make...
H
...sure that you...
D
The API promises that it will suspend it for me, so, once...
D
And then there is a big article in the docs that says: if I want to walk a native thread's stack, then I need to suspend it twice.
G
Yeah, so it only does that on Windows. It's ifdef'd, and it will not do the suspension for you on...
D
...Linux. Okay, and then...
G
I don't know the technical details; I'd have to look it up, this is just from conversations with other people. So I've never tried it myself, but I'd have to look at the details. I think it was JetBrains; somebody filed an issue and said that they were doing it this way, and I did a couple of fixes and they seemed happy with it, but I don't actually know how successful they were, or whether they ran into other issues.
D
So essentially, when the runtime walks the managed stack, it assumes that the stack doesn't change in the process; it makes the assumption that the thread is paused. So if you...
G
Oh, you know what, this might not even work. I'm just looking at the code now. The issue is: the way that I intended for it to happen, the only officially supported way, is by suspending the runtime with that suspend-runtime API, and everything else is kind of uncharted waters. And just reading through the code here, it actually just won't work at all: if you call it from a different thread on Linux, it will just return E_NOTIMPL.
D
Even if that thread, even if the thread has been suspended in some way?
D
For that I need to... I mean, I could use some of the other profiler callbacks, but those callbacks tend to be at the beginning of a method and at the end of a method, so they're not evenly distributed across time. So to do proper CPU sampling, I can't, you know, do it that way, because I would be over-focusing on some kinds of events; I cannot sample at random time intervals, right, or at equal time intervals.
G
Yeah, right. You know, the kind of intention behind the feature was that you would run light diagnostics: you're attached and you're not doing stack sampling, and you would do stack sampling kind of reactively. So if you see a bunch of exceptions, or something you care about, then you would say: oh, okay, now I need to actually collect some stacks; and then you turn it on, collect the data, and then turn it back off. That was how that feature was thought about. So you're right.
D
Yeah, well, for previous .NET versions it is what it is, but going forward, maybe not right today, but maybe in the long term, we can have a conversation about how we actually make it work, you know. Because we can do it for Java. So this is not about a Datadog thing; it's about a .NET thing versus other runtimes, right.
D
Okay, so there is another possibility: you can keep a shadow stack using the callbacks. Every time you enter a method and leave a method, there is a callback, right. I haven't tried it yet, but I read a bunch of articles, and they kind of have conflicting opinions about how fast versus slow it is.
G
It's probably going to be faster. I haven't measured it, but it would be faster than keeping the runtime paused all the time. So basically, what would happen is, if you use ELT hooks, every call, the JIT will...
G
So those are the ones where you get a callback on method enter, you get a callback on method leave, and then you get a callback on tail calls. And the JIT will just call directly to whatever you provide it. So there are three ways to run it; well, there are basically two ways to run it.
G
The third way is deprecated. So there's the fast way, which is: you just give it an address, and the JIT will literally jump to that address. So then you're in charge of knowing all the ABIs and making sure you back up anything you use, and you don't destroy any data and cause corruption. But the advantage of that is it's as fast as you make it; the JIT will literally do a jump instruction to whatever function you provide. But it will put...
G
It's almost like a function call, except it specifically doesn't do any prologue or epilogue for you. It won't back up any registers; it won't do anything except push the return address, so you know where to return to, but other than that, you're on your own. So that's called fast-path ELT, and then there's slow-path ELT, which is: we will do the...
G
...we will actually, you know, back up all the registers, etc. But I bet even slow-path ELT would be faster than the managed stack sampling.
D
Well, I've seen the public API for what you call the slow path: you go to the profiler interface, whatever version, and you give it a function pointer to a function, and then it will start calling it.
G
G
So in that same interface, there are these methods called SetEnterLeaveFunctionHooks.
G
Well, there are three methods: SetEnterLeaveFunctionHooks, SetEnterLeaveFunctionHooks2, and SetEnterLeaveFunctionHooks3. SetEnterLeaveFunctionHooks by itself is fast path, the one with a 3 after it is slow path, and the 2 is the middle one.
G
That one is kind of deprecated and you shouldn't use it, and it's not named very discoverably. You set them all up the same way; the only difference is what the callback is.
D
So it sounds like if I want to prototype something, I go with the 3, and then if it works, I go from the 3 to the 2.
G
G
Yeah, it just skips some register backup and stuff, so it would be faster, but not like crazy; it's not going to...
G
D
Because, I mean, twice as fast: if my callback is doing nothing, then it might be twice as fast, but my callback will actually be doing a bunch of work. I'm just trying to think. Okay, so if we did that, then I would keep a shadow stack. Essentially I would need to take my thread...
D
...ID, look it up in some table that maps my thread ID to some place in memory where I keep my shadow stack, right, and then I just update my shadow stack. I want to do as little work as possible in this code. I think this parameter gets the method token, which is just a number, right? Yeah, it gets passed in. So I put it on the stack, I need nothing else, and I return immediately. And then I can have another thread...
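The shadow-stack idea above can be modeled outside the CLR. As a loose analogue of the slow-path ELT callbacks, Python's `sys.setprofile` delivers "call" and "return" events per thread, so a minimal sketch (illustrative only, not CLR profiler code; the function name stands in for the method token, and `snapshots` is just for demonstration) looks like:

```python
import sys
import threading

# One shadow stack per thread, as described above (thread -> stack).
_local = threading.local()
snapshots = []  # stack snapshots taken at enter time, for the demo

def _shadow(frame, event, arg):
    # Analogue of the enter/leave callbacks: push on enter, pop on leave.
    if not hasattr(_local, "stack"):
        _local.stack = []
    if event == "call":
        # Push a cheap identifier; the CLR analogue would be the FunctionID.
        _local.stack.append(frame.f_code.co_name)
        snapshots.append(tuple(_local.stack))
    elif event == "return":
        if _local.stack:
            _local.stack.pop()

def inner():
    return 42

def outer():
    return inner()

def run_demo():
    sys.setprofile(_shadow)   # install the "ELT hook"
    try:
        result = outer()
    finally:
        sys.setprofile(None)  # remove it
    return result
```

Calling `run_demo()` records snapshots such as `('outer',)` and `('outer', 'inner')`, i.e. the shadow stack at each method entry.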
D
G
So, generally speaking, yes. I'm trying to think if there's anything specific. We have tests that do the shadow stack thing, and we just use a thread-local dictionary, but because it's a test we're not super concerned about performance.
G
Yeah, you would have to be worried about locking, since all of these are going to happen across all the threads. You'd have to worry about concurrent data access on whatever data structure you used: either make it thread-local or lock on it.
G
The thread ID doesn't change. All I'm saying is that your data structure, however you're representing it, would either have to be thread-local, or, if you use one that you could imagine as a dictionary of thread IDs to a list of function tokens, then every thread would be accessing the same dictionary.
D
Yes, yes. I'm just thinking whether I can implement it in a lock-free way, for example by assuming that a thread never dies. Of course it can die: a thread pool can create a thread and then the thread can go away.
G
D
And thread IDs are large numbers; I'll have to think. So maybe, by making an assumption that this collection never shrinks, and maybe wasting a little bit of memory for it, but not too much, we can somehow make it lock-free.
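The grow-only table being described can be sketched as a pre-allocated slot array plus an ever-increasing counter. This is a model, not production code: `MAX_THREADS` is an invented capacity, and the counter is only "atomic" thanks to CPython's GIL; a native implementation would use a real atomic fetch-add instead.

```python
import itertools
import threading

MAX_THREADS = 1024  # grow-only capacity; wastes a little memory, as suggested

# Pre-allocated slot table: index -> (thread id, shadow stack).
# Slots are claimed once and never freed, so no lock is taken on access.
_slots = [None] * MAX_THREADS
_next_slot = itertools.count()   # atomic-ish under the GIL; fetch_add natively
_my_slot = threading.local()

def my_stack():
    """Return this thread's shadow stack, claiming a slot on first use."""
    if not hasattr(_my_slot, "index"):
        i = next(_next_slot)
        if i >= MAX_THREADS:
            raise RuntimeError("slot table full")
        _slots[i] = (threading.get_ident(), [])
        _my_slot.index = i
    return _slots[_my_slot.index][1]

def demo(num_threads=8):
    # Each worker touches only its own slot, so no locking is needed.
    def work():
        my_stack().append("enter")
    threads = [threading.Thread(target=work) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(1 for s in _slots if s is not None)
```

`demo(8)` claims eight slots, one per worker thread; the table never shrinks even after the threads exit, which is the trade-off being discussed.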
G
Yeah, I expect so. I mean, I don't have it solved already, but I expect that's true, and in general, once the thread ID is assigned it shouldn't change. It could be reused, but I don't think a thread will ever have its ID changed.
D
F
I think so. Greg, taking a step back: the goal of this conversation was to try to be able to measure CPU time instead of wall-clock time. Is that correct?
D
Both. Essentially I'm trying to understand whether we can have a profiler that is always on. Now, CPU versus wall clock: in an ideal case I would like both. After our last conversation I actually talked with the Java folks, and they gave me examples where CPU profiling does make sense. Also, looking at PerfView: I went back and re-watched and re-read a bunch of docs for PerfView, and PerfView does CPU profiling.
D
So where you're interested in how much CPU time you actually took versus how much wall-clock time you took in order to do something: I don't know how PerfView does it, but I think maybe the ETW stack sample events just contain whether or not...
D
D
D
But my first thinking was: when I take a stack sample, I don't know whether the thread is actually using its CPU quantum, is ready to run but not running, or is waiting. If I just do stack sampling, all I get is that I'm in a certain API: this is my stack, and this is all I know. That means a naive sampling-based profiler will be wall clock, because I just know that I was in this API; I don't know what I was doing.
A
Yeah, so the problem with that will be oversubscription: you have too many things to run, so the thread waits a long time. As far as I remember, and this is ten years old, so maybe things changed, Windows had the ReadyThread event for that. So yeah, they do. The analysis that was done by experts, and I think PerfView was pretty similar, in that context it was kind of okay.
A
D
Moving things forward: you know, I haven't looked at the PerfView code, but according to the docs it's really strange. It sort of implies that the events that are made by the kernel to collect stack samples...
D
It almost sounds like it only looks at threads that are actually CPU-bound, because essentially you need to collect more data to do wall-clock profiling.
D
A
A
D
Yeah, but imagine: the thing is, in a naive way none of this would be necessary. The naive way, I thought, would actually be wall-clock profiling rather than CPU profiling, because if you just say, every one millisecond I'm going to sample my threads, that's it, but a thread may not actually be running: it's there, it's ready to run, but it's not running, it's not using CPU. So that means it's using wall-clock time but not CPU time.
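The distinction being drawn here, that a thread can consume wall-clock time without consuming CPU time, can be seen directly with the standard library's two clocks: `time.perf_counter` (wall clock) versus `time.process_time` (CPU time). A small sketch, with arbitrary 0.2-second workloads:

```python
import time

def measure(work):
    """Return (wall_seconds, cpu_seconds) consumed by work()."""
    w0, c0 = time.perf_counter(), time.process_time()
    work()
    return time.perf_counter() - w0, time.process_time() - c0

def sleeper():
    # Waiting thread: consumes wall-clock time, almost no CPU time.
    time.sleep(0.2)

def spinner():
    # CPU-bound thread: wall-clock time and CPU time roughly match.
    deadline = time.perf_counter() + 0.2
    while time.perf_counter() < deadline:
        pass

wall_s, cpu_s = measure(sleeper)
wall_p, cpu_p = measure(spinner)
```

A naive sampler that fires every millisecond would attribute both workloads the same number of samples, even though only the spinner is actually using CPU; that is the gap between wall-clock and CPU profiling.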
D
H
A
Thanks, Dave. Greg, as I said, this is very old knowledge, so let me try to remember here. With ETW profiling, when you say "profile", it means it's going to generate one event at the frequency that you choose, and they have a default frequency, for any, let's say, physical thread. And then it's going to correlate, using context switches and that kind of stuff; you're going to be able to correlate what the stack was at that time, right?
D
D
D
The whole business about ETW hinges on the fact that you need to run as admin in order to collect kernel events. So yeah, you need a special privilege. No, no: you need a special privilege to collect ETW events for a specific process, full stop, but to collect kernel events you need to be admin.
D
Okay, and kernel events are the ones with the stacks. Stacks are inside events emitted by the kernel, not by the runtime; the runtime just emits additional events that contain the information to map function pointers to method names. Now, if a profiler is based on ETW, then the space where we would be playing is not what it is now: the data collection technology is no longer in the tracer, or the auto-instrumentation agent, whatever we call it now.
D
The whole collection technology will live in some sort of agent, whether it's the OpenTelemetry Collector or some other process, that needs to be installed on the box, that has the right privileges, that collects ETW on demand at the right frequency. And that means the deployment and maintenance of this local Windows service is specific to every platform.
D
So on a VM it's one way, on an Azure VM it's another way, because you need to get to the VM, on an AWS VM it's another way, and so on. And then there's platform-as-a-service. For .NET on Windows, and I'm talking about Windows only because ETW is Windows-only, the priorities would be VMs, then Azure App Service, in terms of usage. Containers are more important than Linux, but there is no ETW there anyway.
D
So essentially, if we wanted to explore profiling, the whole place we would be playing in would be inside a separate process, and then every time a new platform becomes available, there's a new way to deploy the thing. And specifically from the OpenTelemetry perspective, I don't think there is any kind of reasonable resourcing to play in this game.
D
So as a vendor, you guys, or us, or New Relic, or whoever, may or may not be interested in thinking about this, but as OpenTelemetry this will not fly.
A
Yeah, that was something I was hoping for a bit, because I know that PerfView has been able to run on Unix for some time. I was hoping that the CLR had done something in that regard, and it did do something, but perhaps it's not at feature parity with what exists.
A
A
D
Yeah, so Unix has this perf thing; it's also a kernel-level thing like ETW, and there is not as much support for it. There is an application that you can install on a Unix box to collect things from perf, and it actually also works for .NET.
D
.NET has, it's in my notes actually, in the document that I shared there are links to this tool that does it, but this tool uses kernel APIs. So it is possible to create this tool as an executable, a standalone thing, but it's also possible to create a user-mode library around those same APIs and then ask the kernel to emit those events that contain stacks, and that would be just like ETW. You just need to build some of the pipeline yourself.
D
A
D
D
No, no, no, I mean it is possible. Like Chris was showing, New Relic has a profiler, it works just fine, it's great. It's just, and please correct me if I'm wrong, too slow to be on all the time.
F
C
D
And I think the main reason for it to be not so fast is not that you walk all the threads. Even if you walk a subset of the threads using some sort of smart decision, you first suspend the runtime; that's what makes it slower, right?
F
Yeah, I mean, that's part of it. That being said, we still only do it periodically on Windows as well, where we don't have to suspend the runtime itself.
F
Yeah, we have the same implementation, but even before we supported Linux, we still only did it periodically on Windows. And with this approach you do have to be careful about which threads you're calling the APIs from. I want to say there are two or three different threads involved in the whole profiling process, where you keep one thread managed-only, another thread native-only, and then you've got another thread that's interacting between managed and native.
D
Oh, so once you've collected the data, in order to expand it.
D
That makes sense, that makes sense. Yeah, you do get into a tricky situation: if you ever run managed code on a thread, then it can get suspended for GC, and if you were actually doing a stack walk at the time, then you're screwed.
F
Yeah, the APIs actually have a safety mechanism built in: if you try to use those APIs from a thread that got polluted by managed code, it errors out.
D
D
And if we decide that we have to go the ETW route, then it will be a new thing every time there is a new platform. And I don't even know where to begin on things like functions, where an out-of-process agent is not realistically going to be installed.
A
D
The thing is, whatever we do, we need to start with all the .NET versions. So even if the Microsoft folks add something in .NET 6, then, you know, what about all the existing things?
A
Yeah, but should they release it in 6, let's say: I don't know what will have died by then. 4.5, never; 4.6 is going to take forever to die. Yeah, I don't know.
F
At the same time, depending on what the feature is, it may be perfectly valid to say, hey, we only support this feature for this version of .NET, and we have reasons for it. We may be able to do something not as good on an older platform, or maybe not at all, but there's still a path forward.
D
True, true, but it will need some thinking. The thing is, it's like this: as an engineer, I think about architecture and doing things, and then we talk together here in this group and think about how we can do this, how we can come up with joint standards, right? But our companies own our time.
D
So if we say, okay, here's a feature that we would like to spend some time on in this space: now, if the product organization believes that this will actually generate revenue, then they say, okay, do it, and I trust you on whether we should do it in a standard way across companies or not; you guys figure that out, but it makes sense to do this feature. But then questions like, can we drop support for this version?
D
They will have something to say about it. They might say, well, look, no: if you drop support for this version, then don't work on it at all, because your time is better spent elsewhere, because we have not enough customers, or our important customers are running the version that you intend to not support. So when we were discussing event pipes and saying we can avoid supporting .NET Core 2, that sounded reasonable.
D
But when we're talking about not supporting any existing versions, I don't think it's possible. And these traditional profiler APIs are the same in all of them, so it's not like that: if we support 4.6, we will also support 4.5.
F
Where we wanted to surface some garbage-collection metrics and so on: on Windows, so .NET Framework, what's the best way to access garbage-collection information?
B
F
So you got it from perf counters. Well, that's not available in .NET Core or on Linux, so what do we do there? Well, EventPipe came out, and I want to say that was with .NET Core 2.2, and so that was where we started experimenting with it, and we were able to get the data. But .NET Core 2.1 was out and 2.0 was still highly used, and so we already knew right off the bat that...
F
Sure, well, I don't remember if the library was available to read the perf counters.
F
There might have been that in there too, but anyway, then we ran into problems with getting the data from .NET Core 2.2, because there was a memory leak, and so we then decided: okay, we can support this for .NET Core 3.0 and higher, because that's where the memory leaks were fixed. And so yeah, we're not supporting all of the customers.
F
D
So I think it's good that David mentioned this thing about stopping threads. That will save me some time; I was thinking about actually doing some prototyping around it.
D
But since this is, I mean, we could suspend the world, but I really kind of don't believe it will be fast enough. I don't know what you guys have seen; what's your opinion about this?
F
Yeah, so for me, I'm still trying to understand all of the value that having this profiler provides. For me, the biggest value that I've personally seen from it is in aiding the discovery process for different people.
F
So let's say you just got this application, and the agent's running, it's sending up data, but you're not getting a whole lot of visibility into what's going on there. So you can run the thread profiler to get a better idea of what type of code is running in that application, and that allows for a couple of things: one, you can see what libraries are being used, and two, it can kind of give you an idea of where time is being spent.
F
You know, I don't remember off the top of my head; I want to say that for a two-minute window we'll sample every 100 milliseconds, and I think that's the default setting.
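A naive version of the periodic sampling described here can be sketched in pure CPython with `sys._current_frames()`, which snapshots every thread's current frame. As discussed earlier, this is wall-clock sampling: it cannot tell whether a sampled thread is actually on the CPU. The interval and window below are made-up demo values, not any vendor's defaults.

```python
import sys
import time
import threading
from collections import Counter

def sample_threads(interval=0.01, window=0.3):
    """Naive wall-clock sampler: every `interval` seconds, record the
    function at the top of each thread's stack, for `window` seconds."""
    counts = Counter()
    deadline = time.monotonic() + window
    while time.monotonic() < deadline:
        for tid, frame in sys._current_frames().items():
            counts[frame.f_code.co_name] += 1
        time.sleep(interval)
    return counts

stop_flag = [False]

def busy_loop():
    # Spin without calling anything, so the sampled top frame is busy_loop.
    while not stop_flag[0]:
        pass

t = threading.Thread(target=busy_loop)
t.start()
counts = sample_threads()
stop_flag[0] = True
t.join()
```

After the run, `counts` shows how many samples landed in each function, which is the "where is time being spent" picture the thread profiler gives, at much lower fidelity.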
D
It tells you, still not precisely but pretty well, where you are spending your time.
F
Yeah, and so what I've seen people do is use this information to then determine where they want to put some custom instrumentation in place, so that they can get some additional visibility into their application.
D
Yeah, I mean, the high-level scenario is: say you have an existing application that you don't want to change, or maybe you do want to change it, but you don't know how. With profiling, there are two general high-level investigations. One, you want to decrease your response times, so you want to know where your traces are spending time; and two, you want to save money on your infrastructure.
D
F
Yeah, I don't know; I thought I remembered hearing something about App Insights doing some periodic profiling, but I'm not familiar with what they were doing.
D
F
Parts of that, but yeah, I remember hearing that they do some sort of periodic profiling.
D
H
D
All right, anyway, cheers guys. I'll chat to David about this and I'll share what I find out, in case you're interested.
F
F
I was mostly asking the question about CPU time versus wall-clock time because I wasn't quite sure how we'd get from the sampling that we're doing in the profiling to mapping that into CPU time.
D
So the thing is, yes, I was thinking about it, and from the kernel perspective, for ETW it's easier. Say you have eight cores: you may have thousands of threads, but say your clock tick came, your one millisecond expired, and you want to do sampling, right?
D
You don't want to sample a hundred threads, because why would you even do that? You want to sample the eight that are running on the CPU, and the OS knows which ones are running on the CPU. So now that I've thought about it, actually while we had this discussion, it makes sense that ETW samples are always CPU-bound. If you are in the ETW world, by default you do only CPU sampling; you don't know anything about wall clock. You need additional information to actually do...
D
...wall-clock profiling, and that is the context-switch information, because without it you will essentially never get samples of threads that are not actually making forward progress. Why would you? I don't actually know; I'm just reasoning about it, thinking out loud right now. But if we are in the profiling API, it's the other way around: we have a list of all the threads that the CLR knows about, but we don't know which ones are making progress.
E
A
One thing that shouldn't slow things down is that at that level the CLR can help a lot, I think, because there is a bunch of events about these things. You are not going to have the ReadyThread event, where the thread is ready, but at least you are going to have the blocks and waits. You know, I think the concurrent collections and tasks all have a bunch of events for this stuff, if I recall correctly.