From YouTube: 2020-10-21 meeting
Description
No description was provided for this meeting.
A
So, first: this week I've been involved with this Splunk conference and was a bit busy, so I almost couldn't do much. But yesterday I created a PR to upstream the changes from Datadog up to the commit just before the reorg of folders that Zach did. We can have that as a cut point, because we already have a small piece of code separate from that, and then we can follow the same pattern that is there on the Datadog side.

A
In this regard, Zach mentioned on Gitter about having a discussion to find the best way to do this kind of upstreaming. What I did yesterday was simply pick the SHA just before the reorg commit and do the merge from that point. I think this works reasonably well while the two trees are as close as they are right now.
D
I think I don't mind, but one thing to consider is: do we need to solve a problem before it exists? If any of us would like to change something in a way that breaks this very close compatibility, we can just say that doing so requires discussion with the group. Until we actually need it, why change it? I mean, in the long term, monikers like setting names will need to be changed to OpenTelemetry ones close to GA, or the actual structure of the repo — I don't know — unless we need it.
E
The main thing I was concerned about was that we at Datadog are iterating on all this pretty quickly, and it had been a while since we'd upstreamed some changes. So this big merge Paulo was doing in effect brought things up to date. I'm just wondering if we want to do that in smaller batches, or on a certain cadence, to bring up changes.
A
Yeah, I think ideally, now that we are really close, we should do these merges very frequently — it will be much better. This one got pretty big because we took more than two months to do it. So perhaps I can create a task for myself to do it every Monday or so, and we'll stay much closer.

D
I think also — right now, and as things change in the future we will adjust — but right now essentially most of the code flow is from Datadog into OpenTelemetry. So if and when others start contributing, of course we will discuss and figure it out, but for now, as long as it's 99 percent one way, it's a no-brainer: whatever Datadog structures we need, we take, because there is actually no argument otherwise. Of course, once other people choose to make significant contributions, we will need to immediately adjust as appropriate, but until then I think we just copy Datadog. Okay.
A
Okay! I didn't put up a PR for the dynamic loading yet, based on Greg's prototype. I'm going to do that after I merge the reorg, because then, instead of moving it from the location I'm calling it from right now, I can already put it in the place that has the new folder structure.

D
No, no dependency. I wanted to keep this sort of decoupled early on, but eventually, yes, you're right. It's just that for debugging and everything it's very convenient to have just the console and not have to think about another dependency and so on. I was actually thinking about this just the other day. So here's what I wanted to suggest — and let me know — because I didn't want to take a dependency on other libraries right away.
D
What I thought of doing is slightly change the logging class that I have. Essentially I have no more than four methods — log exception, log message, log something else, whatever. Right now they're just methods; I would make them properties that take function pointers, and by default they would be initialized to my thing that just dumps to the console. That way the library can stay independent in terms of logging, and then in our early prototypes, when we integrate it into the rest, we can just set those function pointers to whatever the standard logger in the tracer is doing, and later on we can refactor.
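(For illustration, a minimal sketch of the logging indirection described above — the names InstrumentationLogger and TracerLog are invented, not the actual prototype code.)

```csharp
using System;

// Sketch: the library keeps delegate-valued properties that default to console
// output, so it carries no logging dependency of its own. The host (e.g. the
// tracer) can later repoint them at its real logger.
public static class InstrumentationLogger
{
    // Defaults just dump to the console — quickest to read while prototyping.
    public static Action<string> LogMessage { get; set; } =
        msg => Console.WriteLine($"[loader] {msg}");

    public static Action<string, Exception> LogException { get; set; } =
        (msg, ex) => Console.WriteLine($"[loader] {msg}: {ex}");
}

// When integrating with the rest of the tracer (TracerLog is hypothetical):
//   InstrumentationLogger.LogMessage   = s      => TracerLog.Info(s);
//   InstrumentationLogger.LogException = (s, e) => TracerLog.Error(e, s);
```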
A
Yeah, so you still want to keep, for now — at least initially — one level of indirection, instead of going straight to the one used on the Datadog side.

D
If that's okay with you — because right now I'm kind of using it to, let's say, put things to the console in whatever way is quickest to read, without consideration of files and rotation and all of those things. So I think this level of indirection would let us plug in easily for end-to-end tests, at least early on, if we can have that.

A
If you're okay with it — it's not important, sorry. No, I'm just thinking. Perhaps we start by keeping the very simple indirection level, just as you said, so in the beginning it's very easy to experiment and try, but then, not too far down the road, we make the switch. Okay, sounds good. I see that you are interested in proposing an alternative idea, so I would like to hear it.
D
So I was discussing with Lucas earlier today, and with other people on the team earlier in general, stepping back from what we already did, right? I'm just trying to revisit the risks and the upcoming work, where it goes. Many risks are tied to the whole library-loading thing, and they're sort of hidden from us until we address it. That part will be resolved — I'm confident we're on the right track, it's just a matter of time and putting in some work on the library loading — but until we've actually done it, it hides from us all the risks related to performance, Activities and whatnot, which we don't control.

And if you look at the early prototype of my code that is currently pushed, it does the shimming, right. The shimming is not only necessary for the library loading and the actual reflection — we also have to do the shimming for things like the ID and so on, to make sure all the different versions of DiagnosticSource work; these are the things we discussed the other day. And then there is performance.
D
Can we, at least for a few minutes, re-discuss this other thing that we briefly visited and then dismissed? What if we don't use Activities as the underlying exchange type? What if we use some kind of internal implementation — either a modified Span as-is or a new span type, it doesn't matter — as the exchange type? It would be optimized for our purposes and it would have the ID, a bunch of tags, whatever we need. The way it might work, as a suggestion I would like to explore together, is that we use it specifically as an internal-only exchange type.

It would go like this. First, we don't support Activities before version five: if somebody emits Activities with older versions of DiagnosticSource, we don't listen to them. We do listen to DiagnosticSource events, but that's a separate conversation. For Activities that are essentially OpenTelemetry-compliant — meaning starting with version five, not .NET 5 but the DiagnosticSource library version 5 — we would at runtime check whether DiagnosticSource version 5 or later is loaded. So there is still some reflection necessary. If it is loaded, we would instantiate our own ActivityListener, listen to all Activities that the customer creates, and convert them into this internal exchange type.
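(A rough sketch of the listener described above, using the ActivityListener API that ships with DiagnosticSource 5.0; InternalSpan and its fields are invented placeholders for the internal exchange type, not the actual design.)

```csharp
using System.Collections.Generic;
using System.Diagnostics;

// Hypothetical stand-in for the internal exchange type discussed above.
public class InternalSpan
{
    public string TraceId;
    public string SpanId;
    public string Name;
    public Dictionary<string, object> Tags = new Dictionary<string, object>();
}

public static class ActivityBridge
{
    public static void Start()
    {
        var listener = new ActivityListener
        {
            // Listen to every ActivitySource the customer creates.
            ShouldListenTo = _ => true,
            Sample = (ref ActivityCreationOptions<ActivityContext> options) =>
                ActivitySamplingResult.AllDataAndRecorded,
            ActivityStopped = activity =>
            {
                // Convert the finished Activity into the internal exchange type.
                var span = new InternalSpan
                {
                    TraceId = activity.TraceId.ToString(),
                    SpanId = activity.SpanId.ToString(),
                    Name = activity.DisplayName,
                };
                foreach (var tag in activity.TagObjects)
                    span.Tags[tag.Key] = tag.Value;
                // ...hand `span` to the tracer pipeline here.
            }
        };
        ActivitySource.AddActivityListener(listener);
    }
}
```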
D
If DiagnosticSource version 5 or later is not loaded, we are not listening to Activities; the customer would need to do something on their own — essentially it's not supported. I think it would still be OpenTelemetry-compliant; I don't know whether it's good for all customers, but it would be compliant. The only ones who would have a problem are Application Insights, because they were using Activities a lot before — but if that really becomes a problem, a bytecode-level integration can be created to do something about it. So let's not talk about Application Insights for now. Again, integrations in that scenario — the bytecode instrumentation that we use for libraries — would emit the internal exchange-type span directly.

That means any library that chooses to emit telemetry using either OpenTelemetry or Activity is automatically covered, and it requires a slightly simplified module-loading logic that still listens to loading and still inspects what's going on, because we still need to listen to DiagnosticSource events from older DiagnosticSource versions. So if an older DiagnosticSource version is loaded, we still need to dynamically discover it and do all the right things, but only around DiagnosticSource types, not around Activity types — which means very similar logic around loading but simpler logic around reflection. The advantage of this overall approach is that I think we can move a little faster early on, because there is less of this whole reflection thing, and we break a dependency: this exchange type that is central to the entire thing, we again have full control over it. I think that's a very significant advantage. The disadvantage is, I think, very little initially, but there is a long-term one. From the perspective of bytecode-level integrations there is no big disadvantage. For libraries that just emit Activities, though —
D
— we would create a span as well, so essentially two objects instead of one. Now, in the next year or so this is a relatively rare case; I don't expect many libraries to do this very soon. Most libraries are using DiagnosticSource, which is different — this problem doesn't apply there; for DiagnosticSource I'll get to it in a second. But in some longer-term future, when a lot of libraries use DiagnosticSource — sorry, use Activities — a potential mitigation strategy is that by then the subsequent version of .NET might have come around, by the time this really takes off and becomes frequent, and Microsoft might consider making changes to the Activity class in a subsequent major version that could help us. For example, it could make it — what is it called — such that you can derive from Activity, and then our internal exchange type could derive from it, and so on.

A
I think Activity.Current is part of the early versions, but I'm not 100% sure.

D
An integration — a bytecode instrumentation — creates this internal exchange type. Then customer code doesn't see it, and because they're already using Activities they rely on the fact that we are listening to those.

A
Yeah — and just for the record, Activity.Current has been present since 4.0.2.1.
D
It's not just Current, it's also Parent — at least, the entire chain of activities would be broken. Say you have a typical scenario: you have ASP.NET, so the very root thing would be DiagnosticSource. We listen to DiagnosticSource — that's an event, not an Activity — so we create our internal representation, and then the customer, or some library that chooses to emit this way, wants to emit something through Activities: they have no parent there, they can't reason about the ID.

And look, we had a conversation about this earlier — I had forgotten about it. Unless there are workarounds, then, yeah, this is why we set out on this approach. Thank you. Okay.

There is still one thing — sorry to go back to this. We have to use Activities eventually, but in order to move sooner and faster — I'm just seeing how much work it's taking, and there are all sorts of projects going on. Right now there are still very few libraries that actually emit Activities rather than DiagnosticSource events; that will change over time, but right now there are virtually none. Which important libraries actually emit Activities today? I mean, 5 is only coming.
D
So then maybe we can modify the existing plan. Maybe the long-term strategy stays what we always said, but rather than wrapping everything into the wrappers, we finish the library-loading logic as discussed, with the same plan, and then, when we actually implement and test the reflection wrappers, we focus on DiagnosticSource. So first: the plan of actually starting to use Activities as the exchange type we postpone — make no change for now. Make the change around library loading, complete it, ship it, and then change the DiagnosticSource listening that already exists in the tracer — this is how we listen to ASP.NET, for example — to use this reflection-wrapper functionality. That will make the versioning problem go away, and it will also solve some issues: we have some customers complaining; they have all sorts of weird setups where they build against one runtime and run against another, so we think they are on the full .NET Framework but they built against .NET Core, and so on — at the end of the day.
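(For reference, the DiagnosticSource-side listening mentioned here usually looks roughly like the sketch below — the observer class names are illustrative, not the tracer's actual code.)

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

// Watch for DiagnosticListeners appearing in the process and hook the ones we
// care about (ASP.NET Core publishes under the name "Microsoft.AspNetCore").
public class ListenerObserver : IObserver<DiagnosticListener>
{
    public void OnNext(DiagnosticListener listener)
    {
        if (listener.Name == "Microsoft.AspNetCore")
            listener.Subscribe(new EventObserver());
    }
    public void OnCompleted() { }
    public void OnError(Exception error) { }
}

public class EventObserver : IObserver<KeyValuePair<string, object>>
{
    public void OnNext(KeyValuePair<string, object> evt)
    {
        // e.g. "Microsoft.AspNetCore.Hosting.HttpRequestIn.Start" / ".Stop" —
        // this is where a tracer would open/close its internal span.
        Console.WriteLine($"DiagnosticSource event: {evt.Key}");
    }
    public void OnCompleted() { }
    public void OnError(Exception error) { }
}

// Wiring it up once at startup:
//   DiagnosticListener.AllListeners.Subscribe(new ListenerObserver());
```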
D
By doing this, we would gain an earlier win by having this library-loading logic in production and so on, focused just on DiagnosticSource, and then move on; we can revisit actual Activities as a subsequent, independent step later.

A
I think I have to think more about that, perhaps with some concrete examples. You know — okay.

D
Maybe we can split it even further, if that makes this an easier decision. Whether we need the whole Activity thing for GA — that may be a discussion — but splitting the work into these steps would, I think, be of value anyway, if we focus on continuing to work on the library loading. But then, when we talk about reflection wrappers, we just focus on DiagnosticSource, get that done, get this part shipped and into production, and in parallel we can think about whether or not the whole Activity thing needs to be completed for GA. If yes, we do it; if not, we'd still do it, but later. At least we prioritize the work that is specifically for DiagnosticSource versioning, because we definitely need that — without it we keep having the versioning problems.
A
Yeah, but for instance, if we have an application that is partly instrumented with OpenTelemetry .NET, then we are not going to be able to support it. There is an escape valve, but the escape valve would be to tell people to use OpenTracing instead — and if they have OpenTelemetry and we can't support that, I think it's going to be really strange to say that we're GA.

Perhaps we do the path that you are talking about first, as a way for us to get more footing and find the issues, but I think for GA we do need to support Activity.

D
That would already be a lot of value, so I think you're right about this: we're OpenTelemetry, but we don't support the OpenTelemetry API — that's strange. But if we can at least first ship that part for DiagnosticSource — all that work needs to be done anyway. It's not like we are doing extra work; it's just splitting it into slightly different shipping steps than we initially were talking about.
A
Yeah, I was thinking that there is part of the work that perhaps we could do in chunks, with some limitations, that would at least allow us to start using the code. It would be things like the following.

Let's suppose we don't handle, at first, the case where we fail to load. From a .NET Core app we should always succeed for the existing versions — it's part of the BCL. We may not get a good version, and that's why we need the wrapper, so we can implement the missing functionality; but the cases where we are not going to be able to load at all should typically be on .NET — I mean the Framework. So, with the Framework set aside, we can at least start experimenting with .NET Core version 2.1, and so on — but then we would do the static load?

At the beginning, not static loading — we do dynamic loading, but of something that is part of the runtime, and being part of the runtime, if we do the loading properly we should be able to load it.
A
We do the vendoring, so that is in itself quite different from having the delegates go through some loaded version that provides at least some of the functionality.

I have the code working, but the thing is that we need to put it together and test all of that. So what I'm saying is: if we have the wrapper working, we can have two paths of attack. We can start to deploy using the path that uses the wrapper while we do the parallel work on the version that uses the vendored code.

D
Okay, so in the case where we detect that we would otherwise fall back to the vendored version — what would we do then?

A
For now we can start with "not supported" while we finish the tests for all of that, because what I have for that is basically the following: I have one implementation running, but we need to plug all of that in and make it consistent, right.
D
Here's the problem — and it may not be a problem, so please correct me if I'm wrong: every major update here immediately goes into production for both you guys and us, right? If some intermediate version does not support a scenario that is supported today, then we cannot go that route — and today we do support instrumenting full-framework applications that don't reference DiagnosticSource.

But then there's no point having steps. If we don't put any prototype in production, then we should just take the fastest implementation route without intermediate steps. The intermediate steps are all about shipping some intermediate state into useful usage by customers. Yeah, but —
A
Usually what I have in some cases — of course I can't scale this to all customers — is that there are some customers I'm in contact with, that I've worked with to some level, and they have staging or dev environments. When we are trying to deploy new features, we go to these people and, besides our internal tests, we kind of have that.

D
I understand. The thing is, for us it may not work, because we don't currently have PMs with time scheduled to drive this. What you describe makes total sense, but we won't be able to do it, which means we would immediately diverge in that case. So I'm trying to find some way to split the work into chunks where each chunk is actually shipped to production and either moves the existing functionality towards a better architecture or improves some scenario, rather than reducing functionality — because otherwise we have immediately created a fork.
G
I just want to comment real quick that I do have some time to devote towards getting customers involved and testing an early version or a beta, but I don't have a lot of bandwidth for that, and I don't want to put myself on the critical path where we're blocking forward progress waiting for feedback on something, either.

D
Yes. Right now, when we ship something, it's a no-brainer for us to immediately contribute it to the community. The second there is a fork, any code change becomes: okay, we do this for our business — should we or should we not invest the time in duplicating the work in the community version? And so early on, with so few people involved, I don't think we, as an OTel group, are in a good position to have that conversation, because business constraints would often win — almost always. So I would be worried about creating a fork that requires duplicating work.
C
Yeah. So instead of thinking about it as a literal fork in the code, what if we thought of it as leveraging some sort of configuration to turn behavior on and off? We could selectively have these things off by default, and then, say there is a customer out there and there is time to work with that customer to try something out, we work with them and just turn it on.
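(A minimal sketch of the off-by-default toggle being suggested — the environment variable name is invented purely for illustration, not an actual setting of either tracer.)

```csharp
using System;

// Hypothetical feature flag: the experimental code path stays off unless a
// customer we are working with explicitly opts in.
public static class ExperimentalFeatures
{
    public static bool ActivityExchangeEnabled { get; } =
        string.Equals(
            Environment.GetEnvironmentVariable("OTEL_DOTNET_EXPERIMENTAL_ACTIVITY_EXCHANGE"),
            "true",
            StringComparison.OrdinalIgnoreCase);
}

// Call sites guard the new behavior:
//   if (ExperimentalFeatures.ActivityExchangeEnabled) { /* new path */ }
//   else                                              { /* existing behavior */ }
```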
D
That you can do, for sure — okay, so that's not a problem in that sense. But then what are the steps? There are two things that we touched on, and we can do both or just one of them. One is, like I was saying: do all the library-loading logic and reflection first, only for DiagnosticSource — that would be one step, and the next step would be to add reflection capabilities around Activities. And you were saying: let's only do the assembly-loading logic without the vendoring stuff, right?

And would you want to — okay. Once we have library loading without the vendoring, what is the next step? First make it configurable or whatever, then the next step is to add reflection around DiagnosticSource — okay, then we do that. Sorry — once that's done, then we have two possibilities: either do all the rest of the library loading, or do reflection around Activities. What, in your mind, would be the next step?

Okay, so for us — I mean, as this group we can do either — for us the preference is clearly towards DiagnosticSource and library loading. The reason is: today, the scenario where we do all the necessary things around library loading and DiagnosticSource, but not Activities, solves actual customer problems without regressing existing functionality. Once it's done, all our customers can get it, and we can start getting feedback, performance numbers, all the things that get wrinkled out from actually using it in production. Until library loading includes the vendoring part, this does not solve any customer problems for us, so it's all a theoretical exercise.
A
Actually, another thing just came to my mind: for us to be able to deploy to customers we need to support a different kind of exporting. We support other formats — it doesn't need to be anything specific to us; we already consume Jaeger, Zipkin, OTLP — but we need a different format, so that implies other work also coming first. So perhaps staying on the path that we are on is the thing we have to do. It's possible, if someone has the cycles, to work at the same time on the question of how we are going to plug in the exporters, for instance, to support these different formats — that is the part we eventually need to get to — but as far as I know no one has had a chance to work on that yet.

D
But for this — I'm trying to pick up on your idea of focusing on .NET Core for now. If we have this rare case where .NET Core does not —
A
Because one thing that we can do for the Framework — since we have the NuGet packages from the RC version — is kind of support that: if they have a .NET Framework app that already references DiagnosticSource, we are able to use the reflection the same way; the load is going to succeed.

D
That's what I did in my early working prototype — it was already working, but then there was a problem. No, no — it was a potential problem, but I think it's a real one: if you don't have file-system access, or you don't have permissions to write where your application is, then you are out. But if you put libraries outside of the application's default probing path and load them into their own load context — for some investigation prototype that might work, but for actual production you want the library to end up in the default assembly load context, and for that it needs to be on the default probing path. Once you do LoadFrom, it ends up in a different context.

B
Yeah, but that's a reference. You could add an assembly reference to a DLL with the profiler APIs, but you can't do anything to change the probing path, I think. And to get it to load into the default assembly load context, it has to be loaded through the normal probing-path machinery — there's no way around it. Yeah.
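(Not the tracer's actual code — just a small check, assuming .NET Core 3.0 or later, to see which load context a given assembly actually resolved into, which is the crux of the probing-path point above.)

```csharp
using System;
using System.Reflection;
using System.Runtime.Loader;

class LoadContextCheck
{
    static void Main(string[] args)
    {
        // args[0]: an assembly simple name, e.g. "System.Diagnostics.DiagnosticSource".
        Assembly asm = Assembly.Load(args[0]);
        AssemblyLoadContext ctx = AssemblyLoadContext.GetLoadContext(asm);

        Console.WriteLine($"{asm.GetName().Name} {asm.GetName().Version}");
        Console.WriteLine($"Location:     {asm.Location}");
        Console.WriteLine($"Load context: {ctx?.Name ?? "<unnamed>"} " +
                          $"(default: {ctx == AssemblyLoadContext.Default})");
    }
}
```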
D
Let's see — how much work is remaining for the whole vendoring thing?

A
That is basically the work of testing all of this — writing unit tests for a bunch of this stuff. Basically, I got this stuff working with vendoring; I can pass the unit tests that were adapted from dotnet, but I think we need our own tests to ensure that the wrapping object works in all the scenarios — that's what is lacking tests. Think of it like this: I have good coverage for the vendored code, the code that we pulled in, because it's the same tests that we have in dotnet, in the dotnet runtime. But I don't have tests to cover our stub when it's being used.

D
Oh, yes — that for sure, but that only makes sense once the stub is finished. And the stub — that's not a vendoring issue, because whether the stub falls back to a vendored version or to an actually loaded version, whatever the stub falls back to, the test is the same. We have to do those tests anyway, so that's not additional — it doesn't make vendoring more expensive than any other option.
A
No, you're actually right. Perhaps we can invert the path that I suggested. You are right, because we need to write and test the wrappers — the tests for all the wrapping of the loaded version, whatever the version is — but potentially we could do the vendoring and use that code, so we can progress on that side, having the code use the stub. But then that assumes there is no DiagnosticSource already there — we end up in the same problem that Jim mentioned, because unless it's behind a feature flag like we said, if there is some customer that already has a reference or something, we are not going to be able to see anything of that.

D
Why? If we do the vendoring, then the logic essentially picks up whatever is loaded, and if nothing is loaded it uses the vendored version.

A
Yes, but that's what I'm saying: let's say right now, this week, we don't complete the work on the delegates and wrappers to support multiple versions, and we're just using the vendored code. If we just use the vendored code and the application has something... oh.
D
Oh — so let me clarify; I understand, sorry. What happens in my head in terms of getting to the first working prototype — and of course there are a million edge cases that we need to test and validate — is that for the library-loading logic, the delta between what we already have and where we need to get to is days of work. Not hardening it, but getting it to a first version that can be tested end to end is days — way less work than the reflection wrappers, which are the tedious job of creating a wrap around everything. And I don't know how much the vendoring is — that's up to you — but assuming the vendoring is magically done, and I can simply reflect over my own assembly and get a Type object that represents the type named Activity, and DiagnosticSource, and a few other types I'm using — assuming that is already given by whatever you're doing — the library-loading logic is a few days.
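(A rough sketch of the "pick up whatever is loaded, otherwise fall back" idea — VendoredActivityType below is a stand-in for the vendored copy's type; the real project would point at its own vendored namespace instead.)

```csharp
using System;

static class ActivityTypeResolver
{
    // Stand-in for typeof(<vendored Activity>) — hypothetical placeholder.
    public static Type VendoredActivityType { get; set; } = typeof(object);

    public static Type ResolveActivityType()
    {
        // The assembly-qualified lookup only succeeds if some version of
        // System.Diagnostics.DiagnosticSource can be resolved in this app.
        Type loaded = Type.GetType(
            "System.Diagnostics.Activity, System.Diagnostics.DiagnosticSource",
            throwOnError: false);

        return loaded ?? VendoredActivityType;
    }
}
```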
A
One thing we can do in that case is separate it, because the Activity stub only needs to use the reflection wrappers if we go down the path where we loaded some version. So perhaps we go that path right now: I focus on getting the stubs working with all of that — out of that, I mean, out of memory — and at first it's not going to be usable to distribute to anyone, but at least we have our internal goal achieved.

D
Okay, that makes sense. We should book some time and talk, look at code together to make it concrete — maybe outside of this meeting, and anybody who would like to come should come, but we can just do it, the two of us; I don't think we actually need anybody else unless they are interested.

A
Yeah, Friday or Monday, if that's good for everybody, because today I'm involved with the conference and everything. Sounds good — either Friday or Monday works for me.
D
Cool — that took all the time that we had, but thank you for clarifying this. And Chris, I completely forgot about the thing you mentioned.

Yeah, I was really thinking about it, because Microsoft is doing all the right things from their perspective about being super conservative with this — like the optimizations that I asked for at the end of the cycle: wearing the Microsoft hat, I completely agree that they were rejected, but wearing purely our hat it's like, oh, come on, guys.

A
I think I at least have to get back to some other stuff. I think we've already run the hour — it's about to complete the hour.

C
Quick question, Paulo: you've got that one PR out there where we're trying to pull back in the changes from Datadog — it looked like it was hung up on the... what is it, the agreements that everybody has to sign?
D
Datadog should already be automatically —

A
— signed, yes. I think the problem is with the bot. Last week I had the same problem on the Collector repo, and at the time somebody said, oh, it's good now, just close and reopen them. I tried closing and reopening; it didn't work this time, so I'll have to ping them.

One thing that I was going to mention about profiling — I read the doc. Thank you very much for the doc.

D
By the way, just so you know: when you open it, it's probably in read-only mode. You can actually try it right now — if you open it up, it's read-only, and then you can go to Edit. I know, sometimes Microsoft does this weirdly. You already commented, yeah — so now you can do reviewing, and now you can comment. And you can — it's still read-only — click this, and then you can actually change the document; here you can still change it and it will do track changes, whereas if you go to Edit it's no longer able to create track changes. So this is the way to actually use it, and I don't know why it can't be the default. Microsoft's editor is so much better than Google's, but the usability of which mode you're in kind of sucks.
A
You mentioned profiling and CLR stuff. David, is Vance Morrison still around at Microsoft doing this stuff, or has he retired?

B
Gone. My team owns half of what he used to own and another dev owns the other half, so Vance is gone. He's working on — he has this project about, what was it, making carbon dioxide or something; I don't know, I'd have to look it up. It's some environmental project he's working on now — he went and created his own company, yeah.

D
I think when Windows didn't want to emit ETW events correctly, he would just look at it really thoroughly and say, "are you sure?" — and then it would just emit whatever needed to be emitted.

Yeah, anyway — actually, David, I was going to talk to you at some point about these things. I don't know whether it's convenient now or some other time — whatever time is convenient for you.

A
Yeah, okay — I'm going to jump out. I'll try to catch up later; there is the recording anyway. So yeah.
D
Yeah, and I'll add to these notes so that everybody can benefit from it. Cool, thanks guys, bye.

So, what I wanted to ask about — the ETW without elevated permissions — I just have to try it. I think this is the kind of thing I only completely trust when I see it on an actual platform, so I would need to create some prototype that listens to ETW from the same process and then run it on something like Azure App Service, where the permissions are reduced, to see whether it collects what it needs to collect.

I'm more interested in learning about this whole topic. Right now — I looked at the New Relic code very briefly; you guys are doing wall-clock profiling, so you're telling how long particular methods ran, in milliseconds, and a subsequent step from that could be that we can also say it for a particular span within the trace — we could say, for this particular span...

I mean, the overall time of the span we know anyway, because we time it, but we could say how long a particular span spent in each method. Generally, from a profiling standpoint, it's also interesting to know how long it was actually doing useful work versus just waiting on things — either because it was waiting on something, or because it was waiting to be scheduled.
B
None of the ways that we've talked about so far are going to be able to give you CPU-bound time. The only way to get CPU-bound time — what PerfView does, and what perfcollect on Linux does — is to look at thread context-switch events and then build the picture of when... So on Windows and Linux you basically get thread context-switch events from the OS, and then you can create a graph of when — sorry, say that again?

Right — the last part again: on Windows, PerfView uses ETW events; there's a thread context-switch event. On Linux — I'd have to look it up, I'm less familiar with the Linux side — but I think the tool perf is used. How does it get thread context events? I'm not actually sure exactly how it gets them, but —

D
They come from Linux; they don't come from EventPipe.

B
So that's how you'd do it if you need that kind of investigation. But to back up a step: the view of the runtime team is that once you get down to the level of actually talking about blocked time and scheduled time, that's enough of an investigation that you're running custom tools and collecting custom data. When you're just trying to look at a process at a high level and ask whether there are bottlenecks —
B
— or whatever, sampling is enough. Sampling combined with — we have a tool called dotnet-counters, and it will give you things like memory usage, and I think it gives CPU. These are also just ETW events; additionally they're baked in starting with .NET Core. Originally — again, like everything, there's a complicated answer — on desktop .NET we have a thing called performance counters, and on .NET Core we have basically a re-implementation of performance counters, but over EventPipe, so that it works cross-plat.
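(For the in-proc angle, those same counters can also be consumed from inside the process with an EventListener — a minimal sketch; the provider name and payload keys follow the built-in "System.Runtime" counters.)

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics.Tracing;

// Minimal in-process consumer of the "System.Runtime" counters (the same data
// dotnet-counters shows), delivered through EventSource/EventPipe.
public sealed class RuntimeCounterListener : EventListener
{
    protected override void OnEventSourceCreated(EventSource source)
    {
        if (source.Name == "System.Runtime")
        {
            // Ask for counter payloads once per second.
            EnableEvents(source, EventLevel.Informational, EventKeywords.All,
                new Dictionary<string, string> { ["EventCounterIntervalSec"] = "1" });
        }
    }

    protected override void OnEventWritten(EventWrittenEventArgs eventData)
    {
        if (eventData.EventName != "EventCounters" || eventData.Payload == null)
            return;

        // Each payload item describes one counter sample.
        foreach (var item in eventData.Payload)
        {
            if (item is IDictionary<string, object> counter &&
                counter.TryGetValue("Name", out var name))
            {
                counter.TryGetValue("Mean", out var mean);       // gauges
                counter.TryGetValue("Increment", out var incr);  // rates
                Console.WriteLine($"{name}: mean={mean} increment={incr}");
            }
        }
    }
}

// Usage: keep an instance alive for the lifetime of the process, e.g.
//   var counters = new RuntimeCounterListener();
```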
B
Yeah, I know about that, yeah. So basically our view as the runtime team is that stack sampling plus dotnet-counters is enough to give you a high-level view, and that thread context-switch events are pretty specialized — you shouldn't need them for general investigations. It's a pretty targeted investigation once you get to the point where you actually want to look at when threads are blocked and when they're scheduled.

D
So let's check — to understand this from a customer perspective: I'm envisioning some view, like all the APM products have, where you have the trace with a horizontal line for each span, and, in the context of additional profiling data, some sort of information — like a flame graph embedded there — that says how much time each span spent in each method. If we follow your recommendation, then you know the time was spent in a particular method, but you don't know whether it was actually doing anything, or whether it was waiting on a monitor, or whether it was simply not scheduled, right?
B
Kind of — yes and no. In managed code, what you end up doing is building that muscle memory yourself. When you look at a stack, code shouldn't be sitting inside a random jitted method if it's not doing anything — almost all the time; I'm trying to think whether it ever would be. If you're waiting on an event — say you have a ManualResetEvent or whatever and you sleep on it — you're actually going to end up in some JIT helper somewhere, something like WaitOne or whatever it's called. So you can look at that stack and see that the thread is sleeping; and if the leaf frame is some random jitted code, it's doing work. So there are roughly three states a thread can be in: it can be sleeping, in which case you'll be in the JIT helper that says sleep; there's waiting on a monitor — right, or waiting for an event to fire; and the other —

D
Got it. So if I understand correctly from what I've looked at so far, with the whole events stuff — and actually with ICorProfiler API users as well — you suspend all threads, and whether a thread was actually running or not, you still sample it, right? So this approach would help us understand when things were waiting on a monitor, but it wouldn't tell us whether we were actually making progress at the time. One idea that I had — I haven't yet investigated whether it's possible — is: if we use ICorProfiler, does the runtime offer some kind of list of all the threads, with some metadata? Maybe I can —
B
If you're talking about, at the machine-instruction level, whether or not the thread is scheduled — like, there is work to be done, you're not waiting on a monitor, do you just have the quantum on the processor — we have no idea about that: which threads are actively scheduled by the operating system and which are running. At that point it's all about the native threads; there is no managed involvement. We JIT code and then start executing the native code, but at that point it's up to the operating system to schedule it, so we don't have any concept of whether it's running or not. At the runtime level we do have threads — we have the concept of a thread being active or not. The runtime has a concept of a thread, and the OS has a concept of a thread; runtime threads can start and die, but those runtime threads may be scheduled onto any number of OS threads and may switch around, etc.

D
But how does it work with stacks, then?

B
It will just run on one thread, so typically there'll be a one-to-one mapping; the async stuff is where it starts to get weird. When you have an async app and it does async stuff, the continuations can get scheduled on any number of thread-pool threads.

D
The logical thread — that I understand, that's all clear to me, but that's the logical thing. No worries, yeah — that makes sense.
C
Hey Greg, a quick question: while you're thinking about this blocking time and waiting time and all of that, I'm just trying to understand the driver for it, because if we look at the OpenTelemetry SDK itself, I'm pretty sure it isn't dealing with any of those concepts at all — it really just deals with wall-clock time as far as how long spans take. I'm wondering if that might lead to some confusion for people.

D
I think it's a good question. I think the answer is more around presentation in the UI, because essentially the information we would want to communicate, in the ideal case where everything were possible and easy, is: here's the span. What a good way to display this in a UI is, I don't know — I'm sort of not thinking about that right now. I'm thinking more about whether it's feasible to collect this information with a performance overhead that allows things to run in production for a prolonged period; and if it is, what a high-level approach to doing it would be; and once that's clear, I'd make some notes, and we — here, or Datadog as a company — can decide whether it's something we want to do or not.
D
Datadog has a Java profiler — you can go and play around with it; I played around with it to some limited degree — and it does all sorts of fancy things. We don't have anything for .NET that isn't already in the open. You know, if I could snap my fingers and implement it in one day, then we would have it, but that's not the way it works.

C
Yeah. New Relic has a similar concept if you dig through the code. It's not exactly blocking time or waiting time, but you'll see there's this concept of inclusive time versus exclusive time. Inclusive time you can think of as just the duration of each span, but exclusive time tries to take parent and child into consideration, and under certain circumstances it subtracts the exclusive time of a child from the parent.
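(A tiny sketch of that inclusive/exclusive arithmetic — not New Relic's actual code, and it ignores overlapping or async children; C# 9 syntax.)

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Inclusive time = the span's own wall-clock duration.
// Exclusive time = inclusive time minus the time covered by direct children,
//                  i.e. an approximation of time spent "in" the span itself.
record SpanTiming(string Name, TimeSpan Inclusive, List<SpanTiming> Children)
{
    public TimeSpan Exclusive =>
        Inclusive - TimeSpan.FromTicks(Children.Sum(c => c.Inclusive.Ticks));
}

class Demo
{
    static void Main()
    {
        var db   = new SpanTiming("db.query",     TimeSpan.FromMilliseconds(40),  new());
        var http = new SpanTiming("http.request", TimeSpan.FromMilliseconds(120),
                                  new List<SpanTiming> { db });

        // http.request: inclusive = 120 ms, exclusive = 80 ms
        Console.WriteLine($"{http.Name}: inclusive={http.Inclusive.TotalMilliseconds} ms, " +
                          $"exclusive={http.Exclusive.TotalMilliseconds} ms");
    }
}
```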
C
...calculation — that's something that we've had, but —

D
Yeah — I would feel happier if, in case we conclude that doing CPU profiling for .NET is not a good idea, it's because "here's a good reason" rather than "we got confused and didn't properly investigate." That's why I'm asking all these questions. So what you're saying, David, is —

B
On Windows you can get what you need from ETW: there are kernel providers, and you can request context switches from the OS, and then you can piece it together — you know the threads, and you know when a thread is scheduled on a CPU and when it's off, so ETW can do it. On Linux we do it another way that I'd have to look up — I don't know the exact technical details, but it can be done. That perfcollect script I sent you last week — you just have to look at that and see how it does it.
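(For reference, the kind of kernel ETW session being described can be opened with the Microsoft.Diagnostics.Tracing.TraceEvent library, roughly as sketched below — Windows-only, requires elevation, and exactly the "custom, targeted" tooling rather than something an agent would run in-proc. Treat the details as an approximation.)

```csharp
using System;
using Microsoft.Diagnostics.Tracing.Parsers;
using Microsoft.Diagnostics.Tracing.Parsers.Kernel;
using Microsoft.Diagnostics.Tracing.Session;

class ContextSwitchTrace
{
    static void Main()
    {
        // Kernel ETW session delivering thread context-switch events.
        using (var session = new TraceEventSession("ContextSwitchDemo"))
        {
            session.EnableKernelProvider(KernelTraceEventParser.Keywords.ContextSwitch);

            session.Source.Kernel.ThreadCSwitch += (CSwitchTraceData data) =>
            {
                Console.WriteLine(
                    $"{data.TimeStampRelativeMSec,10:F3} ms  cpu {data.ProcessorNumber}: " +
                    $"thread {data.OldThreadID} -> {data.NewThreadID}");
            };

            Console.CancelKeyPress += (_, __) => session.Stop();
            session.Source.Process(); // blocks and pumps events until the session stops
        }
    }
}
```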
D
I looked at it already, and it's actually a really cool piece of scripting, but so far my conclusion was that it's too hard. So if we wanted to go the events route rather than the ICorProfiler route, the consequence would be that we support all versions on Windows through ETW, and on Linux only starting from 3.0, because EventPipe is good enough there.

B
Sort of — but generally speaking, the place where I expect you to end up is: if you're an APM, I think wall-clock time is good enough for you. And once you get to the point where you need CPU-level detail — was the thread active or not — I think it would be fine to say, okay, at that point you need to go out and get PerfView, or go get Intel's VTune or whatever, manually collect a trace, and dig in and actually do the work yourself. That's where I expect you to end up once you need context switches, once you need that level of granularity.
D
Right. So say I would like to know how I can reduce my cost — I'm an APM customer, some company, Coca-Cola, somebody's customer, I don't know — and I have a service, and the fleet running that service costs me a million bucks a year, and I would like to reduce it to 800,000 bucks a year. What are the bottlenecks? Oh, this span seems to take a lot of time — is it because it's actually doing busy work, or is it just waiting on something?

B
Internal to the runtime, we do 90% of our investigations — I'm roughly guessing here — just using sampling. The times when we actually need to collect CPU scheduling events are really rare; it's pretty much when you're blocked on network I/O or something, where you really need to know why the threads aren't executing. PerfView is a tool that we use internally, and if you collect a PerfView trace it does stack sampling — it doesn't do anything else, nothing fancy, no CPU context switches, it just —
D
Okay. So, to help us answer this, I'll do the following: I'll ask the folks in Datadog who have no context on .NET but have all the context on Java — why, for Java, did it seem to them that CPU profiling, CPU clock, is a thing? It could be that there is something Java-specific, or that we are missing a scenario actual customers ask for, or they just did it because they could — either way we'll have an answer. I actually think this is a cool opportunity, because if the answer is that there are important customer scenarios behind it, then in the long term the runtime team and us can collaborate, maybe also adjust your understanding; and if they did it because it's Java-specific, or just because they could, then you're probably giving good guidance — certainly in that case you're giving good guidance.

Yeah, I'll ask them and figure it out, so I can tell you next week. In the meantime — I will certainly ask them — but just going back to the events for a second: the thread-switch events in ETW make sense; on Linux it sounds like EventPipe won't do it. Now, when you said — this part where you are sleeping — so when you are waiting to be scheduled, we cannot detect it through a direct event?
B
You'll get a stack where whatever the wait method is will be at the top of the stack.

D
Okay. So essentially, for EventPipe, in order to count the time a thread was waiting on something, I would need to hardcode a bunch of methods that I know are waiting methods and look at the time spent in those methods.

B
But I mean, you could get the data directly: you could hook the wait method and have it call into your managed helper — whatever helper function you want — saying "this thread is about to wait" and "this thread is done waiting." You could actually collect it yourself in managed code, and it probably wouldn't be that much work.
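(A sketch of the manual-hook idea just mentioned, assuming call sites could be redirected to a helper like this — the names are invented, purely illustrative.)

```csharp
using System;
using System.Diagnostics;
using System.Threading;

// Hypothetical helper a rewritten call site would go through instead of calling
// WaitHandle.WaitOne directly, so wait time can be attributed to the current thread.
static class WaitInstrumentation
{
    [ThreadStatic] private static long _waitedTicks;

    public static long WaitedTicksOnThisThread => _waitedTicks;

    public static bool InstrumentedWaitOne(WaitHandle handle, int millisecondsTimeout)
    {
        var sw = Stopwatch.StartNew();      // "this thread is about to wait"
        try
        {
            return handle.WaitOne(millisecondsTimeout);
        }
        finally
        {
            sw.Stop();                      // "this thread is done waiting"
            _waitedTicks += sw.ElapsedTicks;
        }
    }
}
```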
D
Correct, but that's additional perf overhead. If I can do this by simply extracting the information from the stack samples that I'm collecting already, that would be faster.

B
It would be faster — about the perf overhead you're right, but I would expect it not to matter, because you're on a path where you're about to sleep, about to wait on an event; if it takes a couple of extra cycles it probably wouldn't matter. Although, true —

D
— unless I realize I don't need to wait. I won't be able to instrument just the beginning and the end of some API — I would have to instrument something deep inside it, because I don't want to just know that I entered the waiting method; I want to know that I actually ended up waiting.
D
Okay, got it. And for CPU-bound time: on Windows I can use the context switches in ETW, and on Linux I have to do something Linux-specific.

Okay — and then, the collection of threads inside the CLR, the list of threads: it doesn't tell me anything about that stuff, right? It just lists managed threads and managed-level information.

So if I wanted to be somehow super smart — if I wanted to do something like this, not events but this — because this approach suspends all threads, right: first it suspends all the threads, waits for them to be suspended, and then it does the sampling. If I suspend only a subset of threads, then I'm dealing with all these potential deadlock problems. That's a whole other conversation that I need to really understand — maybe a separate conversation; before that I want to prepare, think about it, and make sure I use your time respectfully. But if I just wanted to somehow suspend, in a safe place, only the threads that are bound — how would I even go about that?
F
It's because the runtime — the OS event is asynchronous, so I cannot rely on those events to know, right? Right, those events —

B
— are just going to come over ETW, or on Linux, what is it, yeah, LTTng — so those will just come down the pipe, they'll be async, and you won't be able to respond to them. Okay — and they'll even have a timestamp to tell you exactly when it happened; it's just that you won't get it at the same time as it happened. Yes, yes — so yeah, I don't know of any good way to know what the CPU has scheduled from this sort of thing. So even if you could selectively park threads and not have to do the whole runtime suspension, I don't know if there is a good way strictly from the profiler interface, because you're talking about what is or isn't scheduled at the OS level.
B
Yeah, the only way I can think of is to do the rewriting, so that you know when a thread calls into a sleep method. Although, once you start thinking about that — you can P/Invoke, and once you leave the runtime, native code could sleep or wait on an event or whatever, so it would probably get trickier and there'd be a lot more things to rewrite. But —

D
Yeah, yeah, okay. And —

B
— so on Windows you can get stuff like that through various means. I don't know how practical it would be; for instance, on Windows anybody is free to open up a debugger, and you can implement your own debugger. This is theoretical — I don't suggest you go down this route — but, I'm trying to remember the name of the API... debugapi.h has a bunch of stuff in it. So you can — but don't go down this path, I'm just saying theoretically, because it's too complicated or too slow or both, and there will be ramifications — what you can do is basically write your own debugger. The Windows APIs have things where you can basically say "I want to be a debugger," and you get — it's like ICorProfiler, except for native debugging — and then you can get anything you want: module loads, threads scheduled, exceptions, first chance and second chance. But then you would be attached as a debugger, and any time anybody else wanted to attach a debugger —
B
Long story short: nobody that I know of does this currently, including the runtime team, so I assume there's a reason why. Everything we do that collects CPU events is an out-of-process tracing thing, collecting them by opening a trace session, either through ETW or perfcollect on Linux.

D
Yeah — and the Linux perfcollect just because you want to cover all versions? perfcollect rather than... Except for the version coverage — where EventPipe doesn't work very well on version 2 — if we said, okay, we just don't target version 2 (it's out of support next September anyway), would you prefer perfcollect or EventPipe on Linux?

B
— eventually get there. But I think it will be interesting to talk to the Java guys and find out exactly what they do and why, because our team doesn't collect CPU scheduling stuff automatically — we never have. Internally, any of our tools that do it go through either ETW or perf; perfcollect is the tool that does it, and it does it by launching perf and LTTng and creating a tracing session. Even the internal Microsoft people I've talked to — I don't think they do live CPU scheduling information; I'd have to check, but I don't think so.
D
Okay, cool — anyway, thank you. So my takeaway is this — let me know if I heard you correctly: go and critically validate that CPU profiling is even something customers want, and if the answer is yes, make sure to be clear why they might want it; and in terms of wall-clock profiling, go play around with ETW under restricted access, and if that flies, it's a better way to go than —

B
Yeah, that's right. And the other question is: if you talk to the Java guys and it's something customers want and there are scenarios for it, the next follow-up question is how they are currently getting it, because in my mental model I don't know a way to get it in-proc — maybe I'm missing something, but... I don't really know anything about the JVM, but on the CLR — our VM — we don't do anything with the OS; we fundamentally don't have the knowledge of whether we're actively scheduled or not. We're just like every other program: we run, we assume the OS takes care of scheduling, and we don't really mess with it.
B
There are a couple of caveats — probably not relevant — like the GC will mess with thread affinities and thread priorities at times, just to make sure the GC can run, but it doesn't know whether or not it's being scheduled other than bumping its priority up to high so that it has the best chance of being scheduled. So yeah. We jump to the jitted code's address and just let it run as if it were native code; we don't do anything with scheduling — once you're in jitted code, it should be the same as if it were C++ code; to the CPU it's just native code. Basically, stack sampling will cover the majority of scenarios, and we view actually looking at context switches and CPU scheduling as a really niche, very specialized thing.
D
I tend to agree. I think time spent waiting on a monitor is important, but if you have that, plus wall-clock time, plus the number of threads — where, when the number of threads suddenly gets bigger, that's a bad sign — then it should be enough; I personally think so. It's just that I know the Java guys went after it, and there are two aspects to it for me.

When I wear my OpenTelemetry hat, I ask myself why I'm even talking about this... okay, let me start differently. Wearing my Datadog hat, I'm trying to answer: okay, for Java they're doing this, it's part of their business — should .NET do this now, or sometime in the future, or what's the deal for .NET and profiling? That is something the company wants an answer to. For OpenTelemetry it's a very similar question, because we are building essentially the same thing; and in addition it's like: hey guys, I'm doing an investigation, and the community can help me by sharing information that would otherwise take me five times as long to find out by myself — and, to give it back to the community, I will summarize this and share it so that everybody in the group can take a look, and that way everybody benefits.
D
So that's the background of the thread. And specifically for CPU — again, for Datadog: if one team went after it, then I feel I need to be able to answer whether this other team should also go after it, and whatever the recommendation is, answer the question why. That's why I'm considering it carefully — but it may be that no CPU...

B
I'm thinking about it more — if you're actually talking about CPU context switches, it would have to be out-of-proc, or asynchronous in style, to get them delivered, wouldn't it? Because if you're a CPU executing some piece of code and you're trying to fire an event that says "I'm about to schedule some thread for some quantum of time," then in order to deliver it synchronously you would have to schedule a different thread.

D
But that's for events — essentially, when I get callbacks for stack samples, I get the thread ID there, right? Right, yeah. So I guess, if the OS knows whether or not the thread has the CPU right now, it could populate some flag — yeah, but if it doesn't, then we're out of luck. Okay, got it, thank you. I'll check on this and I'll summarize. (I was just typing fast — something is wrong with my keyboard; I've had this thing lately where, several times a day, I type something and a key acts as if it gets stuck and just repeats. I don't know why; it just happens all the time.) All right, I have to leave now anyway. Guys, that's it — I'll clean this up, update the document so that everybody can benefit, and I'll figure out the CPU profiling thing. Okay, yeah, thanks.