From YouTube: 2020-08-06 .NET Auto-Instrumentation SIG
I: I just had a chat with Sergey. He can't attend today, and he is going to try to publish afterwards.
A: If you're the meeting host, if you give me permissions to record, then I can make a second recording, and that one I can definitely share, just like we shared the previous ones.
A: I understand. Well, I guess that's all we can do. Do you know who is actually the meeting organizer?
D: No, what happens is that the foundation has, like, I don't know, four or six of these Zoom meetings, and they are shared between all the SIGs.
A: Hi everybody, that's exciting. We have a new SIG, welcome! So I think we have an agenda.
A: So everybody, I think maybe you can go over this part about the January goals, because you were driving part of it. But kickstarting it is... hi, everybody.
A: Thank you for coming. We've been talking a lot about .NET OpenTelemetry, focusing on the SDK, and the auto-instrumentation was formally always a little add-on, yet we were actually spending a significant amount of time talking about it. So we just made it an explicit, separate meeting, so we can focus all the time on this. There are people who are interested in it, and that's it. Welcome, I'm very excited.
A: I think Paulo had a whole list of things just to start with at the technical level.
A: Yeah, so then I was going to hand it over to Paulo to talk about some technical stuff that we would like to start with for the joint tracer, and then, I think, the folks from New Relic... they're here?
A: Yeah, hey Allen. Yeah, yeah. You said that you are likely to have time to prepare and show us a little overview of the New Relic agent; will you be able to do that?
D: Yeah, so I've been trying to push this auto-instrumentation for .NET for quite some time. There was some work that needed to happen on the .NET SDK itself, and the guys now are doing amazing work, pushing a lot of the stuff that we needed to happen, and I helped a little bit. The guys are doing most of the work there, Cijo and the Microsoft folks, and also some folks outside and other people helping a lot.
D: Allen also has helped me a lot right now. So that kind of gave a pause in the process, but I think we got to a point where it was mature, and we already have the decision from the .NET runtime regarding ActivitySource: that actually becomes the core way to instrument, manually or via auto-instrumentation, for .NET in the long run. So that felt like a great moment, and also Greg joining Datadog.
D: That is the path that OpenTelemetry has set up for starting these SIGs for auto-instrumentation. That was the path that was taken for Java; I think Python was slightly different, but that's the path the foundation set up for creating auto-instrumentation. We have had some high-level discussions and tried to put together a rough roadmap.
D: I hope everyone had a chance to see that doc with the high-level roadmap. It is more a tentative thing, covering what we want to achieve in the short run, but it should guide us, and it also has the basic stuff about what should guide our, let's say, modus operandi for the SIG itself.
A: So is it linked? Probably not... the roadmap. This was actually a great document, thank you for that. It's not linked from the meeting notes yet, right? So you should.
D: Yeah, I'll put the link here under the basic information; I will put that doc here too. Right now the link is not at hand, but...
D: I'll share that after. For anyone: it's a short document, but it has these outlines and a little bit of the history of what we went through.
D: So I think it's great to see this number of people here and the interest. Let's try to really start with lots of energy and collaboration and make this project succeed for all of us, so we can get our .NET user base satisfied with what OpenTelemetry is offering for auto-instrumentation. We don't need to have... we can live with a single effort that we collaborate on, instead of everyone repeating similar stuff everywhere.
D: So that was my kind of history and kickstart for the group. We have the repo; I didn't create the CODEOWNERS and these things yet.
D: But I think this is the moment: we're at the beginning, and we have the opportunity to have lots of us start as approvers and maintainers and collaborate on that. Just to mention, when you are starting the SIG is the moment that you have more flexibility for this type of thing, and then later, as things progress, the criteria are a bit more restrictive.
D: So now, if you want to really contribute and participate, it's a good moment to get involved. I think this was the main thing that I wanted to mention. I don't know what order most people want to follow, but I think a good thing would be for us to then discuss the immediate next steps; perhaps, though, we can go first to the presentation by Chris.
A: Yeah, maybe just quickly, in case we have no time at the end, just to give you guys a little bit of context on the progress of moving the DD tracer into the OpenTelemetry repo.
A: So I was blocked from doing any technical things on it, just making sure that everything is aligned with our product owners and leadership and that everything is done correctly.
A: We had the meeting earlier this week, so there are no more blockers on that end, and now I'm working with a partner inside of Datadog to make sure that the formalities are done correctly. I think there is some document that needs to be processed, to be signed or something like that, with the OpenTelemetry board, to make sure that stuff is done in the right way. So I'm working on that; there is a little bit of a lack of responsiveness.
A: I don't know whether you guys are aware: there is a relatively significant conference, for Datadog's size, that we are running at the end of this week and the beginning of next week. You guys should come; it's called Dash. People are very, very busy with preparations, so that caused a little bit of a delay from the product owners' side in getting the right things going here. So yeah, anyway, all the decisions are taken.
A: There is no problem, just like we thought. I will be working through the formalities with the board, signing the right form or whatever it is, and then I will make a big squash commit into the repo to get the source in.
A: And something that we should do this week if we have time at the end, or next week, but as soon as possible, is look at the roadmap and what it is that we would all like to do to the tracer in the next few months. I'm guessing that most of the work will be around fundamentals like reliability and performance, and around moving towards standards; for example, right now we are not...
A: So Allen, let me stop the share so that you can take over, and then you can teach us all about the things.
E: Yeah Chris, let me just jump in real quick. I just wanted to... this is in the meeting agenda there.
E: So people can review the blog post later at their own leisure, but I just wanted to call out that this is new for New Relic: we have now open sourced all of our instrumentation, and that includes our agents. Along with that, we're also committing to OpenTelemetry and planning on standardizing on it as well. That's why you see us now more involved and active, and why we have Chris involved as well, now, too. So with that, go ahead, Chris, and take it away.
A: I was gonna say, this is maybe not constructive, this is just sharing a sentiment, but I personally am very excited that so many people are now joining the open source approach to, you know, the bits that run on customers' machines. It kind of indicates that this is the right way to go: different companies who initially took different approaches are all believing in it now and doing it.
A: I think one interesting part is that we in this group are slightly different from the SDK SIG. In that group, OpenTelemetry has a specification, and it was about creating a technology that adheres to the specification, whereas here we are more of a working group; there is no specification for the tracer, right? This is just a forum for us to collaborate on common goals, which is also very exciting.
I: Sorry, I know I came a little late. Did I miss some introductions? In which case, that's fine. Or, if they haven't happened, I don't know if it'd be useful to have them happen.
I: Oh okay, I didn't know if I was famous or not. Well, in case you don't know who I am: I've been working on the ICorProfiler API for .NET for quite a few years now. David Broman was my predecessor; you're probably familiar with him, because his blog is where all the information was about how everything works. More recently, if stuff doesn't work in ICorProfiler, I'm probably the guy you can blame.
B: I'll go, since I'm at the top. I'm Anuraag, and I'm mostly working on the Java auto-instrumentation. Actually, I've never really written any .NET code in my life, but I thought there might be points that I can help out with just from doing auto-instrumentation. I'm working on X-Ray with my colleague Bolu, just to help out the SIG a little bit.
J: Yup, hey guys, I'll round out the Microsoft run here; I'm James. I also work on Application Insights with Alex and Mikhail, mostly interested in the bytecode instrumentation for the profiler, and also beyond that.
A: I'm Greg, I am with Datadog, working on all sorts of .NET application monitoring. Previously, I was very close with the folks from Microsoft, working on all sorts of parts of Application Insights and .NET itself.
D: Okay, I'll go next. I'm Paulo; nowadays I'm working for Splunk. I've been involved with OpenTelemetry since the beginning; I was mostly involved with the Collector, and now with the auto-instrumentation for .NET. I'm also an ex-Microsoft.
H: I'm Allen, I'm also a New Relic engineer, on the same team as Chris. I've mainly been ramping up, focusing on working with the crew doing the OpenTelemetry SDK side of things, and I'll be continuing my involvement there. I'm excited that Chris is going to be kind of a buddy, more focused on this. As a whole, as Eric said, New Relic is trying to ramp up its involvement, so I'm pretty excited about that.
K: I can do a quick intro of myself. Hey, I'm Prashant, I'm with the AWS X-Ray team, and I mostly worked on the .NET SDK and also Python. I'm just ramping up with the Python OpenTelemetry SDK and looking to contribute there, and also to gain some knowledge on the .NET OpenTelemetry and possibly contribute.
L: I'm Bolu, and I'm an SDE from the AWS SDK team. We are currently in charge of the AWS X-Ray side of the agent project. And yeah, thanks.
I: Cool. I think that may have been everybody. Yeah, all right. Well, good to meet you all, folks that I haven't met yet. Sorry for the interruption.
Okay,
so
there's
three
main
components
to
new
relics:
instrumentation
we
have
the
profiler
the
agent,
and
then
we
have
our
instrumentation.
H: The profiler itself is native code in C++, the agent is managed code written in C#, and the instrumentation is a mix of managed assemblies written in C# and some XML files. The New Relic profiler itself looks at the XML files to know what should be instrumented, and then the agent is what loads the instrumentation and is responsible for mapping instrumented methods to an appropriate instrumentation library.
H: On top of that, the profiler also uses some of the ICorProfiler APIs to do thread profiling.
H: It includes some attributes that I don't have shown in this example, but you can include them, and they can affect which wrappers are ultimately going to be used to instrument a method. The other relevant piece of information will be in the match node and the exact-match node, where we can define exactly which assemblies, classes, and methods should be instrumented; in the case of overloaded methods, you can specify which version via the type parameters.
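The match and exact-match nodes being described would look roughly like this; a minimal sketch modeled on New Relic's published instrumentation-file format (the element and attribute names here follow their public docs rather than anything shown in this talk, so treat them as illustrative):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Illustrative instrumentation definition: ties one wrapper to one
     specific method, with the parameters attribute disambiguating
     overloads. -->
<extension xmlns="urn:newrelic-extension">
  <instrumentation>
    <tracerFactory name="ExampleHttpWebRequestWrapper">
      <!-- match: which assembly and class to instrument -->
      <match assemblyName="System" className="System.Net.HttpWebRequest">
        <!-- exactMethodMatcher: which method; for overloaded methods,
             the parameters list selects the exact overload -->
        <exactMethodMatcher methodName="GetResponse" parameters="" />
      </match>
    </tracerFactory>
  </instrumentation>
</extension>
```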
H: For the most part, we rely on the standard ICorProfiler callbacks in order to rewrite the IL for the instrumented methods. Our GitHub has a README where we show the pseudocode of what we ultimately rewrite the targeted method to do. As a high-level overview, we generate the equivalent of a try-catch block around the original method, and then we call a method... (somebody asks a question).
H: Gotcha. So we call a method in our agent before the original method is executed, and then we have the ability to call another method before we return from this rewritten method. And we do support ReJIT.
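In source-level terms, the rewritten method described above behaves roughly like the following self-contained sketch; the `Agent` class here is a stand-in invented for illustration (the real agent emits this shape as IL, and its entry points are not named in the talk):

```csharp
using System;

// Stand-in for the managed agent's entry point (illustrative only).
static class Agent
{
    // Called before the original method body runs; returns the delegate
    // to call when the method finishes, successfully or with an error.
    public static Action<object?, Exception?> BeforeMethod(string method, object? target)
    {
        Console.WriteLine($"enter {method}");
        return (result, ex) => Console.WriteLine(
            ex is null ? $"exit {method}" : $"error {method}: {ex.Message}");
    }
}

class Example
{
    // What the IL-rewritten method is equivalent to in source form:
    // a try/catch wrapped around the original body, with agent
    // callbacks before the body and on both exit paths.
    public int DoWork()
    {
        var finish = Agent.BeforeMethod("Example.DoWork", this);
        try
        {
            int result = 42;       // stands in for the original method body
            finish(result, null);  // success path
            return result;
        }
        catch (Exception ex)
        {
            finish(null, ex);      // failure path, then rethrow unchanged
            throw;
        }
    }

    static void Main() => new Example().DoWork();
}
```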
H: And then the wrapper piece of it, which is where the instrumentation lives, is in managed code. For the most part we reference primitive types and some instrumentation methods, but we do have some techniques that we use to more safely reference other types in an instrumented assembly, and we generally try to avoid reflection.
H: Inside these wrappers we have access to the parameters of the instrumented method, the result that's being returned, as well as the object that the method is being invoked on. And then, when we're dealing with async methods, we have some special tricks that we use to try to manage state and get things timed correctly.
H: But that is beyond the scope of this talk. I just want to encourage people who are interested in learning more about this to go to the README; there's a couple of links in here, and I'll share this presentation afterwards.
A
And
when
you
rewrite
io
code,
do
you
rewrite
the
I
code
of
the
method
being
invoked
of
the
so
the
core
target
of
the
call
site.
H: And then, an example of our wrapper: there are two methods of interest. The first is the CanWrap method, and this is related to the fact that our managed agent is responsible for determining which wrapper can be used for an instrumented method. Our profiler packs some additional information into the call out to the managed agent; that's available in the InstrumentedMethodInfo class, and we can inspect it to see what method has been invoked and check:
H: should this wrapper instrument that method? So we can execute some arbitrary logic in here. And then the other method, we call the before-wrapped method.
H: So this is the method that gets called before the logic in the original instrumented method is executed, and here we can inspect the method parameters and the invocation target. There's an example piece of code here where we grab the HttpWebRequest object itself, and then at the bottom is where we return a delegate for the method that we want to execute after the original method is executed. Here we have an example of using the return type of that method, which is HttpWebResponse, to then execute whatever logic we need, and we have two delegates that we can use, either on success or on failure, to handle errors differently from successful executions.
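The wrapper pattern just described can be sketched as follows; a self-contained illustration in which the interface and type names (`IWrapperSketch`, `InstrumentedMethodInfo` as a record) are assumptions made for the example, not New Relic's exact API surface:

```csharp
using System;
using System.Net;

// Illustrative stand-in for the info the profiler packs into the call
// out to the managed agent.
record InstrumentedMethodInfo(string AssemblyName, string TypeName, string MethodName);

interface IWrapperSketch
{
    // The agent asks each wrapper: can you handle this instrumented method?
    bool CanWrap(InstrumentedMethodInfo info);

    // Runs before the original method; returns the delegate that runs
    // afterwards, receiving either the result or the thrown exception.
    Action<object?, Exception?> BeforeWrappedMethod(
        InstrumentedMethodInfo info, object invocationTarget, object?[] args);
}

class HttpWebRequestWrapper : IWrapperSketch
{
    public bool CanWrap(InstrumentedMethodInfo info) =>
        info.TypeName == "System.Net.HttpWebRequest" &&
        info.MethodName == "GetResponse";

    public Action<object?, Exception?> BeforeWrappedMethod(
        InstrumentedMethodInfo info, object invocationTarget, object?[] args)
    {
        // The invocation target is the object the method runs on.
        var request = (HttpWebRequest)invocationTarget;
        Console.WriteLine($"calling {request.RequestUri}");

        // Success and failure are distinguished in the "after" delegate.
        return (result, ex) =>
        {
            if (ex is not null)
                Console.WriteLine($"failed: {ex.Message}");
            else if (result is HttpWebResponse response)
                Console.WriteLine($"status: {(int)response.StatusCode}");
        };
    }
}
```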
H: And so, some of the key design choices: we rewrite the IL of the target methods, and we use roughly the same IL for wrapping each method. That's one of the reasons why we have this intermediate layer between our wrappers and the profiler: the profiler just calls a single method within our agent, which can then delegate out to the appropriate wrapper as necessary.
H: We support ReJIT, and so this allows us to change which instrumentation will be executed at runtime. There is a caveat to that.
H: Let's say you're instrumenting something that executes during the startup of an ASP.NET Core app. ReJIT won't help you there: if that method has already executed, and it won't execute again, ReJIT isn't going to solve all problems related to instrumentation and changing instrumentation on the fly.
H: So if you use ReJIT to try to instrument a method after that method has already executed, then your instrumentation wouldn't be able to execute, even if you're using ReJIT.
H: Yep. And then sometimes in our instrumentation we will reference target assemblies; for example, in the HttpWebRequest example that I threw out there, we're directly referencing code related to that web request, but then there are different techniques that we use to try to prevent problems.
A: Out of curiosity: you found this to be faster than using reflection and then caching the delegate? So if, say, you want to access some private method X, and you use reflection to access it initially, and then you cache the delegate: you found the dynamic approach faster?
H
Yes,
and
in
fact,
we
found
it
so
for
property
access.
We
found
it
close
to
the
performance
of
being
able
to
access
the
property
directly.
A: Oh, interesting, because in a bunch of measurements it seemed that, except for the very first time, when the reflection is actually being invoked, once the delegate is cached it is the same. Yeah. But I never actually thought about the dynamic method.
H: Yeah, so in the past we've done some experiments comparing the performance of different accessor methods, including using the dynamic keyword, and we found that this dynamic method was slightly better than using the dynamic keyword in some cases.
H: Yeah, as far as the reflection-generated delegate, I'd have to see specifically what code you're talking about, but we compared it to using pure reflection to execute.
I: This is probably a different comparison, then. If you were comparing it to, like, calling... you know, Get... actually, I forget what the value is, but if you weren't doing something to explicitly create a delegate, you're probably testing a different path. Yes.

A: ...code which involves your, whatever.
H
I
I'm
not
surprised
that
dynamic
method
is
fast.
I'm
just
surprised
that
it
would
have
any
performance
advantage
over
over
method
info
dot,
create
delegate
I've
seen.
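The three access strategies being compared here, pure reflection, a cached delegate from `MethodInfo.CreateDelegate`, and an emitted `DynamicMethod`, can be sketched in one self-contained example; the timing claims above are the speakers', not reproduced here:

```csharp
using System;
using System.Reflection;
using System.Reflection.Emit;

class Target
{
    public string Name { get; } = "hello";
}

class AccessorDemo
{
    static void Main()
    {
        var target = new Target();
        PropertyInfo prop = typeof(Target).GetProperty("Name")!;

        // 1. Pure reflection: boxed and checked on every call (slowest).
        string viaReflection = (string)prop.GetValue(target)!;

        // 2. Cached delegate from MethodInfo.CreateDelegate: pay the
        //    reflection cost once, then call at near-direct speed.
        var viaCreateDelegate = (Func<Target, string>)
            prop.GetMethod!.CreateDelegate(typeof(Func<Target, string>));

        // 3. DynamicMethod: emit a tiny IL stub that calls the getter,
        //    then turn it into a delegate ("the dynamic method" approach).
        var dm = new DynamicMethod("GetName", typeof(string), new[] { typeof(Target) });
        ILGenerator il = dm.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);
        il.Emit(OpCodes.Callvirt, prop.GetMethod!);
        il.Emit(OpCodes.Ret);
        var viaDynamicMethod = (Func<Target, string>)dm.CreateDelegate(typeof(Func<Target, string>));

        Console.WriteLine(viaReflection);
        Console.WriteLine(viaCreateDelegate(target));
        Console.WriteLine(viaDynamicMethod(target));
    }
}
```

All three print the same value; the differences the speakers discuss are per-call overhead, which you would measure with a benchmark harness rather than this sketch.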
F: Sorry, I've seen expression trees go faster compared to, like, a complex method. Oh sorry, but what I'm saying is: I've seen some benchmarks as well where, when you can use generic compiled expressions, they are faster, much faster, compared to using reflection.
H: So there are many different ways that you can access data, and these are just a couple of the things that we've used. Another thing to be aware of with this agent: we only disable tiered compilation and NGEN images.
H
So
we
disable
engine
images
so
that
we
can
instrument
some
things
that
are
typically
engineered
on
that
framework.
H
Yes,
so
we
disable
engine
images
because
there's
certain
libraries
that
are
pre-compiled
with
engine-
and
I
don't
know
if
those
are,
if
that's
related
to
things
loaded
into
the
gac
or
or
not,
but
because
we
instrument
some
of
the
things
in
there.
We
disable
engine
images
so
that
our
auto
instrumentation
will
actually
work.
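For reference, both behaviors are typically controlled through standard runtime environment variables rather than anything agent-specific; a sketch (assuming the usual `COMPlus_` knobs, which the talk does not name explicitly):

```shell
# Disable tiered compilation (.NET Core 3.0+), avoiding interactions
# between the profiler's IL rewriting and tier-up recompilation.
export COMPlus_TieredCompilation=0

# Disable loading of NGEN native images (.NET Framework), forcing the
# JIT to compile those assemblies so the profiler can rewrite their IL.
export COMPlus_ZapDisable=1
```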
A: And what versions of .NET does the agent support?
H: That's actually probably solvable now, because we originally had an issue with how we did our ReJIT implementation. We basically did it against the Broman article that we were following back in the day, and something about that caused us to not be able to debug our managed agent, which was a problem for our team, and so we found a hacky workaround.
H: Okay, so the New Relic .NET agent supports running on Linux and Windows, and it supports .NET Framework 4.5 and .NET Standard 2.0 and higher. It provides out-of-the-box instrumentation for many different libraries; I just included a few here, but the full list is available in our...
A: ...documentation. And just as a question: what about custom instrumentation? If a customer wants to create custom spans and things like that, is this a supported and viable mechanism?
H: Yeah, so it's supported in a couple of different ways. You can do it by adding some additional XML files that define the instrumentation, and then, alternatively, our API package allows you to use attributes to create additional instrumentation.
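The attribute route would look roughly like this; a sketch modeled on New Relic's public API package (the `Transaction` and `Trace` attribute names come from their published docs, not from this talk, so treat the exact spelling as an assumption):

```csharp
using NewRelic.Api.Agent;   // New Relic's agent API NuGet package

public class OrderProcessor
{
    // Marks this method as the start of a custom transaction, so the
    // agent times it even without an XML instrumentation file.
    [Transaction]
    public void ProcessOrder(string orderId)
    {
        Validate(orderId);
        Save(orderId);
    }

    // Adds a custom segment (span) inside the enclosing transaction.
    [Trace]
    private void Validate(string orderId) { /* ... */ }

    [Trace]
    private void Save(string orderId) { /* ... */ }
}
```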
A: And are you compatible with... say a library instruments itself: the authors of the library instrument it out of the box, using some version of the .NET recommended mechanisms (starting originally with EventSource, then DiagnosticSource, and now ActivitySource). Are you taking that information, or are you just instrumenting explicitly yourself and not picking up that particular information?
H: At this moment, this agent is not using that information to generate any form of span, but we do use it to collect memory information, garbage collection information, things like that, for example by listening to some of the events on the EventPipe.
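Listening in-process to runtime events for GC and memory stats, as described, can be done with an `EventListener` subscribed to the runtime's `System.Runtime` counters; a minimal sketch of the mechanism (the agent's actual implementation is not shown in the talk):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics.Tracing;

// In-process listener that subscribes to the "System.Runtime"
// EventSource and prints selected GC/memory counter snapshots.
class RuntimeCounterListener : EventListener
{
    protected override void OnEventSourceCreated(EventSource source)
    {
        if (source.Name == "System.Runtime")
        {
            // EventCounterIntervalSec asks for counter snapshots every second.
            EnableEvents(source, EventLevel.Informational, EventKeywords.All,
                new Dictionary<string, string?> { ["EventCounterIntervalSec"] = "1" });
        }
    }

    protected override void OnEventWritten(EventWrittenEventArgs e)
    {
        if (e.EventName != "EventCounters" || e.Payload is null) return;
        foreach (var payload in e.Payload)
        {
            if (payload is IDictionary<string, object?> counter &&
                counter.TryGetValue("Name", out var name) &&
                (Equals(name, "gc-heap-size") || Equals(name, "gen-0-gc-count")))
            {
                // Mean counters carry "Mean"; incrementing ones carry "Increment".
                object? value = counter.TryGetValue("Mean", out var mean) ? mean
                              : counter.TryGetValue("Increment", out var inc) ? inc
                              : null;
                Console.WriteLine($"{name}: {value}");
            }
        }
    }
}
```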
D: So, do you have... does the agent have any integration to capture any manual instrumentation that perhaps already exists in the code, like OpenTracing or something?
A: This is actually super cool. And so, on top... you spoke about the very base-level APM distributed tracing, and we may not have enough time to go into details, but generally, what are the more advanced features that you guys are building on top of it? Do you collect... because at the very beginning you mentioned some profiling information.
A
If
I
wasn't
mistaken,
so
you
could
collect
like
method
level,
profiling
information
or
do
you
collect,
like
other
things,
that
will
go
beyond
distributed
trading.
H: Yeah, so there are things like thread profiling, where we get snapshots of what's running in that process for a certain period of time. There's... I just mean things like distributed tracing.
A: When you collect profiling information, do you associate it with traces or do you associate it with methods? That is, do you associate it with the logical execution structure given by the traces, or do you associate it with just stacks?
H: ...what's running on each thread: getting the stack snapshot of what's on each thread and being able to present that to a user.
A: Yeah, I think my question was about this last case. I was interested because I know that there are features like this from different solution providers, and there was just never an opportunity to play with New Relic yet. Say you collect these profiles, so you know essentially how much time is spent in which method; but a particular method could be invoked from different APIs offered by a service, and then, when it's invoked from some APIs, it could be...
A: So there are situations where you are not just interested in a specific stack, like in a traditional flame graph, where you just say "this is the stack that takes a lot of effort"; you're sort of more interested in the trace stack, rather than the method stack, to get to this particular hotspot.
H: I think so, and I think it depends on the use case which tool I would recommend to a user. So, for example, what we produce with the thread profiler is good for certain use cases, but I wouldn't say that it's good for letting you know how much time is being spent in a given method.
H: There are other things that we offer where it would be more useful to let you know how much time is being spent in a particular method. So we have another concept called a transaction trace, and you can think of a transaction as, like, a web request; so it'll let you know how much time was spent in the most important methods that were executing during that request.
H: Yeah, at this point there's no integration with ActivitySource, but the Activity API is used to support things like W3C Trace Context. Okay.
H: Yeah, so we're not generating any activities, but rather reading some of the trace context information off of them.
H
Yeah,
so
the
agent
itself
isn't
doing
a
lot
with
logs,
but
rather
we
have
some
plugins
for
different
logging.
Libraries
like
ceralog
and
log4net,
where
a
user
of
that
library
can
then
configure
which
plugins
they
want
to
use,
and
it
can
then
call
into
one
of
our
agent
apis
to
annotate
information
into
that
log.
H: That can then be forwarded on to New Relic to link things together nicely.
H: Yeah, just kind of backing up a little bit to combine those questions: like Chris said, we have this concept of the transaction, which is basically like a trace, and the IWrapper interface that he showed a few slides ago is a spot where we can basically do anything. Oftentimes that's a spot where we would create a span-like structure for creating a distributed trace. And also, to Paulo's question:
H
It's
the
presence
of
this
trace
that
you
know,
like
chris,
said:
there's
apis,
it's
able
to
like
decorate
logs
and
so
on,
and
we
have
plugins
for
that
and.
G: Thank you. And I have a question on the XML context, where we are defining a predicate, like some matching logic, on a method to be instrumented. As far as I understand, this is done to control API versioning: so when the API changes, whatever automatic instrumentation we applied to it wouldn't break the application at runtime.
H: So that's one use case of it. Another use case of specifying exactly which method parameters and method we're targeting is in the case of methods that have a bunch of overloads: we don't necessarily want to instrument all of the overloads; more often than not, we want to instrument the one method that all of those methods chain to.
G: Makes sense, so it's kind of like wildcard matching, and how precise that matching is. Let's say that your instrumentation is meant to instrument this particular method, say, I don't know, SqlConnection.Execute, right? By sending some commands to SQL Server, you want to capture the command, because it's very valuable diagnostic information.
G: But let's say that internal instrumentation uses a bunch of internal APIs as well, and you're using this in generated code as well; whatever code you're injecting is using that API. How do you solve the problem of when those internals change? How do you keep your instrumentation still safe? Specifically, like, in Java...
G: They have a build step, it is as cool as that, you know. They have a build step which watches all APIs that you are touching, either internal or external, and they build this, like, automated API matcher, and if the API at runtime is unexpected, they just consider it not safe. So do you have similar logic here as well?
H: So at this point we haven't invested a lot of effort into testing new versions of each library.
G: Got it. So essentially, the matching logic is in the shape of the method to be instrumented, and the rest is more or less a case-by-case effort, correct?
F: What would be the next steps? So we'll leave it online, and I guess... is it ready to contribute to OTel, or has that been discussed? Let's say, if the community decided, this could be a...
F: Next steps, like: we will be evaluating it offline, whether it is a good fit or not, but are you guys... do you intend for this agent to be contributed as an OTel-branded agent, or is this your contribution, potentially?
E: Yeah, I mean, we would be happy to help out and contribute in any way that makes sense for the community, so, you know, whatever shape that takes. I think it's obviously up for discussion, but that's part of us open sourcing this and everything: to collaborate with the community. So I'd love to see how that takes shape.
A: I think at the beginning we talked about essentially what we wanted to cover in this meeting, and we are sort of running out of time. The next thing that we all wanted to chat about, which probably we need to chat about next week, is a specific technical roadmap, and I very much hope that by then I will have done the commit, so that we can actually not only talk about it but actually look at things.
A: So that would be the next step, I think, with the additions from Eric. And next, people hopefully learn more about AWS. This is very cool. I think that we have generally a very large high-level functionality overlap, with some things that you guys are doing in a slightly more mature way.
D: Yeah, so I'd like to thank Chris for the presentation. I think the challenges are clear for us: how to make this beneficial to everyone, because especially those who already have agents, like New Relic and Datadog, already have customers that they have to support and improve for. And also, I think, in the long run, the idea is that eventually everyone can move to the open source one.
D: But I think it's a challenge that motivates all of us: getting a solid project that gets adoption and making this an effort that's actually shared, you know. So yeah, that's the great thing. I think we're kind of out of time. I think we would like to go over the goals and the things that should direct our work...
D: But I think it would be good to leave that for the next meeting.
I: Thanks so much, everybody. Good to meet the folks I hadn't met before. I will see you all next time, and if you...