From YouTube: 2020-09-16 .NET Auto-Instrumentation SIG
A
Yeah, it says I'm sharing, but I can't see the frame around the browser window that usually appears.
A
Let's wait until two minutes after five past, and then we can get started and I can share. Noah and David are here especially; I hope to get feedback from you. I already filed an issue, and I guess we know the answer to all of it; I just wanted to circle around and get some inspiration for workarounds and everything.
D
That's so great. I definitely want to continue this DiagnosticSource discussion, because I think it leads into how we want to approach sharing the code between the SDK and this agent.
A
Yeah, anyway. So far I just created a separate repo in my own private GitHub account, because right now it's such an early PoC that it kind of didn't matter, and I wanted to avoid the entire, you know... we can merge it anywhere from there. I'm making it a standalone library that essentially abstracts away DiagnosticSource, and it can be either a folder within the OpenTelemetry repo or a separate repo.
A
I started the wrapper for DiagnosticSource. What I have so far is here; let me put it in Slack.
A
In the chat, here's a link. So far I focused on loading the library, and that is sort of done, at least in a very early version. I didn't run comprehensive tests, but I did some ad hoc testing. The next step is to start creating API wrappers based on reflection and exposing them through some sort of interface.
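A minimal sketch of the kind of reflection-based wrapper being discussed; the class and method names here are illustrative, not taken from the actual PoC. It resolves `Activity` out of whichever `System.Diagnostics.DiagnosticSource` assembly is loaded and exposes a tiny surface through reflection:

```csharp
using System;
using System.Reflection;

// Hypothetical wrapper sketch: no compile-time reference to
// System.Diagnostics.DiagnosticSource; everything goes through reflection.
public static class ActivityWrapper
{
    private static readonly Type ActivityType =
        Type.GetType("System.Diagnostics.Activity, System.Diagnostics.DiagnosticSource");

    public static object StartActivity(string operationName)
    {
        // Equivalent of: new Activity(operationName).Start()
        object activity = Activator.CreateInstance(ActivityType, operationName);
        return ActivityType.GetMethod("Start", Type.EmptyTypes).Invoke(activity, null);
    }

    public static void StopActivity(object activity)
    {
        // Equivalent of: activity.Stop()
        ActivityType.GetMethod("Stop").Invoke(activity, null);
    }
}
```

In a real agent these `MethodInfo` lookups would be cached (or compiled into delegates) rather than performed per call, and exposed through an interface as described above.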
A
So what I plan to do: I did some research and downloaded a whole bunch of different NuGet versions. Let me go here... how can I share this? Maybe I share it this way, if I stop.
A
So basically, it looks like Activity first became available in DiagnosticSource in this version. Can you see my Visual Studio screen? Yeah. This was the first DiagnosticSource NuGet version that contained Activity; that is the ship date, and this is the library date.
A
So, I only downloaded major versions, so it could actually be in a three-point-something, but it definitely was not; 4.3.0 did not include it. Okay, it could be a 4.3-something, I don't know; I just downloaded them and reflected over them.
A
The loader class... maybe I don't show the code, because it's too slow to read things, but on a high level: first, I look at which libraries are already loaded. So at some point I say Initialize, and the whole system essentially makes sure that this thing is available: basically, either load DiagnosticSource, or create reflection wrappers if it's already loaded.
A
So I look at which libraries are already loaded, and for that I'm using this API. I'm actually not sure about all the setup differences, but this seems to be the newer API, the AssemblyLoadContext. On all platforms where it's not available, I use the current AppDomain.
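The check being described might look roughly like this; this is a sketch under the assumption that `AssemblyLoadContext.All` and `AssemblyLoadContext.Assemblies` are available (they exist from .NET Core 3.0 on), with `AppDomain` as the fallback elsewhere:

```csharp
using System;
using System.Linq;
using System.Runtime.Loader;

public static class LoadedAssemblyCheck
{
    private const string DiagnosticSourceName = "System.Diagnostics.DiagnosticSource";

    public static bool IsDiagnosticSourceLoaded()
    {
#if NETCOREAPP3_0_OR_GREATER
        // Newer API: enumerate every AssemblyLoadContext, not just the default one,
        // so we can also detect the "loaded, but not in the default context" case.
        return AssemblyLoadContext.All
            .SelectMany(ctx => ctx.Assemblies)
            .Any(a => a.GetName().Name == DiagnosticSourceName);
#else
        // Fallback where AssemblyLoadContext is not available: the current AppDomain.
        return AppDomain.CurrentDomain.GetAssemblies()
            .Any(a => a.GetName().Name == DiagnosticSourceName);
#endif
    }
}
```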
A
Also, I check the other load contexts: if the library is loaded just once, but it's not loaded in the default context, I'm also bailing out.
A
Figuring out whichever version the application was planning to use takes too long. This misses some scenarios where the application was planning to do something, some funkiness with assembly loading, but that's so hard to predict, so I actually sort of avoid dealing with that phenomenon. Sure. So now I'm saying: just load the library, and I expect that if the library is on the path, it will be loaded; I need to check the version one more time later, after it's loaded. If it's not on the path, I hook up the assembly resolver hook.
A
So here, initially I was trying to request to load it without the version, and if that failed, I was doing some funkiness figuring out the version of the thing that we include and trying that; but it was remembering the previous decision, so I needed to move this into the handler. So I hook up the handler, and if the loader cannot find it, then it calls the handler, and what happens there is essentially:
A
I expect that the library will be shipped with a version of DiagnosticSource and all its dependencies in some sort of directory, a tracer directory essentially. So in this thing, first I say: go look at that directory, read all the DLL files there, and load all the packages that are included with the application.
A
Sorry, with the library. So this is just going to read the assembly-name metadata from all of those files and cache them. Now I'm saying: well, if the thing that I cannot resolve is not shipped with my application, with this library, then this handler is irrelevant, so just do nothing.
A
What I'm doing now is saying: okay, I have to load the library into the default load context, but there is no API for that from a different file. So what I'm doing is saying: okay, go take this library that I shipped and copy the file into my base directory.
A
Then I just request a load again, so this will either succeed this time, or, if something went wrong, I will enter this callback recursively; then I'll notice it and bail out this time. That's it. So the last thing I'm doing, back here: at this point, I have either loaded the library or I have run out of ways to try and load it.
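The resolver flow just walked through could be sketched like this; the class name, the tracer-directory argument, and the simple file-existence cache are all illustrative assumptions, not the actual prototype code:

```csharp
using System;
using System.IO;
using System.Reflection;

// Hedged sketch of the described flow: probe a "tracer directory" shipped
// with the agent, copy the assembly into the app base directory so it lands
// in the default load context, and guard against re-entering the handler.
public static class DiagnosticSourceLoader
{
    private static bool _inResolve;

    public static void Initialize(string tracerDirectory)
    {
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            string requested = new AssemblyName(args.Name).Name;
            string candidate = Path.Combine(tracerDirectory, requested + ".dll");

            // If we don't ship this assembly, this handler is irrelevant: do nothing.
            // The recursion check makes us bail out the second time through.
            if (!File.Exists(candidate) || _inResolve)
                return null;

            try
            {
                _inResolve = true;
                // Copy into the base directory, then ask the loader again.
                string target = Path.Combine(AppContext.BaseDirectory, requested + ".dll");
                if (!File.Exists(target))
                    File.Copy(candidate, target);
                return Assembly.Load(new AssemblyName(requested));
            }
            finally
            {
                _inResolve = false;
            }
        };
    }
}
```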
A
I tested that, but I haven't tested which load context it loaded into, and I actually suspect some trouble there, because of what I discovered; you may already have known it, but I guess you don't. DiagnosticSource takes dependencies on a few other NuGet packages. Specifically, here is the description; let me share the...
C
One second. Okay, stop sharing this, start sharing that.
A
Basically, yeah, you can see: DiagnosticSource has a bunch of transitive dependencies. In recent versions it has more dependencies: System.Memory, and this guy, Buffers and Vectors.
A
Because of that, the surface, the number of applications that are potentially not supported, is much larger, because now, if I see that the application wants to load a version of, say, System.Memory that is too old for my DiagnosticSource, I need to bail out as well. Somehow there is additional risk there, although we know that DiagnosticSource was fairly good with backward compatibility. So what...
F
I mean, certainly, from the backcompat side, I would expect the BCL team to apply the same, if not even more, diligence to those sorts of low-level libraries than gets applied to DiagnosticSource itself. I'd be very surprised if a new version of any of those libraries had a breaking change such that, when you updated, your app would stop working.
A
Well, unless there is a major version update.
E
Yep. Okay, it won't change anything, ever, not even the API surface area. If you make something public, now you have to support that forever, and they won't ever take it away. There's a little bit of deprecation on Core, once we switched to Core, but on desktop nothing ever changes.
F
Right, so you're saying, and make sure I understand correctly, you're saying: what if an app references System.Runtime.CompilerServices.Unsafe 0.1, but DiagnosticSource requires 1.0 as a dependency?
F
Right, and maybe this winds up just being the same exercise: in the same way that you declared "hey, if you reference a DiagnosticSource version that is too old, then we just sort of say sorry, your app doesn't work for profiling," maybe you can apply the same rule here. And hopefully... I mean, I don't know exactly which versions DiagnosticSource takes the dependency on.
A
Oh, it actually takes some really recent ones. No, no, because the thing is, think about it: if DiagnosticSource was already loaded by the application, we don't have a problem. The problem exists only if DiagnosticSource was not actually loaded by the application. In that case, we want to use our DiagnosticSource, which is recent, and which references the recent versions of those libraries.
A
Because we would like to take advantage of the improvements in, sorry, in 5, I meant 5. Essentially the idea is that, if an application didn't lock itself into using an older version of DiagnosticSource, then it will get the benefits of the whole ActivitySource and other modern developments.
A
Okay, so we could question that, and we could say no, right.
A
So, performance-wise... yeah, this is actually something that we need to decide; or, as a prototype, I'll make some calls and then we can review them. But essentially there are two improvements made in the new version: one is performance, and the other is some capabilities on span, sorry, on Activity,
A
that integrations could live without; for example, links and events, span events essentially. The older version of Activity had fewer APIs; there were some concepts from OpenTelemetry that were not present on that API, and we can't represent them through this whole strategy. However, that may be okay, because integrations can live without them.
F
Okay, so it sounds like at the very least you're contemplating having a degraded experience for apps that reference old versions of the library.
A
...that use it, right. So we're definitely okay with that, and so integrations cannot take dependencies on these newer features. They can only make it optional: they can query for it at runtime, through some sort of query, you know, and then take advantage of it, but they cannot depend on it.
B
So, a question, perhaps, to kind of clean up my mind on that: the scenario is that the profiler is going to be the first thing to load DiagnosticSource; it's kind of a different case from there already being some version loaded when the profiler gets there, right?
A
Yes. If the profiler is the first, what happens is... and actually, maybe we can pause this dependency problem, because I hit some interesting things that I would like to validate with you guys, whether they make sense. First, when I loaded this library, it didn't fail, even though it can't find the dependencies. On Full Framework, should it load the dependencies as well, or should it only load them when I actually use the API?
A
I'll do some investigation; I'll do some testing to actually force the case, so that we can test that all of this loading works. So what I hear is: we're not sure when it should happen; test it out.
E
On that topic: I thought at least the assembly locating and version checking and security and signing and all that happened eagerly, but I'm not sure enough to say a hundred percent.
A
It didn't fail, and it should have failed: either it found the library somewhere where I had none, or it didn't happen eagerly. I only got to the current prototype yesterday; I just didn't have time to check. Sure, if you're on... I mean.
F
If you're on Full Framework, fusion logging is available to give you an idea of what is going on with the assembly loader. And also, of course, if you just have it under the debugger, you can always check the Modules window to find out exactly what it has loaded. It is possible that the assembly loader might validate the existence of something without actually loading it; that's conceivable.
A
I'll look into it; I just wanted to mention it because it was strange. And then the other thing is on Core. Here's the thing: so far, on my machine, and this could be because I have all sorts of things installed, I couldn't hit a case where it could not find DiagnosticSource. It always finds DiagnosticSource in the framework directory.
F
Well, the only circumstance I could think of offhand would be if you did a self-contained publish. So instead of having an app that refers to the machine-wide install, you ship the framework with the app in a self-contained deployment, and then, within the self-contained deployment, you can do assembly-level trimming.
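For reference, the scenario being described can be reproduced roughly like this; these are the standard .NET SDK publish properties, shown as a sketch rather than the exact setup used by anyone on the call:

```xml
<!-- In the app's .csproj: opt in to assembly-level trimming on publish -->
<PropertyGroup>
  <PublishTrimmed>true</PublishTrimmed>
</PropertyGroup>
```

Publishing with something like `dotnet publish -c Release -r linux-x64 --self-contained true` then produces a deployment that carries its own copy of the framework, and trimming can drop assemblies the app never references, including System.Diagnostics.DiagnosticSource.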
A
I think, even otherwise, if you have an app that is a background worker picking up things from a queue, rather than being a web server, it is also conceivable that it doesn't reference it, right? It's just...
A
Yeah, totally. I just feel like we can't rely on this never happening. What you describe is not super common, but it is likely enough for us to support; we should require support for it, I would say.
A
Yeah, no, I tend to agree, I think, especially for cases where it's not a web application but some sort of background worker: that is still a microservice that we want to monitor, but, you know, it picks up things from a queue or something like that, right.
F
Right. Of course, this is just me speculating; I don't have data in front of me to tell me how common that is. Okay, okay.
A
So anyway, that's the thing. So far I couldn't really test this whole assembly-loading logic on the full framework... sorry, on the Core framework, because I need to somehow package it in the way that you described to actually even end up in that situation. So that's a to-do.
A
On Full Framework I'm hitting it every time, unless the application references the NuGet, of course, in which case... so, right.
F
I mean, the other thought I would have there is to say, one lever that you can play with: you could decide that, instead of using the latest version in the fallback case, you're going to use an older version in the fallback case, if that helps. I mean, it's a trade-off between what features you get out of the box versus what dependencies need to be there.
F
For example, you could just copy the source code of DiagnosticSource into some little private... you know, into your assembly directly, and you could use it then, with whatever modifications you wanted. So if you wanted to get rid of certain dependencies and change the APIs, you could do all of those things, because you're assuming that the app is never going to have its own version of DiagnosticSource, and there are no interop requirements, because nothing in the app will ever look at what you're doing.
A
Actually, thanks for mentioning it, because as you were explaining this, I remembered why we need to load the recent version as a fallback.
A
Okay, and the reason is: the profiler loads relatively early on, so it is quite conceivable that the application does load DiagnosticSource, but does it later. That means, if we load it eagerly, we should either load whatever is in the probing path, right, or as new as possible, because if the application subsequently does something strange to load it, then, if we picked something newer than the application wanted, that's okay.
F
Yeah, I guess what I was pointing out, and maybe I'm missing something, is that I wouldn't expect scenarios where you try to load it using the probing path and that fails, but then later on the app still successfully loads DiagnosticSource via some more obscure mechanism. It's technically possible; I would just expect it to be extremely rare.
A
I see. So we could essentially rebuild that DiagnosticSource with all the dependencies included, and not have this problem.
F
Right, or you could change it in whatever way would make it easier for you to work with. The main thing I'm thinking is that you could take that as a signal that, under those circumstances, you no longer have any obligation to interoperate with what the app is going to be using: in the 99.99% case, it just won't be using DiagnosticSource at all, and in the 0.01% case, it's going to do something obscure to late-bind to DiagnosticSource in a hidden way; but still, you could just say: all right,
F
your app is super funky; we're not going to interoperate with that DiagnosticSource that you loaded there. And then maybe your bar is just "we won't make your app crash"; but if your app does something super funky to load DiagnosticSource, then the tracer's version of the activity tracing is just not going to interoperate with it. I don't know; maybe that causes other problems.
A
That's a good point; yeah, you're right, we can always do this. Let's do this later, because I don't want to block on this, but I agree. Maybe
A
reach out to your lawyers to ask: are you guys okay if we create our own build of DiagnosticSource under a different assembly identity and package it with the tracer?
F
I don't know if something there would take me by surprise.
A
Okay, sounds good. So we're at least not blocked; I don't think we need to solve it right now. But, I would say, let me find a quieter spot; there is some life here.
A
Basically, I think this is definitely a viable strategy to take if this whole thing becomes a problem. Okay, sounds good, so...
F
One other thought was just to say that obviously we're making some assumptions about what happens and doesn't happen, but I wonder: just using your existing Datadog tracer, or maybe any of the others, you know, obviously we've got several profiler vendors represented here; in theory, you guys are in a pretty good position to collect some telemetry from your tracers to find out, for the real-world apps that your tracers currently already handle, exactly...
F
So that might also help to validate: when we assume that certain things are very rare, or assume that they're not so rare, you might actually be in a position to collect the telemetry that would confirm those hypotheses or invalidate them, in which case we would know, you know, that we'd better plan differently. Yeah.
A
Cool. So thanks for all the feedback. This is it; the next steps are to find out why these dependencies did not load eagerly, solve that, and then proceed to creating reflection wrappers around all this.
B
Yeah, Greg, I want to help you with that. So I think, afterwards, if you have some tests or tasks that we want to kind of share, we can make progress faster on this; if you have something that you think I can help with and do in parallel, or something.
A
Yes. So the biggest thing that I don't know how to do... it's related to validating, so I don't know whether you feel like doing it, but: figuring out this whole "how to get this deployed on Core in a way where the library is not there," so that we can just test the whole behavior on Core. I mean, I've never actually done these standalone deployments; it's probably easy
A
if you know what you're doing, and that's one thing that would help; and we can also think it up offline.
A
I need to somehow publish it and then run it in the debugger, so I wonder how. So I can attach to it; it should be "attach to the running process," or...
F
Yeah, that's definitely one option. Another option is using the Visual Studio debugger. I don't know if you've ever done this before, but if you do File, Open, Project and then, instead of selecting a normal project file, you select an executable as your project, what you'll get is just this little mini project, and the only thing it can do is launch that executable for debugging.
F
So what you can do is select the dotnet.exe as your executable, and then you can go into the project property pages and set whatever arguments you need to pass; you probably need to pass the path to the managed DLL that has your code, and maybe you need some other arguments, depending on your app. And then the other thing you can do is set the type of debugger that you want to run by default.
F
If you point it at a native executable, which dotnet.exe is, you would get native debugging, but there's a little drop-down in the properties page, and you can say you want CoreCLR debugging instead, or full-framework debugging, you know, if you're doing that, or whatever. Anyway, once you set those options, you can just hit F5, and debugging should basically work how it normally does. That's...
A
It's like an ILSpy and a debugger in one, too.
A
A little bit. I use it mainly as a disassembler, but I was curious, so I started it as a debugger; it did work, but I didn't really use it to debug a real problem yet, so I don't know what its capabilities are. Okay.
F
I was just poking around, by the way, and, I haven't even read the full content of that link, but it seemed to be a description that matches what I was suggesting to do: sort of creating these self-contained deployments; and then, beyond self-contained, there's also the trimming feature that tries to remove assemblies that are not used, and it looks like it has both of them described in this little tutorial here. Come on... and I didn't get to the bottom
F
to see if it talks about... no, it doesn't talk about debugging it, but that's fine. Anyway, yeah: if you do File, Open, Project and pick dotnet.exe as your executable, you get what Visual Studio just calls an EXE project, and then you can debug it.
A
I was planning to just build the application so that it requires me to press Enter, attach a debugger, and then press Enter, or something like that.
I
Hey, this is Joe. I have one general question. We've been talking mostly about tracing, and because of that we are talking mostly about the DiagnosticSource package and Activity. I was wondering if there are plans for supporting logging, through either EventSource or ILogger, and, if yes, whether we'd have the same versioning issue with the Microsoft.Extensions.Logging package as well. Are these packages also guaranteed to never break backward compatibility, even across major versions, or is that guarantee only for DiagnosticSource, it being a special thing, or what?
A
So we go out of our way to not have dependencies, and one of the techniques that we're using is something we call vendoring in. Essentially, if the license permits, we copy the source code of the open-source project into our repo and build it. So now the functionality becomes, from the runtime's perspective, no longer an assembly that can have a versioning conflict, but just part of our assembly.
A
I think those are two separate questions: one is for integrations, and the other is for our own logging. For integrations,
A
it's the same as we do for any library which we instrument: we change the code, and then we invoke things dynamically in that library, as the original method does, to do our own logging.
I
So my question was more like: when you eventually support logging, also as part of the auto-instrumentation, will we not have the same versioning issue as with DiagnosticSource? Because users may be using a particular version of Microsoft.Extensions.Logging to do their logs, and the profiler, or the auto-instrumentation, tries to load its own version.
I
So my understanding is: auto-instrumentation is trying to collect traces from an application without requiring the user to modify their code. Similarly, are there plans to add logging information, logs, for an application without changing the user's application code?
A
So what we do, and others would know the implementation details better, but on a high level: we discover whether the application uses one of the common logging libraries, and if it does, we perform bytecode instrumentation to add information about the current span into the log engine. The application needs no changes, but suddenly the user's logs have an additional field in their structured logging which describes the span and the trace ID, and then, at query time...
I
Like, very soon? Because I'm trying to introduce logging in the SDK repo, and I was wondering what the plans are for auto-instrumentation to gather those logs and send them to, like, one exporter.
A
No, that is not something we are actually trying to do. We are not trying to say: dear customer, your logs will go to a different destination than they otherwise would. That's something we could do on a separate level, but it's not something we're trying to attack right now. We're saying: you log whatever you were logging before; maybe you were logging to a file, maybe you were logging to Splunk, maybe you were logging to Application Insights, maybe you're logging through...
A
Custom metrics, or auto-collected metrics?
A
Yeah, we already do, but essentially all those metrics... you can look at them as an optimization of the trace information we're collecting, right, because we already collected the spans and the traces, so their duration is the response time; essentially, the response time is, you know, the duration of the root-level spans, right.
A
Of course we want to extract that information early, so that, you know, it works well with sampling and whatnot, but it still needs to come from the spans; we need to first collect the spans before we can extract this information. Is that what you meant, or something else?
I
Yeah, something like that. Because, in general, we are trying to cover three pillars: traces, logs and metrics. In the SDK, traces and metrics are already there, and logs are being worked on. So I want to see whether auto-instrumentation is also covering these three pillars independently, tracing, metrics and logging, because mostly I've been hearing that we are focusing purely on tracing at this stage, and that's why all the talks are about, like, Activity and DiagnosticSource.
A
So, for... and this is sort of my view on it, so this is definitely up for discussion. For metrics, there are three kinds of metrics. One is custom metrics; that's when the customer... damn, I just left there. Actually, hold on a second.
F
I guess a combination of two things. One is that, in this case, it's the profiler emitting the data, but we actually want the app to be the thing that's collecting it; as in, the profiler is putting activities into the chain of activities, but then they might get emitted somewhere else, like through the logs. So it's kind of this reverse...
F
Even if we modify an existing logging statement to add more data, just the fact that there already was a logging statement there means that the app already has a dependency, and thus any analysis of the IL, or analysis of the assembly, will tell us exactly what that dependency is; and it will ensure that the dependency is already loaded at the time that the profiler runs that code. And so all of this work to sort of load a dependency in advance, or guess what the appropriate dependency to use is, doesn't show up in that case.
A
Yeah. And to answer CJ's question about the pillars: so, for metrics, with the auto-instrumentation we're sort of not in the game of custom metrics; whatever the customer does, they do, right. And for auto-collected metrics, there are two things we want to collect. One is runtime performance counters; the tracer should do this. It currently doesn't, and, you know, if someone wants to contribute this, that would be great, and eventually it certainly will happen.
A
You know, it should listen to the metrics emitted by the runtime, and to the common application-server metrics for the respective platforms: if it's Windows, it would be IIS; if it's Linux, it would be whatever is there. So the tracer should collect those performance counters as metrics and export them through the appropriate exporter.
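On .NET Core 3.0+, listening to the runtime-emitted counters could be sketched roughly as follows; this is an illustrative `EventListener` for the "System.Runtime" EventCounters, not code from any of the tracers discussed, and how the values are then mapped and exported would be up to the tracer and vendor:

```csharp
using System.Collections.Generic;
using System.Diagnostics.Tracing;

// Sketch: subscribe to the runtime's built-in counters (cpu-usage,
// gen-0-gc-count, working-set, ...) at a one-second interval.
internal sealed class RuntimeCounterListener : EventListener
{
    protected override void OnEventSourceCreated(EventSource source)
    {
        if (source.Name == "System.Runtime")
        {
            EnableEvents(source, EventLevel.Informational, EventKeywords.All,
                new Dictionary<string, string> { ["EventCounterIntervalSec"] = "1" });
        }
    }

    protected override void OnEventWritten(EventWrittenEventArgs eventData)
    {
        if (eventData.EventName != "EventCounters" || eventData.Payload == null)
            return;

        // Each event carries one counter snapshot as a property bag,
        // e.g. payload["Name"] == "cpu-usage", with "Mean" or "Increment".
        var payload = (IDictionary<string, object>)eventData.Payload[0];
        // ...hand payload["Name"] and its value to the metrics pipeline here.
    }
}
```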
A
So that's one thing, and then the other thing is the metrics that are the result of collecting tracing information. For that, we collect the spans, we give them to the exporter, and then the vendor needs to decide: some will extract the metrics right away, some vendors will use an out-of-proc agent to extract the metrics, and some vendors will do a combination of those two.
I
I see, yeah. So one of the things I was trying to do in the SDK was: traces and metrics are completely independent pillars; you can enable traces without metrics, or vice versa. If you just enable metrics and completely disable tracing, you still get the common metrics, like HTTP request time, response-time average, things like that.
A
We may consider it in the future; I think it's true, let me think about it. So it's conceivable that the tracer could say: we do some auto-instrumentation, and then we just collect response times and nothing else, and that's it. It would be difficult, and I'm not exactly seeing the full point of it, because if you just did that, then you lose, like, context propagation.
A
In that case, why don't you just collect these performance counters from your service instance? I'm not exactly sure why the SDK needs to do this. It seems like a solved problem that a customer can already address using other tools. Why would they...
I
It's already documented in the OpenTelemetry semantic conventions for metrics. But if you just read, like, some performance counters from Windows, or EventCounters cross-platform, you don't get any of these dimensions, so it won't match what OpenTelemetry is saying, unless you do a similar thing as with tracing and extract it.
A
I see. I think in the long term it makes sense to consider it. In the immediate term, the pragmatic thing is that the tracer by itself is not... like, when you look at the SDK, right, you make it a part of the application, and it will run somewhere, and you're using some tool.
A
What you're saying makes complete sense. However, for the auto-instrumentation agent, it never exists by itself; it always exists as a part of some vendor solution.
A
You don't make it part of your application to deploy somewhere; you do it as a part of, I don't know, a New Relic subscription deployment monitoring thingy.
I
Okay, yeah, it's something which we could continue at a future time, because it's not yet in the SDK either; we are still collecting requirements and things like that. But anyway, thanks; a good discussion. We'll continue it in the future.
D
Yeah, Greg, my assumption with the auto-instrumentation is that we would likely want to generate metrics that follow the OpenTelemetry conventions, and that would be the default; and then, for any vendor-specific version of this auto-instrumenting agent, there would be some hook to transform the data into whatever format it needs to be in for that vendor to consume it.
A
All right, that's true, because you don't collect metrics using the OpenTelemetry convention; you export them, right. How you collect them in-process is sort of irrelevant, as it probably should be done in the most performant way, which is probably not what the tracer does right now; I doubt it does it in the most performant way, but it should in the long term. But then, when we actually export them, then yes, you're right; we should support that.
D
Yeah, that's a fair statement, because out of the box we need to support exporting to whatever the standard open metrics consumer is.
A
This is the way you want to export metrics. So, unfortunately, I've worked with metrics a lot, and the difficulty there is that you cannot completely separate this stuff: the way you aggregate them and the way you serialize them is not completely independent. So you will have a problem if some vendors require, like,
A
One
type
of
aggregation,
some
vendors
require
a
different
type
of
aggregation
and
you
cannot
basically
when,
if
you
try
and
make
it
completely
generic,
you
end
up
with
a
super
super
super
complicated
system.
So
I.
I
I think we're going to get that covered by the OpenTelemetry specification; aggregators are a plug-and-play model. You decide what aggregation you want to apply for the metric. But it's still evolving in the OpenTelemetry world; the assumption is it would be evolved and GA by the end of this year.
A
I think what will naturally happen is, and I'm guessing, we'll see, this is just an educated guess: we will solve the tracing problem on a basic level with the Activity work, and then people will discover, okay, now we need to have exporters.
A
The exporters will likely look very similar, but not completely identical, to the SDK's, because of different requirements around how you configure them. But an exporter essentially is: you give the exporter a bunch of spans. In the future we might do something more advanced with metrics, but in the first iteration you will be having exporters, and you give the exporter a bunch of spans.
A
Now, the exporter is vendor-specific, and it makes a choice. Either it takes all of these spans, serializes them, and sends them somewhere, deciding that the next stage in the pipeline extracts the metrics; or it says, I actually want to do downsampling right now, because it's more performant, and because of that I will extract some metrics right now and send them on. Either way, after extracting the metrics, it'll be like, okay...
A
Now
that
I
have
extracted
them,
I
want
I
need
to
send
them
somewhere
now
the
thing
is
you're
already
in
the
exporter
code,
so
you're
already
vendor-specific.
So
now
we
can.
We
don't
need
to
create
this
abstraction
for
for
metric
sending
we
can
just
say
well,
if
you're
in
the
splunk
exporter
you
do
it
one
way.
If
you're
in
a
data,
docker
sport,
you
do
it
in
another
way.
A
So
that
means
the
first
iteration
of
this
does
not
require
us
to
solve
this
problem,
but
then
subsequently,
when
we
want
to
say
okay,
we
want
to
extract
some
metrics
from
all
spans
that
may
where
it
makes
sense
to
do
this
before
the
serialization
stage.
Maybe
earlier
on
once
we
hit
that
that
actual
practical
challenge
for
the
first
time,
not
theoretically,
it
might
be
needed
in
the
future,
but
we
actually
need
to
extract
some
auto
collected
metric
before
the
sterilization
state.
I
So, based on my knowledge, it should work slightly differently, because it won't be given to the span exporter. There is a separate thing in OpenTelemetry called a metric exporter. It works completely independently of tracing; that's where you send your metrics. Very similar to the span processor and span exporter, there is an equivalent pair called the metric processor and metric exporter, so it'll be a parallel pipeline. It's not like you can do extraction of metrics in the span exporter.
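[Editor's note] The parallel metrics pipeline described here is easiest to see with the metrics API that .NET eventually shipped in .NET 6 (System.Diagnostics.Metrics); the meter and counter names below are made up. The MeterListener plays the role that the metric processor/exporter pair plays in the OpenTelemetry SDK:

```csharp
using System;
using System.Diagnostics.Metrics;

class Program
{
    static void Main()
    {
        var meter = new Meter("Example.App");
        var requests = meter.CreateCounter<long>("requests");

        // With no listener attached, Add() is a cheap no-op: measurements
        // are dropped on the floor, mirroring the tracing API.
        requests.Add(1);

        // A listener (the SDK's metric pipeline would sit here) opts in
        // per instrument and receives raw measurements to aggregate/export.
        using var listener = new MeterListener();
        listener.InstrumentPublished = (instrument, l) =>
        {
            if (instrument.Meter.Name == "Example.App")
                l.EnableMeasurementEvents(instrument);
        };
        listener.SetMeasurementEventCallback<long>(
            (instrument, value, tags, state) =>
                Console.WriteLine($"{instrument.Name}: {value}"));
        listener.Start();

        requests.Add(5); // now observed by the listener
    }
}
```

Note how the measurement path never touches the tracing pipeline: the listener decides per instrument what to observe, which is the in-process analogue of the independent metric processor/exporter chain.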
A
OpenTelemetry recommendations are focused on SDKs, right? The SDK has a very reasonable stance that it's part of an application, and the developer of an application should have a whole bunch of flexibility about how they do these things. Yes, exactly. Whereas an auto-instrumentation solution does not necessarily have the same goals, because we have the shared auto-collector, and then we ship from separate, downstream repos, right?
A
So
the
vendor
should
have
lots
of
flexibility,
but
the
user,
like
the
application
owner,
should
not
have
a
lot
of
flexibility
to
configure
the
the
auto
instrumentation.
In
fact,
too
much
flexibility.
There
might
create
lots
of
weird
support
cases.
Of
course,
some
some
flexibility
through
options
is
required,
but
we
are
not
trying
to
allow
the
application
to
programmatically
interact
with
the
auto
instrumentation
in
some
really
advanced
way.
A
So
so
some
of
the
recommendations
of
that
are
meant
for
sdk
doesn't
do
make
sense
and
some
do
not.
I
Okay, makes sense. Yeah, so one final question. Say there is an application which has the OpenTelemetry API as a dependency, and it emits metrics and traces using the OpenTelemetry API; then that application is deployed, and you enable the auto-instrumentation from Datadog or Microsoft or any other company.
A
I think it's a very good question. There are probably different cases that we need to kind of talk through. So for metrics, right: if it's a custom metric, right? Yeah, yes.
A
Yeah, then, and this is just a very spontaneous thought, so I may be missing something, but I would say the auto-instrumentation shouldn't care. If I'm emitting a custom metric as a user, then all the tracer should consider doing is the same as for custom logs.
A
It could go to the API that does the metric emission and add some metadata to the metric, maybe a custom dimension that says something about what we auto-collect. But with metrics that's very, very hard, because metrics are usually not specific to a request; metric aggregation usually spans requests, so adding trace-level information to metrics doesn't make sense.
A
I would say, well, actually, maybe we should solve it using a real case, because theoretically there are so many solutions that we could engineer a solution that kind of solves everything and nothing, but...
B
Yeah, well, what comes to my mind in this case goes back to one thing that we talked about at a high level in the past, but I think we didn't close on it. Let's consider just metrics for the time being. In that case I would expect us to basically inject the OpenTelemetry SDK and bring the metrics as they are in the application. You know, it's a similar case to what we have for OpenTracing and Datadog.
A
Okay, so let me just repeat: the library emits metrics, and the philosophy is the same as traces. The library does some lightweight emission; if no one is listening, that's very cheap. If someone is listening, then the aggregation happens in the SDK, not the library, right? Yes, right. So you're saying, what happens if the application...
I
Yeah, so it only has the API, so it can do things like start span or start activity, or, for metrics, it says counter.Increment. It does not know whether someone is listening, and it cannot control where the exporter is, because it's an API. There is no concept of exporter or processor or aggregator in the API, and by default, when you run this application, these metrics and traces are just dropped on the floor, because there is only the API. But if you use...
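[Editor's note] The "dropped on the floor" behavior can be seen directly with the Activity API from System.Diagnostics.DiagnosticSource 5.0; the source name below is made up:

```csharp
using System;
using System.Diagnostics;

class Program
{
    static readonly ActivitySource Source = new ActivitySource("Example.Lib");

    static void Main()
    {
        // No listener registered: StartActivity returns null and the
        // would-be span costs almost nothing.
        using (var a = Source.StartActivity("op"))
            Console.WriteLine(a is null); // True

        // An SDK or agent registers a listener to opt in.
        using var listener = new ActivityListener
        {
            ShouldListenTo = src => src.Name == "Example.Lib",
            Sample = (ref ActivityCreationOptions<ActivityContext> _) =>
                ActivitySamplingResult.AllDataAndRecorded
        };
        ActivitySource.AddActivityListener(listener);

        using (var a = Source.StartActivity("op"))
            Console.WriteLine(a?.OperationName); // op
    }
}
```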
A
It
must
be
a
optional
integration
and
here's
unless
I'm
missing
something.
Please
correct
me
if
I'm
wrong,
here's
the
thing
for
the
whole
deal.
The
whole
reason
we
are
dealing
with
all
this
trouble
with
diagnostic
source
is
because
dotnet
provided
us
a
common
obstruction,
so
that
the
tracer
does
not
need
to
take
a
dependency
on
open
telemetry
in
order
to
collect
those
traces,
we
only
take
dependency,
on.net
and
magically
the
api
emits
it
and
we
collect
them
without
having
taken
dependency
on
any
of
the
open,
telemetry
stuff.
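[Editor's note] A minimal sketch of that common abstraction, using only the BCL's DiagnosticSource/DiagnosticListener types. The listener name "Example.Lib" and the event name are made up; the point is that neither side references OpenTelemetry or any vendor package:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

class Program
{
    static void Main()
    {
        // The "agent" side: subscribe to every DiagnosticListener in the
        // process and observe the events it cares about.
        DiagnosticListener.AllListeners.Subscribe(
            new Observer<DiagnosticListener>(listener =>
            {
                if (listener.Name == "Example.Lib")
                    listener.Subscribe(new Observer<KeyValuePair<string, object>>(
                        evt => Console.WriteLine($"{evt.Key}: {evt.Value}")));
            }));

        // The "library" side: emit an event only if someone is listening.
        var source = new DiagnosticListener("Example.Lib");
        if (source.IsEnabled("RequestStart"))
            source.Write("RequestStart", new { Url = "/home" });
    }
}

// Tiny IObserver adapter, since the BCL has no lambda-based one.
sealed class Observer<T> : IObserver<T>
{
    private readonly Action<T> _onNext;
    public Observer(Action<T> onNext) => _onNext = onNext;
    public void OnNext(T value) => _onNext(value);
    public void OnCompleted() { }
    public void OnError(Exception error) { }
}
```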
I
Yes, yes, today. But the plan is that .NET will ship a metrics API, similar to how Activity shipped, in .NET 6. So I'm really thinking one year from now.
A
Where
people
would
be
that
was
if
that
was
the
case,
then
we
would
probably
do
the
same
thing
as
we
do
with
activities.
We
would
collect
this
because
the
the,
if
that
was
the
case,
then
the
api
will,
and
I
would
really
love
to
participate
in
a
working
group
on
the
team.
I
But
like
when
noah
and
like
someone
from
dominating
this
year,
we
can
we
hope
to
have
some
discussions
in
the
next
few
weeks.
So
let
me
come
back
and
ask
like
my
third
question.
So
we
covered
like
traces
and
metric.
Let's
say
like
the
application,
is
using
a
logging
api.
Specifically
it's
using
I
logger
from.net.
I
So when the user is saying log.LogInformation or LogError, it doesn't know whether there is someone listening or not, because that's not a concern of the API. Some SDK has to say this log should go to Splunk or Application Insights or something, but...
I
For a logging statement done using logger.LogInformation, I'm imagining that once the auto-instrumentation agent is installed and started, it should collect those logs and send them to whatever your backend is. Is that something which...
I
It's very similar to metrics. There is a logging API in .NET, very similar to Activity: Activity is for tracers, for metrics there will be a to-be-shipped API, and for logs there is ILogger and EventSource, both coming from .NET. So my question is the same as for metrics: I have an application which is using the logging API, which is still part of .NET itself, specifically ILogger, and I just write log statements, as you said, and then I enable the auto-instrumentation agent.
A
So
so,
let's
think
about
it
together,
because
I'm
not
sure
that
from
a
tracer
perspective,
it's
it's
actually
a
at
least
at
this
stage
of
maturity
or
even
in
the
in
the
like
mid-term
futures.
Some
point:
maybe
it
changes,
but
so
here's
an
application
so
because
the
the
the
tracer
is
not
about
so
an
application
cannot
depend
on
the
trace
that
you
work
correctly
right
that
or
to
work
as
expected.
That
means,
if
I
didn't,
have
a
tracer
and
I
didn't
have
logs.
A
Right,
if,
if,
if,
if
I
have
an
application
that
already
has
logs
and
the
logs,
are
already
ending
up
at
some
place
file,
some
back-end
some
data
store.
I
Yeah,
but
with
the
logger
api,
it's
not
part
of
the
api,
like
you
just
say,
like
logger.log,
you
don't
really
know
whether
it's
going
anywhere
or
it
gets
dropped
on
the
floor.
So
unless
you
configure
some
the
equivalent
of
an
exporter,
I
think
in
I
logger
it's
called
provider,
it
just
dropped,
but
the
thing
is:
it
needs
to
export
this
place.
To
choose
some
place.
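[Editor's note] A sketch of the provider model being described, assuming the Microsoft.Extensions.Logging package. The ListLoggerProvider here is a made-up stand-in for a real sink (console, Serilog, a vendor exporter), which all plug in the same way; without any provider, ILogger calls are no-ops:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Extensions.Logging;

// A toy provider that "exports" formatted log entries to an in-memory list.
sealed class ListLoggerProvider : ILoggerProvider
{
    public List<string> Entries { get; } = new List<string>();
    public ILogger CreateLogger(string categoryName) => new ListLogger(this);
    public void Dispose() { }

    private sealed class ListLogger : ILogger
    {
        private readonly ListLoggerProvider _owner;
        public ListLogger(ListLoggerProvider owner) => _owner = owner;
        public IDisposable BeginScope<TState>(TState state) => null;
        public bool IsEnabled(LogLevel level) => true;
        public void Log<TState>(LogLevel level, EventId id, TState state,
            Exception ex, Func<TState, Exception, string> formatter) =>
            _owner.Entries.Add(formatter(state, ex));
    }
}

class Program
{
    static void Main()
    {
        // No providers configured: this logger silently drops everything.
        using var silentFactory = LoggerFactory.Create(b => { });
        silentFactory.CreateLogger("App").LogInformation("lost");

        // With a provider configured, entries reach a destination.
        var provider = new ListLoggerProvider();
        using var factory = LoggerFactory.Create(b => b.AddProvider(provider));
        factory.CreateLogger("App").LogInformation("hello {Name}", "world");
        Console.WriteLine(provider.Entries[0]); // hello world
    }
}
```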
I
Yes, so I'm just thinking: would auto-instrumentation bring that capability as well? Just like it's capable of exporting traces to the backend, will it bring the capability to do metrics and logs as well? I mean, not today; sometime in the future. I think it's a very good conversation; I just want to hear what others think. Maybe...
D
Yes, I've got some thoughts here regarding logging. For the auto-instrumentation side of it, I think what we could bring to the table is the ability to add span ID and trace ID information to those log messages. But then it would still be the responsibility of the customer, or the developer, or the ops person, to basically say where they want the log information to go.
D
You'd have a library that's using ILogger as kind of the generic interface for writing logs, but that won't go anywhere unless they configure some sort of concrete logger.
D
Whether they're gonna use another library like Serilog behind the scenes, or use the ASP.NET Core console logger. And even then it really depends on where that application is running. For example, if you're running a container in Kubernetes, odds are you're gonna have your logs directed to the console, and then Kubernetes is gonna forward that console log somewhere else, so there could be this external pipeline for actually getting the logs somewhere useful.
I
Okay. What about metrics? I want to hear your thoughts on metrics also.
A
I mean, there's not much contextual information you can add to metrics, because metrics are usually aggregated across requests. So the only contextual information that could be added to those metrics is environment-related.
A
You
guys
are
both
correct,
so
cj,
I
think
you're
describing
a
view.
That
is,
if
you
look
at
years
ahead
rather
than
like
one
year
ahead.
I
think
the
the
the
the
the
challenges
that
or
the
opportunities
that
cg
describes,
are
actually
spot
on.
Imagine
in
theory,
you
have
an
application
that
it
shouldn't
care
about
collecting
like
if
you
think
about
this
today,
you're
running
in
one
container
tomorrow,
you
migrate
your
container
to
some
other
production
environment
whatever
and
your
logs,
you
wanna,
do.
A
That is, in an ideal case, already part of your cloud. Whatever platform you decided to use, whichever vendor you paid, just picks up those logs and says: well, I am the Datadog tracer, so I'm sending them to the Datadog log backend; I'm the Splunk tracer, or the New Relic tracer, so I am sending them to the New Relic backend. In an ideal case, this is probably the right way of doing things. However, the reality of the industry is that today customers set this up independently.
A
Today
there
will
be
a
logging
product
and
a
and
then
apm
product,
and
I
think
this
is
the
case
for
most
vendors.
I
don't
know
any
exceptions.
Essentially
you
go
to
new,
relic
or
splunk
or
data
dock,
and
whoever
and
say
I
want
to
have
my
logs
connected
to
the
cloud
so
that
they
can
create
them
in
sensible
ways.
A
And
then
the
vendor
collects
them
from
whichever
way,
through
an
api
or
from
a
file
or
whatever
was
the
solution,
and
then
it
goes
to
some
back
end
where
logs
are
available
for
query
and
then
that's
it.
And
then
there
is
another
solution
that
you
you,
where
you
have
apm
and
that's
what's?
What
does
auto
instrumentation
should
create
spans
and
traces
and
then
should
send
them
to
the
back
end
and
what
it
also
does
is.
D
Yeah, yeah, I do see things in a similar way, and I do believe that there are use cases where it makes sense for a tracer to be able to, in-process, send logs out to a vendor. But I don't know that that's been a common thing.
A
Solution,
I
think
one
other
thing
also
as
an
explanation
for
this
is
the
historical
development
of
the
industry.
The
the
need
and
the
advantage
of
end-to-end
tracing
and
distributed
tracing
is
something
that
is
relatively
recent.
I
mean
five
years
ago.
It
wasn't
really
a
thing
I
mean
it
already
existed,
but
it
wasn't
really
a
big
thing
in
the
industry,
whereas
logging
is
something
that
existed
and
was
understood
as
critical
for
forever.
A
So
the
reality
is
that,
while
in
a
so
for
microsoft,
tends
to
focus
on
kind
of
how
things
should
work
a
lot-
and
this
is
great
because
it
helps
to
drive
the
industry
into
the
right
direction,
but
for
smaller
companies
like
ours,
we
have
to
look
at
the
reality
of
things
and
the
reality
is
that
most
applications
they
already
have
logging.
They
don't
go
like
we
wrote
an
application
that
does
abstract
logging
and
we
want
it
to
be
auto
magically
collected.
They
say
we're
migrating
from
on-prem
to
cloud
and
we
already
have
logging.
A
D
Yeah, I just want to talk a little bit about the metrics case, because I feel like there's an aspect of it that we haven't discussed yet. Perhaps there are cases where some libraries are already producing metric-like information, and today either the OpenTelemetry SDK itself could hook into that and collect that information, or I could see a case where auto-instrumentation could hook into that and surface those metrics.
A
Yes,
yes,
so
essentially
you're
saying
we
know
that
the
library
produces
essentially
pushes
some
numbers
as
a
metric,
auto
instrumentation
get
injected
in
there
and
steal
those
numbers
so
that
we
can
send
them
to
the
right
place.
Red
chris.
I
Well, since I've looked at it: many .NET libraries now do EventCounters, like ASP.NET Core, Kestrel, gRPC. They all emit some sort of counters using the EventCounter API as well. It's very recent; it started in .NET Core 3.1 only and greatly improved in .NET 5. I have seen a lot of new counters being added in the .NET 5 timeline.
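[Editor's note] A sketch of how an in-process agent can consume those EventCounters with the BCL's EventListener. The "System.Runtime" source ships with .NET Core 3.1+; the one-second flush interval is just an illustrative choice:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics.Tracing;
using System.Threading;

// Subscribes to the runtime's EventCounters. An auto-instrumentation
// agent could use the same mechanism to capture library counters.
sealed class CounterListener : EventListener
{
    public static readonly List<string> Seen = new List<string>();

    protected override void OnEventSourceCreated(EventSource source)
    {
        if (source.Name == "System.Runtime")
            // Ask for counter payloads, flushed once per second.
            EnableEvents(source, EventLevel.Informational, EventKeywords.All,
                new Dictionary<string, string> { ["EventCounterIntervalSec"] = "1" });
    }

    protected override void OnEventWritten(EventWrittenEventArgs e)
    {
        if (e.EventName != "EventCounters" || e.Payload is null) return;
        foreach (var item in e.Payload)
            if (item is IDictionary<string, object> counter &&
                counter.TryGetValue("Name", out var name))
            {
                Seen.Add(name.ToString());
                Console.WriteLine(name); // e.g. cpu-usage, working-set, ...
            }
    }
}

class Program
{
    static void Main()
    {
        using var listener = new CounterListener();
        Thread.Sleep(2000); // wait for at least one flush interval
    }
}
```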
A
And another thing, CJ, from a vendor perspective: while .NET 5 is great and we're all big fans of it, how many applications are really using it? If you wanna say that you're now supporting, I don't know, Redis metrics, then you're in for supporting it on, you know, Framework 4.5.
I
Yeah, I'm not saying that. I was just saying there is a general direction in .NET, in the BCL.
I
It started in 3.1, it's adding more and more things in 5.0, and I expect more to be done in 6.0 and future versions as well. Yeah, yeah.
B
Yeah, and just going back to the auto-instrumentation perspective: it is not our priority in the short run, of course, but we eventually wanna be able to capture those metrics that are part of libraries and frameworks.
I
Yeah, so my only concern, not concern, my only question was: when we talk about OpenTelemetry, the three pillars are distinct and covered separately by specifications, and most of the languages have two pillars covered already, traces and metrics, with logs being planned for early next year for most languages. So that's why I was asking.
I
Is
the
thinking
similar
in
auto
instrumentation
efforts
as
well,
that
you
will
collect
traces
highest
priority
and
then
eventually,
like
do
metrics
and
like
maybe
in
the
future,
like
logs
as
well,
yeah
and
like
in
in
general,
like
like
splunk
like?
Do
you
care
most
about
tracing
part
or
is
like
logs
and
metrics
are
important
for
you
as
well.
B
I
I
think
for
for
for
us
all,
the
three
are
important,
but
if
I
go
for
my
short-term
mandates
or
not
short-term,
but
more
most
immediately
is
traces.
Okay
and
then
we
keep
moving
on
the
stack
and
but
we
do,
we
definitely
wanted
the
three
of
them.
Okay
and
so.
I
So what I hear is: yes, we all care about the three pillars; it's just that they come at a different priority than tracing. Yeah, yeah. The whole context is that I'm working on trying to get a logging solution for the opentelemetry.net SDK, primarily building on top of ILogger. ILogger will be recommended as the logging API, go and use it, and OpenTelemetry will provide an OpenTelemetry logging provider, just like the tracer.
B
Are you bringing this discussion, or any issues, to the .NET SIG directly? I can try to pull in someone that is working with logs.
I
Yeah, we barely started. We just wrote a one-page documentation explaining how to stamp your logs with trace ID and span ID, so we just have a basic doc. But folks, I'm running out of time; we've got to go. Yeah, I think we can continue this discussion at a later time; it's probably too much time for everyone. So I'll come back to it next week or the week after.
B
Yeah, yeah, and I'll try to find someone that perhaps can help with logs. Right now I don't think I'll have any cycles to do that, but...
I
Yeah, both. So the first step is already done: it's documented now in the .NET repo how to stamp your ILogger logs with trace ID, span ID, and trace state based on some configuration; it's a feature provided by ILogger itself. We just documented it in our opentelemetry.net repo. The next step is to have an OpenTelemetry logger provider and pluggable exporters, so that logs can be sent to any backend, just like we can send traces to multiple backends. But I don't have anything concrete yet.
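[Editor's note] The stamping feature mentioned, as it shipped in .NET 5, is ActivityTrackingOptions on the logger factory. A configuration sketch, assuming the Microsoft.Extensions.Logging package:

```csharp
using Microsoft.Extensions.Logging;

var factory = LoggerFactory.Create(builder =>
{
    // Ask the logging infrastructure to add the current Activity's
    // TraceId, SpanId, and TraceState to every log entry's scope.
    builder.Configure(options =>
        options.ActivityTrackingOptions =
            ActivityTrackingOptions.TraceId |
            ActivityTrackingOptions.SpanId |
            ActivityTrackingOptions.TraceState);
    // A scope-aware provider (console, Serilog, etc.) then renders
    // those values alongside the message.
});

var logger = factory.CreateLogger("App");
logger.LogInformation("handled request");
```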
I
Yeah, it's too early for me to give anything complete, but that's what I said: next week or the week after, I will try to bring this up and have more solid discussions on logging. It looks like there are many people asking in Gitter and other places about logging, so yeah, let's discuss it in another week or two weeks' time.