From YouTube: 2020-09-30 .NET Auto-Instrumentation SIG
All right, so we are almost at 15:05, and a few folks are joining right now.

C
I think, instead of starting with the status, let's go with the questions about GA tracker coordination from Eric. We mentioned briefly that we could put some PM work into the milestones and the work that we plan to do, but we didn't discuss it further than that. Do you have any starting point, Eric?
E
Yeah, I just wanted to ask the group what we think the best way to coordinate on this is, so that I can kick it off. I spent some time looking at it, and I looked through the roadmap that's already out.
E
It obviously has a section called roadmap stages, which certainly covers some of the things that we no doubt want to cover before GA, and some stuff that's probably going to land after GA. But I'll just put it out there that I would be willing to take a stab at adding a section at the end of that roadmap document that's kind of the bucket list of what we want for GA. Then, as a team, we can stack-rank it, figure out what makes the cut line, the order, and all that sort of stuff. But I figured I would just start with that document.
C
I think it's a good point for having some high-level discussion. We are still trying to get through the initial steps, and from my perspective you are more than welcome to start by having a quick discussion here. I don't think we can get much further right at this moment, but I would love to see you starting to put your ideas into the document.
C
I don't know if somebody else has thoughts, but my take is that if there is anything glaring on the doc that you think you can resolve relatively quickly, let's do it now. Beyond that, discussing on top of the doc and later putting it on the repo would be a great route, for me.
F
I'd suggest: how about we start a new doc rather than changing the existing one? That way we would have a record of our very early thoughts, and then we can keep it and have another document that shows how we have evolved. That way we would not lose that context.
E
Yeah, I'm fine with starting a new document, but I was thinking of basically just appending to the end of this one, so that we don't lose any of that history and so it can stay in the same document, because I think there is still valuable information in this document. But I'm okay either way, whether we put it in a new document or not.
F
Oh, the same document is fine if you start a new section. What I meant is: let's not change the existing content, because I think Paulo has phrased it very well. So whether you prefer a new section in the existing document or a new document, I don't mind. I just didn't want to change the existing thing, so that we can kind of study it later.
E
Got it, yeah, sounds good. I was thinking the same thing, so I will just go ahead with a new section in that document. At a high level we'd have this list we identified, but then of course we want to turn that into more concrete and actionable items, take it, and create some milestones.
E
In the repository and all that sort of stuff, but obviously that comes later, once we get things agreed upon. So that was really my question. I'll plan on having an initial stab at this, at least to start the discussion, and we can spend some more time discussing it in next week's meeting.
C
Sounds good, sounds good. And of course, feel free to reach out via Gitter, so the community has visibility into any question or suggestion you may have at any moment.
C
Okay, yeah, sounds good. So another thing: in the past we had talked with Mikhail from Microsoft, and a few weeks back we initially had an agreement for him to be a Microsoft maintainer on the SIG. But it seems that James is going to be working more closely with us, so in that sense it makes more sense to have James as one of the maintainers. And since we are a starting group and are still in, let's say, the baby-steps phase...
C
I think it's a good moment to do that, and I'd like to suggest that we go ahead and put James in as a maintainer, unless somebody else has any other suggestion, or perhaps James himself does. I think everyone here has been attending the meetings, so they are already familiar with you. But if you'd like to rehash and reintroduce yourself, just to be sure that everyone is aware of why you are taking that position on the SIG.
G
Sure. So yeah, right now I am technically not working on anything in the application-side space for the codeless scenario, but that is just because we're trying to make our decisions internally. Mikhail is busy with a few other projects, and I think he can't take this on as easily as we first intended.
G
So I think that will leave me in a good spot to help with the maintenance of this project too. Sorry, I've got to step away for just one second; I apologize.
C
All right. So I think during this week, probably today or tomorrow, we'll put you in officially as one of the maintainers. You already have your membership in OpenTelemetry, so we should be good to move in that direction.
F
Yeah, I am all in for that. Microsoft seems to have, at least in this particular planning cycle, some back and forth about their investments into the space. The runtime team is super helpful, and I really appreciate that; the monitoring team seems to have quite a bit of going back and forth.
F
I think it's great if James joins, and the more he contributes the better. If in some long-term future Microsoft decides to put more resources into it, then we can always adjust people accordingly; and if Microsoft decides to put fewer resources into it, then, well, it would be a pity, but we'll run with that as well.
F
So I definitely... I very much welcome James.
G
Thanks, yeah. Sorry about that; my kids ran in at the same time as I was talking. But yeah, exactly: we were trying to figure out our commitments, and I think this is one that I would like to make anyway, regardless of our direction here. Even if we don't do all the work that I was planning on, I think this would be a great way to flex that muscle. So I'm looking forward to it.
C
All right, so I think we can move into the status and try to coordinate what we're going to do for the next steps. I've been working on getting DiagnosticSource building from source. I have that working; I didn't submit a PR yet, because there are a few issues that I'd like to clarify with Greg before I do. Greg and I can take that up after the meeting, or at any other time that's convenient. But that's the first step toward us being able to inspect the ActivitySource.
C
There were also other movements, let's say: Zach created the PR for reorganizing the repo. So before we start to take that in from the Datadog repo, I would like to do a pull to take in the changes before that one.
C
I will do this during this week. Besides that, I don't know, Greg, did you have any new experiments? I saw that you opened issues on the dotnet/runtime repo, yeah.
F
Oh yeah, we had a kind of ongoing discussion with Noah on the .NET issues.
F
So basically, this started with me wondering whether things actually work on 4.5. I think they do, and that makes total sense. So it kind of became a more forward-looking discussion of how the Activity APIs might evolve in subsequent releases, which I think is sort of relevant to OpenTelemetry.
F
I think it's actually very relevant to OpenTelemetry, and from Noah's perspective it makes a lot of sense. I have posted links into the Gitter chat, and I would really encourage everybody to take a look. Most of it is just discussion of what might be important for the Activity APIs in the long term, and one thing that might actually be relevant to implementation is this.
F
In one of the comments, which I can later link into the Gitter chat, I described in detail how I would implement propagation from Datadog's perspective in a world where the tracer uses Activities. The difficulty there was that Activities as a first-class citizen only recognize W3C formats, and many vendors, including Datadog, may or may not switch to that in some long-term future. Right now we use an internal format, it's the same across all languages, and in the short term we certainly cannot switch anything there.
F
So that particular part of the longer conversation was about: how would I implement a situation where I am receiving requests that may contain W3C headers, Datadog headers, or both?
C
So, from the OpenTelemetry perspective, what I'm seeing is that people are coalescing on supporting, as kind of official, at least B3 and W3C.
F
Our span IDs are 64-bit, just like W3C's, and our trace IDs are also 64-bit, while a W3C trace ID is 128-bit. For us it's relatively easy: we will just take the lower 64 bits, trusting that whoever originated this ID really made it a random one; we're just reducing entropy there.
F
There is no important information that we're losing in the upper 64 bits. For other vendors who also have this limitation of being a little shorter, the strategy would work. However, the discussion went into a hypothetical case: what if there is a vendor who puts some meaningful bits into the ID, rather than just having a random string of a certain length?
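The truncation strategy described above, keeping only the low 64 bits of a random 128-bit W3C trace ID, can be sketched as follows. This is a language-agnostic illustration in Python, not code from any tracer; the function name is mine.

```python
def truncate_trace_id(w3c_trace_id_hex):
    """Map a 128-bit W3C trace ID (32 hex characters) onto a 64-bit
    vendor-internal ID by keeping only the lower 16 hex characters.

    This only loses entropy, not information, provided the upstream
    service generated the ID randomly, as the spec recommends.
    """
    if len(w3c_trace_id_hex) != 32:
        raise ValueError("W3C trace-id must be 32 hex characters")
    return int(w3c_trace_id_hex[16:], 16)  # low 64 bits

# The upper half is discarded; the lower half becomes the vendor ID.
full_id = "4bf92f3577b34da6a3ce929d0e0e4736"
print(hex(truncate_trace_id(full_id)))  # 0xa3ce929d0e0e4736
```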
F
I think that is not fully supported by Activities as they are, but at the same time I couldn't point to an actual real-world vendor who is affected by this problem. So this was kind of a hypothetical discussion, although a very interesting one; thank you, Noah. The difficulty is that OpenTelemetry might support B3 and W3C as propagation formats, but the reality is that vendors who actually offer monitoring solutions have their own back ends.
F
We have a certain ID format and it's not going to change, except maybe in some long-term future. And this is true for everyone: I'm sure New Relic is in the same boat, it's true for Microsoft, and I bet Splunk is as well. So whatever propagation format we're using, let's separate the format of the ID from how it's propagated.
F
These are two different things. Propagation is easier to change, although it always needs to be backward compatible, because waiting until customers pick up all the new tracer versions can take forever. So right now Datadog is using only Datadog-specific headers for .NET. I would like to switch to supporting both W3C and Datadog, in a way that when we are calling downstream services we always attach both headers, and when we are receiving calls from upstream...
F
...we will look at both the W3C and the Datadog IDs. If they contradict each other, that is, if the lower 64 bits don't match, we will always prefer the Datadog ID, thinking: we don't know where the other information came from, but it contradicts our universe, so we cannot propagate it. If they both agree, then we will take the lower 64 bits for our internal universe, but as we call downstream services we will propagate all 128 bits that came in through W3C.
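The reconciliation rules Greg lays out can be sketched like this. It is a Python illustration of the decision logic only, under the assumption that the W3C ID arrives via the `traceparent` header and the Datadog ID via `x-datadog-trace-id`; the function name is mine, not actual tracer code.

```python
def resolve_incoming_trace_id(headers):
    """Reconcile incoming W3C and Datadog trace IDs.

    Returns (internal_64bit_id, w3c_trace_id_to_propagate):
      * only one format present -> use it;
      * both present, but the low 64 bits of the W3C ID differ from
        the Datadog ID -> trust the Datadog ID, drop the W3C one;
      * both agree -> use the low 64 bits internally, but keep the
        full 128-bit W3C ID to propagate downstream.
    """
    w3c = headers.get("traceparent")        # "00-<32 hex>-<16 hex>-<2 hex>"
    dd = headers.get("x-datadog-trace-id")  # decimal 64-bit string
    w3c_hex = w3c.split("-")[1] if w3c else None
    dd_id = int(dd) if dd else None

    if dd_id is None and w3c_hex is None:
        return None, None                   # no context: start a new trace
    if dd_id is None:
        return int(w3c_hex[16:], 16), w3c_hex
    if w3c_hex is None or int(w3c_hex[16:], 16) != dd_id:
        return dd_id, None                  # contradiction: trust Datadog
    return dd_id, w3c_hex                   # agreement: keep both
```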
B
So, Greg, I want to say that the AWS team submitted something to the SDK repo about wanting to do something more meaningful with the trace IDs, and I think there are details in some of the pull requests or issues in that repo that might help with some of this discussion.
B
And then, as far as New Relic is concerned, we've pretty much adopted the W3C format, and during propagation, for backwards compatibility, we send both the W3C and the New Relic headers. But our preference is to accept the W3C headers first, with the New Relic headers there just for backwards compatibility.
F
I think long term it would actually be good to suggest this for Datadog as well. Realistically speaking, I can only make even preliminary commitments, or share the strategy, for .NET specifically, and then just try to influence the other languages.
F
I think what you say makes sense, but in the short term I don't know how well this can be prioritized. So my goal is: for .NET in Datadog, I want to play as nicely as we possibly can with W3C propagation, because I want to support a customer having a mix of different solutions. But I don't want the success of this part to depend on all the other parts of Datadog taking up this project.
F
So I think we need the bits that take the information out of the headers and put it into the Activity on incoming requests, and do the reverse on outgoing ones. In our tracer, like in the OpenTelemetry case, it should be pluggable; not as a customer configuration, but as we build and ship out of our own repos we should be able to slot in a different implementation of a certain interface, or something like that. So we can definitely do it slightly differently.
F
As
long
as
the
entity
end
system
supports
both.
So
as
long
as
datadog
doesn't
have
the
the
it
doesn't
prioritize
completely
switching
through
w3c,
I
think
we
would
do
something
like
I
described
in
the
long
term.
It
would
be
good
if
we
can
consider
if
we
can
move
to
supporting
w3c
as
first
class
and
data
dog
headers
only
as
a
backward
compatibility
thing
I
just
like
you
know.
F
I think there is a road there. But if X-Ray wants to start making parts of the ID significant, that would be a big problem, I think, for all vendors. Because even if we completely propagate W3C: switching to propagating W3C first is feasible for a vendor, but changing our entire back end is probably going to be cost prohibitive, because the business question would of course be...
F
It's a lot of work, a lot of cost, a lot of time. What customer scenarios would be blocked if we don't do it? And I cannot think of any, as long as we still propagate things correctly. So making the ID anything other than random would be, I think, extremely dangerous.
F
This would essentially make vendors like us reconsider whether we even want to play the W3C game at all, because it would essentially mean we would completely ignore this; we would then not be able to use important information. I think W3C has a provision for putting flags and meaningful bits into the flags part of the header, and that, of course, we can pick up and deal with. But the ID itself would be really dangerous to make non-random.
C
Yeah, I think they need to bring this up at the level of the OpenTelemetry spec before they can address their issue, because a lot of the code that exists right now is based on the ID being random, and I mean OpenTelemetry code; I don't mean any vendor-specific code. OpenTelemetry itself has a lot of that. So I think they need to bring this to the spec level of OpenTelemetry, yeah.
F
There
is
a
if,
if
you
guys
have
a
link
to
this,
I
really
would
appreciate.
Then
we
can
comment
and
say
like
please
consider
this.
This
aspect.
F
Thank you. So anyway, that's the propagation part of it. In terms of the other work that I've done so far, let me just quickly report on it. By the way, this week and next there will be less progress, because I'm also DRI, which for us is... we don't have on-call, the phone never rings, but there is always a primary person to address customer issues, and currently I'm on that rotation. So I'll be making somewhat less progress on the coding.
F
For Paulo, regarding switching to the more robust system: I have created a whole bunch of reflection wrappers, and created a structure for the whole library. So once we load the library, essentially... I'm using expressions.
F
So if you want to create an Activity from an integration library, you will call a class called ActivityStub. Using cached reflection delegates, it will create an Activity instance and return you a struct that wraps that instance, and that struct will have APIs that stub the most important members.
F
SetTag, GetTag, and a bunch of other things; and under the covers they will use this code, emitted using expressions, and cached delegates to call into the Activity APIs in whichever library we loaded.
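The shape of that stub, resolving the loaded library's members once per type, caching the accessors, and exposing a small fixed surface, can be illustrated in Python. The real implementation would compile .NET expression trees into cached delegates; `ActivityStub` is the name used in the discussion, but the members shown here are illustrative.

```python
class ActivityStub:
    """Thin wrapper over an activity object whose concrete type is
    only known at runtime (whichever DiagnosticSource got loaded).

    Accessors are resolved once per wrapped type and cached, so the
    per-call cost is a dictionary hit plus a direct call: the same
    idea as caching compiled reflection delegates in .NET.
    """

    _accessor_cache = {}  # type -> {"add_tag": callable-or-None}

    def __init__(self, inner):
        self._inner = inner
        tp = type(inner)
        accessors = ActivityStub._accessor_cache.get(tp)
        if accessors is None:
            # Resolve the library's members exactly once per type; in
            # .NET this is where expressions would be compiled.
            accessors = {"add_tag": getattr(tp, "AddTag", None)}
            ActivityStub._accessor_cache[tp] = accessors
        self._accessors = accessors

    def set_tag(self, key, value):
        add_tag = self._accessors["add_tag"]
        if add_tag is not None:  # tolerate versions without AddTag
            add_tag(self._inner, key, value)
        return self
```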
F
Later on there will be some trickiness around the fact that we might switch the library we're using, in that scenario where we first loaded the vendored copy and then the application later loaded the right one. There we might actually fail making some calls; we need to catch the right exceptions, and we will lose one or two traces in that case, but that's okay.
F
I
think
that
should
be
a
declared
limitation
and,
and
then
there
is
a
whole
bunch
of
work
around
a
compatibility
issue
that
if
the
application
loads
a
new
activity
source
version,
then
everything
is
good.
F
If,
if
sorry
yeah,
I
don't
know,
my
connection
seems
to
be
okay
so
far.
But
please
interrupt
me
if
you
can
hear
me
so
one
one
difficulty
that
I
discovered
that
delays
the
whole
additional
work
is
ids.
So
when
we,
when
the
vendor
or
a
recent
version
of
diagnostic
source,
is
loaded,
then
we
can
use
w3c
ids
and
everything
is
good.
F
However,
if
either
the
application,
for
whichever
reason
just
forced
the
use
of
of
hierarchical
ids
or
it
is
an
older
version
of
diagnostic
tools
that
can
only
use
hierarchical
ideas
and
by
the
way
I
prefer
for
the
system
to
be
as
little
disruptive
to
the
application.
So
if
the
application
opted
into
hierarchical,
ids,
I'm
sort
of
sticking
with
it.
F
So
in
that
case
we
cannot
rely
on
the
on
the
fact
that
there
is
a
a
like
32
64
30
to
16
bytes
version
of
an
id.
Instead,
there
is
a
root
id
like
it
used
to
be
an
activity,
and
then
there
is
a
arbitrary
length
activity
id
which
is
like
span
id,
and
so
that
is
an
additional
problem
we
need.
Actually,
this
is
the
way
I
wonder
about
your
feedback
guys.
We
need
to
the
mocked
api,
the
one
that
says
get
trace
id
get
span.
F
...needs to work on top of both cases. So what I'm thinking of so far, not implemented yet but the next thing to implement, is: if we have a hierarchical ID, then for the trace ID I will just use whatever this thing gives me as the root ID. It will be shorter, because that's what Activity used to generate, but that's what we will have. And then for the span ID, I need to hash the activity ID in some way. Not using the .NET hashing, because it's not persistent, not the same across processes; instead, build some very lightweight hashing algorithm into the library that will hash the complete activity ID and use that as a span ID.
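One lightweight, process-stable hash of the kind described here is FNV-1a, sketched below in Python. This is an illustration of the idea, not the algorithm actually chosen; unlike .NET's `string.GetHashCode`, which is randomized per process, the result is stable across processes and machines.

```python
# 64-bit FNV-1a constants
FNV_OFFSET_64 = 0xCBF29CE484222325
FNV_PRIME_64 = 0x100000001B3

def hash_hierarchical_id(activity_id):
    """Map an arbitrary-length hierarchical Activity ID, for example
    "|a000b421-5d183ab6.1.", onto a stable 64-bit span ID.

    FNV-1a is deterministic across processes and machines, unlike
    the built-in .NET string hash, which is randomized per process.
    """
    h = FNV_OFFSET_64
    for byte in activity_id.encode("utf-8"):
        h ^= byte
        h = (h * FNV_PRIME_64) & 0xFFFFFFFFFFFFFFFF  # keep 64 bits
    return h
```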
H
So my limited understanding of what Application Insights did is that they just set the static property, such that if DiagnosticSource was capable of supporting W3C IDs (not all versions are, but the recent ones do), they would always force it to be W3C IDs. And I'm not aware that they ever told me of any problem that created in the apps they used it on. It's possible there were problems and they didn't tell me.
F
But
there's
two
two
two
caveats
for
this:
I
think
first
and
to
correct
me
from
instead
only
the.
H
So
if
the
default
is
w3c
in
three,
it
was
supported,
but
not
the
default.
F
Okay,
however,
the
thing
is
that
these
are
two
related
but
separate
questions.
One
is
from
which
version
we
can
do
this
and
because.
F
Either way we have to support, in some way, the versions that do not yet support it, correct? So my question still remains.
F
You
guys
this
approach
makes
sense
for
you.
The
second
is
application
inside
the
library,
so
they
will,
if
they
force
it.
The
customer
will
notice
this
in
their
testing
environment
in
their
production
in
their
kind
of
deployment
environment.
The
fact
that
if
it
breaks
that's
when
it
will
break,
however,
in
it
with
a
tracer,
we
will
essentially
force
the
change
silently
in
a
production
environment.
H
Application Insights does have a codeless instrumentation scenario, where they use a tracer-like agent to just grab their SDK and forcibly inject it into the customer's process without the customer having directly referenced it at build time. That scenario may not be as broadly deployed as, let's say, Datadog's is, but it does exist.
H
As
long
as
the
diagnostic
source
supported
it,
their
scenario
may
not
be
as
broadly
deployed
as
data
dogs.
I
can't
speak
to
that.
It's
possible
that
you
know
you
know
broader
usage.
There
would
have
been
more
issues.
It's
also
possible
that
there
were
some
issues
and
they
just
didn't-
tell
me
about
it.
So
I
yeah.
I
can't
give
you
anything
for
certainty.
I
can
just
sort
of
tell
you
that
they
did
take
some
steps
in
that
direction
and
I
didn't
hear
any
blow
back
from
it.
F
So
so
I
think
that
if
we,
if,
if
our
because
we
have
to
have
some
work
around
for
all
the
versions,
if
that
workaround
is
slightly
less
performant
and
I
would
prefer
to
be
in
keys
of
dart,
I
would
prefer
to
be
on
the
side
of
reliability.
F
So
so,
if
we
have
a
workaround
that
is
slightly
less
performant
and
we
notice
it
and
we
advise
your
customers
to
remove
this
performance
impact,
set
up
some
configuration
flag
that
will
make
us
force
the
w3c
id.
I
think
that
would
be
a
safer
approach.
F
The
thing
is,
we
still
have
to
somehow
support
the
older
diagnostic
source,
either
way.
F
So my question to the other vendors is this. One approach that I was considering is to do what I just described inside of this ActivityStub shim. That means, when you have your integrations, your vendor-specific code that deals with headers and whatnot, you just ask the ActivityStub to give you the ID, and you already get the result of the shimming...
F
Basically,
the
result
of
this
hashing
that
I
just
described
an
alternative
is
to
say
that
I
will
give
you
the
id
as
is
essentially,
if
it's
w3c,
then
you
will
get
the
the
normal
thing.
If
you,
if
your,
if
the
activity
was
a
hierarchical
activity
and
you
ask
for
trace
id,
you
will
get
the
same.
F
You
still
have
the
same
api,
so
you
don't
need
to
like,
but
the
the
return
of
the
get
trace
id
will
be
whatever
activity
that
root
id
returns
and
the
return
of
activity
dot
get
span.
Id
will
be
the
result
of
activity.id.
F
So
that
means
that,
depending
on
the
diagnostic
source
version
loaded
at
the
time
and
potentially
on
the
id
settings,
the
response
of
the
of
these
apis
get
trace
and
id
and
get
a
span.
Id
will
always
be
a
string,
but
it
will
be
a
string
with
a
different
format
depending
on
the
version
of
the
library
question
is
what
do
you
guys
prefer.
F
Like
if
you
prefer
to
it
to
be
less
magical
and
you
get
whatever
the
api
does
then
they'll
be
like
then
for
data
doc
I'll
do
an
extension
method
that
wraps
around
it,
that
that
would
be
like
a
get
dated.
Okay,
do
you
think,
and
that
would
do
the
hashing
based
on
you
know
the
the
smart
smart
version?
What
what
is
your
preference.
C
My
connection
broke
a
few
times,
but
if,
if
I
get
right
from
my
perspective,
you'll
be
to
always
return
the
the
the
most
generic
one
that
is
going
to
to
be,
as
I
understand
the
specifications
kind
of
the
wtc,
because
they're
128
bytes
a
bit
and
it
could.
Even
if
you
are
using
some
specific,
like
64-bit
for
data
dog,
it
could
be
embedded
on
that.
You
know
you
know.
F
That
that
that's
for
sure
I
mean
not
so
much
deadlock
versus
w3c,
but
what
what
how
much
magic
we
should
do
for
the
case
that
activity
actually
has
a
hierarchical
id
rather
than
the
w3c
id
right.
F
That's
the
question,
so
I
could
either
give
you
the
actual
activity
id,
which
is
the
long
one,
the
hierarchical
thing,
and
then
you
and
your
vendor-specific
code
hash
it
in
some
way
to
get,
or
I
actually
have
a
hashing
logic
based
in
in
the
in
the
stubbing,
and
we
like
and
always
give
you
as
32-bit.
C
At
first,
I
would
prefer
the
magic
to
see
the
w3c,
because
I
think
that
is
the
new
default.
Is
that
the
direction
of
the
future?
So
I
always
look
to
go
to
that.
Okay,
there
is
the
past
where
there
was
the
hierarchy
was
the
default,
but
we
are
moving
away
from
that
anyway,
even
in
their
own
time,
so
I
I
think
we
should
behave
as
the
direction
of
the
future.
You
know:
okay,.
B
Yeah, I think it would be easier for each vendor not to have to be aware of what specific version of DiagnosticSource is being used at the time in order to know what type of ID they're getting back, and to just have that logic in one place. And perhaps, if there are performance concerns or trade-offs to be made, maybe that can be on an opt-in basis.
F
And then, Datadog guys: once we generate the ID, we generate it as a 64-bit number, but once generated, we always treat it as a string; we never keep it as a number. Is this correct?
A
No, it's actually treated as a number, and we're limited to 63 bits; it's not 64, yeah.
F
Yeah, for historical purposes, okay, sure. And where would you use it as a number?
A
For the MessagePack serialization; it's a long, it's a ulong. So everywhere in the .NET code it's a ulong. Span IDs, trace IDs, these are ulongs everywhere.
F
Keep it as a number... if we invented it, we can keep it around as a number; and if we didn't invent it, that means it came in as a string anyway, and we have to parse it, right? If it's incoming from a header, if it's from upstream, then we parse it as a number.
A
Yeah, yeah. We try to parse it, and I think if it fails we just default to zero, which creates a new root; we lose the distributed propagation, yeah.
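That fallback behavior, trying to parse the upstream header as an unsigned integer and starting a new root trace if parsing fails, can be sketched like this. The function name is mine and the code is illustrative; the 63-bit limit for newly generated IDs follows the discussion above.

```python
import random

MAX_UINT64 = 2**64 - 1

def parse_incoming_trace_id(header_value):
    """Parse an upstream trace-id header as an unsigned 64-bit integer.

    On any failure (missing, non-numeric, zero, out of range) we fall
    back to a fresh random ID: a new root trace, which means the
    distributed link to the upstream caller is lost.
    """
    try:
        value = int(header_value)
        if 0 < value <= MAX_UINT64:
            return value, True  # joined the upstream trace
    except (TypeError, ValueError):
        pass
    # New IDs stay within 63 bits (positive int64); "or 1" avoids zero.
    return (random.getrandbits(63) or 1), False
```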
F
Okay, okay, sounds good. And Noah, do you have any feeling about the performance overhead of this? There is a ConditionalWeakTable, or whatever the class is; essentially, it allows me to attach a new property to an arbitrary class, right? Is this very horrible performance-wise?
H
So, well, there are different aspects of performance that get interesting there. There's the performance of the dictionary lookups themselves, which I would assume have a little bit more overhead than your typical dictionary. But I mean, probably, you know, we're still talking...
H
I
don't
know
50
nanoseconds
or
something
like
you
know
like
these
are
not
big
times
so,
and
it's
probably
tricky
to
do
better
frank
like
if
I
had
to
guess
in
the
in
the
most
recent
versions
of
diagnostic
source.
We
added
the
custom
property
support.
Yes,
yes,
and
that
would
probably
be
more
well
actually
you
should
test
it
out.
H
It
may
not,
even
because
it's
also
a
dictionary,
it's
not
a
it's,
not
a
weak,
a
weak
dictionary,
though
so
it
still
probably
winds
up
being
better,
but
you
can
try
and
see
so.
F
So
the
newer
versions,
the
newer
versions
is
is
is
is
like
I,
I
was
planning
to
essentially
be
smart
about
the
version
if
the
version
supports
custom,
so
I
I
was
going
to
create
an
object
that
is
the
tracer,
actually
the
tracer
extra
information
class,
and
it
has
all
the
properties
like
the
hashed
ids
that
we
have
and
whatever
else
we
need
to
keep
so
that
we
don't
need
to
do
a
dictionary
lookups.
F
But
I
still
need
to
associate
this
particular
object
with
the
with
the
activity
instance
so
for
activities
that
have
custom
properties.
I
was
going
to
put
it
as
a
custom
property.
F
So
that
at
least
we
only
do
one
lookup
for
for
that
object
and
after
that,
it's
immediate
dereference
right,
but
for
activities
that
don't
support
custom
properties.
I
was
planning
to
use
the
conditional
right.
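What `ConditionalWeakTable<TKey, TValue>` provides in .NET, extra state keyed by an object the tracer doesn't own, without extending that object's lifetime, is close to Python's `weakref.WeakKeyDictionary`, used here as a sketch. `TracerExtraInfo` is an illustrative stand-in for the extra-information class mentioned above, and `Activity` is a minimal placeholder.

```python
import weakref

class TracerExtraInfo:
    """Per-activity state the tracer keeps: hashed IDs and so on."""
    def __init__(self):
        self.hashed_trace_id = None
        self.hashed_span_id = None

class Activity:
    """Minimal stand-in for the activity type loaded at runtime."""

# A WeakKeyDictionary entry does not keep its key alive: once the
# activity is collected, the associated extra info can go too. These
# are the lifetime semantics ConditionalWeakTable gives in .NET.
_extra_info = weakref.WeakKeyDictionary()

def get_extra_info(activity):
    """Fetch (or lazily create) the extra info tied to an activity."""
    info = _extra_info.get(activity)
    if info is None:
        info = TracerExtraInfo()
        _extra_info[activity] = info
    return info
```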
H
So, the other thing. I don't think it's going to be an issue in this case, but I'll just let you know about it so you're aware. The ConditionalWeakTable, under the covers, uses a handle type called a dependent handle. So when the GC is walking the references, a dependent handle is basically a handle that refers to two objects, and it says: if object A is alive, then object B should also be alive.
H
It's
like
it's
conditional,
I
mean
it's,
it's
exactly
the
same
semantics
you
get
if
object
a
just
had
a
reference
to
object
b,
but
when
the
gc
walks
these
things,
it
basically
has
to
sort
of
iterate
all
the
objects
that,
like
iterate,
all
the
objects
it
thinks
are
alive.
Then
it
goes
to
this
table
and
iterates
through
these
handles,
which
might
create
some
now
new
objects
that
need
to
be
alive.
H
Any
you
know
based
on
what
was
already
alive
and
then
the
handles
just
said.
Well
also
keep
these
other
objects
alive.
Then
it
has
to
take
those
things
and
go,
explore
them
and
figure
out
what
they
reference.
Then
it
comes
back
to
the
weak
handle
table
again
and
because
now
there's
more
things
that
are
alive
and
it
says:
okay,
I
gotta
walk
this
table
again.
H
Are
there
yet
more
things
and
it
and
it
iterates
that
as
many
times
as
it
needs
to
until
new
things
stop
getting
added
to
the
live
set,
and
so
what
it
can
mean-
which
I
don't
believe
will
apply
in
your
scenario.
If
you
basically
build
yourself
a
linked
list
where
each
item
in
the
list
is
using
a
dependent
handle
to
keep
the
next
thing
in
the
list
alive
or
you
know,
or
anything
like
that,
where
there's
this
long
iterative
chain
of
things
being
kept
alive,
the
gc
performance
to
walk
that
starts
getting
pretty
bad.
H
I
don't
think
it's
going
to
apply
in
your
case
because
I
don't
see
any
sort
of
nested
iterations
you're
you'll
have
a
set
of
activities
that
are
alive.
Each
of
those
activities
may
or
may
not
have
a
wrapper
that
they're
going
to
keep
alive,
and
then
those
wrappers
might
have
a
few
sub
objects
that
they
reference,
or
maybe
they
won't,
but
regardless
nothing
in
that
tree.
That
gets
explored
is
going
to
have
yet
another
round
of
dependent
handles
that
will
expose
yet
more
objects
and
so
on.
H
I would expect that in the first round it will find new objects, which are your wrappers, and then the objects referenced from them in the normal way; and then in the second round I don't expect it to find anything further. So I would expect it will terminate after two rounds, and it probably won't be a big issue.
F
I'm trying not to box it. And actually, very soon I'll have an end-to-end example, so I would really love some feedback and some code review. But I'm trying to never, essentially, box these; maybe at the end, when we collect activities, I'll come up with some need to put these in arrays, but still not box them. So I'd rather...
C
F
It returns this activity struct, and you can copy it as many times as you like. So the struct right now, at least, only contains one object reference to the actual activity, where the static type is object and the runtime type is whatever it is. So creating an instance of this struct is very fast.
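A minimal sketch of the one-field struct being described, with hypothetical names (the real struct and its members may look different): the field's static type is `object`, its runtime type is whatever activity was stored, and creating or copying the struct allocates nothing because the struct itself is never boxed.

```csharp
// Hypothetical sketch of the single-field activity handle described above.
// The field's static type is object; the runtime type is the concrete
// (vendored) activity type. Copying the struct copies one reference and
// performs no allocation, as long as the struct is never cast to object.
public readonly struct ActivityHandle
{
    private readonly object _activity;

    public ActivityHandle(object activity) => _activity = activity;

    public bool HasActivity => _activity != null;

    // Recover the underlying reference without boxing the handle itself.
    public object Raw => _activity;
}
```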
H
F
Exactly, that's exactly what I'm doing in order to prevent things from being boxed. The conditional weak table would be between the activity object itself and the extra information. If I made the struct an object and started passing it around, then I wouldn't need the conditional weak table; I could just attach the information directly to the struct. But I wanted to avoid creating an additional... you know, okay, all right. But then again, in most cases... so I manufacture this.
F
If
it
turns
out
that
this
additional
information
is
only
necessary
in
some
cases,
then
I
think
the
approach
that
I'm
using
is
better,
but
if
it
turns
out
that
this
additional
information
about
activity
needs
to
be
always
there
for
the
tracer,
then
it
might
make
sense,
because
we
still
need
to
create
an
instance
that
contains
that
stuff.
So
if
we.
C
F
H
Right, yeah. So my guess is that conditional weak table is probably the best option you've got, yeah.
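The `ConditionalWeakTable` approach endorsed here can be sketched as below. The names (`ExtraInfo`, `ServiceName`, `ActivityState`) are invented for illustration; the pattern is the one discussed: the table entry keeps the extra state alive exactly as long as the activity object is alive, drops it automatically when the activity is collected, and boxes nothing.

```csharp
using System.Runtime.CompilerServices;

// Sketch of attaching vendor-specific state to an activity object the
// tracer does not own. The names below (ExtraInfo, ServiceName) are
// hypothetical; the pattern is the one discussed above.
public sealed class ExtraInfo
{
    public string ServiceName;
}

public static class ActivityState
{
    private static readonly ConditionalWeakTable<object, ExtraInfo> Table = new();

    // Returns the ExtraInfo for this activity, creating it on first use.
    // The entry is released automatically once the activity is collected.
    public static ExtraInfo GetOrCreate(object activity) =>
        Table.GetValue(activity, _ => new ExtraInfo());
}
```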
F
We
will
do
that
for
sure
once
we're
close
to
getting
this
like
done.
I
think
I
don't
know
how
you
guys
feel
about
this,
but
we
will
run
a
whole
bunch
of
performance
issues
and
if
we
notice
non-trivial
performance
degradation,
then
for
from
the
docs
perspective,
the
whole
activity
effort
will
be
questionable.
The
goal
here
is
not
to
degrade
performance
in
respect
to
what
we
have
today.
C
But then I would say the following: I think we have to do that. But then, what are our alternatives? Are we looking into living with activities, both from instrumentations like the ones that come from the runtime or ASP.NET, and also a second type like the one that exists right now, to kind of try to generate traces from them? Or are you talking about not even looking at DiagnosticSource and Activity?
F
So DiagnosticSource we have to look at, because we need to collect information from ASP.NET and whatnot, so that's a given for sure. In terms of actual activities, it's for us as a community to decide. At Datadog we are convinced that a small performance impact is more important to real-world customers than interoperation with OpenTelemetry standards, because the real-world customers, at the end of the day, want to see traces of their application.
F
C
But just, when you say you need to support DiagnosticSource, you mean the activities generated via DiagnosticSource, right? Okay.
C
That
that
that
seems,
like
a
let's
say,
a
reasonable
fallback
plan.
As
long
as
you
support
the
diagnostic
source
generate
activities,
it
seems
a
a
reasonable
fallback
to
look
into
as
there
we
try
the
experiments
and
if,
as
you
said,
we
have
the
performance
issue,
yeah
yeah,
now
that
that
was
something
actually
that
cross
had
my
mind
as
a
alternative
model.
Instead
of
trying
to
support
directly
activities,
I
mean.
F
The
thing
is
that
we
already
do
this:
it's
like
it's
introduction
right.
The
tracer
right
now
listens
to
asp.net's
core
diagnostic
source
and
transitions
it
to
our
internal
spend
representation.
So
if
the
whole
thing
with
activities
doesn't
work
out,
then
we
would
just
improve
the
performance
of
the
internal
span,
representation.
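The translation pattern described here (listening to ASP.NET Core's DiagnosticSource and mapping its events to an internal representation) looks roughly like the following. This is a simplified sketch, not the actual Datadog tracer code; the observer class name is invented, and events are just recorded rather than turned into spans.

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

// Simplified sketch of the existing pattern: watch for the ASP.NET Core
// DiagnosticListener and translate its events into an internal
// representation (here just the event names, for illustration).
public sealed class SpanTranslatingObserver :
    IObserver<DiagnosticListener>, IObserver<KeyValuePair<string, object>>
{
    public readonly List<string> SeenEvents = new();

    // Called once per DiagnosticListener that exists in the process.
    public void OnNext(DiagnosticListener listener)
    {
        if (listener.Name == "Microsoft.AspNetCore")
            listener.Subscribe(this);
    }

    // Called for each event written to a subscribed listener; a real
    // tracer would open or close an internal span here.
    public void OnNext(KeyValuePair<string, object> evt) => SeenEvents.Add(evt.Key);

    public void OnCompleted() { }
    public void OnError(Exception error) { }
}
```

Hooking it up for the whole process is one line: `DiagnosticListener.AllListeners.Subscribe(new SpanTranslatingObserver());`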
C
Yeah, there are a few follow-up questions about that model, but yeah, that was basically the model that I think most people thought of as an alternative to supporting DiagnosticSource directly.
D
F
We would actually still solve some of the library loading problems. Right now we have a customer case where this is the case: the customer is not using activities, but they are using DiagnosticSource in a very interesting setup, where applications are compiled for one framework version and run on another framework version, and there is a whole mess around it. To solve this we essentially need the same logic for loading the library dynamically, but we need far less logic around the Activity class itself.
F
So
that's
the
the
the
progress
report.
Thank
you
for
the
feedback,
everything
my
is
pushed
into
the
report
and
right
now,
I'm
on
dry.
But
when
I
returned
from
this,
my
next
step
would
be
to
first
do
this
id
thing.
I
want
to
do
this
first,
just
because
I
want
to
discover
whether
there
is
any
hidden
work
there
bringing
in
like
some
dependencies.
What
not
and
then
the
next
step
will
be
to
start
writing
these
reflection,
wrappers
and
there.
F
I
would
like
to
prioritize
writing
reflection
wrappers
for
diagnostic
source,
rather
than
for
activity
itself,
and
the
reason
for
it
is
that,
once
that
is
done,
we
can
actually
ship
this
library
in
a
early
release
and
start
using
it
within
the
existing
tracer
for
listening
to
diagnostic
source.
Right
now
we
actually
on.net
core.
F
If
the
is
not
core
we're
listening
to
the
diagnostic
source
by
loading
it
just
because
it's
it's
with
the
it's
part
of
the
runtime
and
on
full
framework
and
dotnet
course,
scenarios
where
it's
not
like
it's
a
standalone
application.
We
just
don't
listen
to
it
and
that's
that
particular
integration
is
not
enabled
so
once
that
work
is
done,
we
can
start
listening
to
it
all
the
time.
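A hedged sketch of what such a reflection wrapper could look like. The class and member names below are invented, and this is not the actual implementation: the idea is to resolve the `DiagnosticListener` type at runtime and fail quietly when the library isn't present, so nothing forces System.Diagnostics.DiagnosticSource into the process.

```csharp
using System;

// Hypothetical sketch of a reflection wrapper around DiagnosticListener.
// If System.Diagnostics.DiagnosticSource is not available in the process,
// TryCreateListener returns null and the integration stays disabled.
public static class DiagnosticSourceReflection
{
    private static readonly Type ListenerType = Type.GetType(
        "System.Diagnostics.DiagnosticListener, System.Diagnostics.DiagnosticSource",
        throwOnError: false);

    public static bool IsAvailable => ListenerType != null;

    public static object TryCreateListener(string name) =>
        IsAvailable ? Activator.CreateInstance(ListenerType, name) : null;

    public static string GetName(object listener) =>
        (string)listener.GetType().GetProperty("Name").GetValue(listener);
}
```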
C
Yeah, so as I mentioned in the beginning, I will contact you. I have the local build for, let's say, our emulation of DiagnosticSource working. I want to do more tests with that, but I think that we can integrate that with the wrapper to kind of make this progress.
F
I can make an update: with half a day of work I should be able to pick this up instead of the firebase system. By the way, now one question to you, so, last...
F
Yeah
sorry,
last
week
I
brought
in,
I
took
the
dotnet
repo
and
I
brought
in
the
diagnostic
source
sources
into
the
open
telemetry
report.
F
Way
it
did
it
and
now
pablo
made
made
it
work
and
compiled
correctly
the
way
it
did
it.
I
actually
picked
up
the
entire
history
of
like
get
get
history
like
there's
a
bunch
of
guitar
get
commands
blah
blah
blah.
The
problem
now
is
that
it's
it's
it's
useful
information,
I'd
love
to
preserve
this
history.
The
challenge
is
that
the
legal
thing,
what
is
this
called
like?
If
you
look
at
my
screen
here-
is
the
pull
request.
F
I
H
C
F
Yeah,
so
you
see
this
year,
so
essentially
it
says
everybody
who
ever
contributed
to
the
diagnostic
source
should
be
signing
this
easy
cla.
Unless
so
for
data
dogs,
there
is
some
sort
of
automatic
thing.
If
you
work
for
datadog,
you
automatically
sign
it,
and
then
it
becomes
easier.
So
my
question
to
microsoft
is:
we
can
either
do
a
squash
and
then
it's
not
an
issue,
but
then
we
lose
the
history
or
we
do
some
kind
of
dance
where
everybody
kind
of
presses
a
button
to
send
this
stuff.
H
I would squash it, do it that way. And what I would do is I'd put a README next to it, just to make sure it's clear: hey, we snapshotted this source from exactly this spot in the git repo, on this commit. And then, on the occasion that that history winds up being useful, of course it's still present in the origin repo; someone would just have to go do a little work to say, oh, I want to know what that history is.
F
C
Yeah, just to also make the other folks aware: for OpenTelemetry it's totally okay to incorporate MIT-licensed code, and that is the license of the dotnet runtime. So, regarding incorporating that, we just need to keep the original license there. We can add the OpenTelemetry one later, but it's totally acceptable as MIT-licensed.
F
Thanks
thanks
greg:
what's
your
question,
the
stuff
out
of
the
things
that
are
already
vendored
in,
does
any
of
them
have
mit
license.
F
G
None of them. Greg here; I actually have one question about the DiagnosticSource coming in. You picked it up as of today, with the most recent commit. Are we going to version this along with a version that's released on NuGet, i.e. we just pick a tag that aligns to what customers would have as well? That way we are not getting a potential bug or anything else that would...
F
F
Generally
james,
I
think
you've
been
doing
some
thinking
about
the
same
problem
space
in
the
net
and
then
you
guys
sorry
in
microsoft,
and
then
you
guys
disappeared
for
a
week
or
two
from
microsoft
side.
So
there
is
a
issue
here
describing
this
in
detail.
F
F
Yeah,
and
also
actually
for
other
vendors,
like
maybe
you
should
complete
this
table,
and
I
was
very
too
high
level
with
this.
I
was
sort
of
this.
This
would
inform
us
about.
Actually
it
would
be
very
helpful
for
for
what
I'm
doing
to
tell
which
wrappers,
which
reflection
wrappers.
I
need
to
add
to
the
full
effort,
so
I
took
this
list
from
somewhere
online,
some
microsoft
site.
I
can't
remember
which
one
I
could
I
didn't
validate
every
single
one
of
them.
F
So
essentially,
I
wanted
to
create
a
list
of
libraries
that
exist
today
that
emit
telemetry
using
activity,
slash
diagnostic
source-
and
I
took
this
from
some
some
microsoft
side,
but
lucas
you
and
cg
as
well
pointed
out
that
this
is
too
simplistic,
because
some
of
these
libraries
emit
diagnostic
source
telemetry
rather
than
activity
telemetry.
I
think
only
some
very
recent
ones
might
actually
emit
activity
telemetry.
F
I
think
we
would
benefit
potentially
another
issue.
I
can
get
get
it
started
from
a
table
and
please
tell
me
whether
we
would
benefit
from
this
or
not.
I
think
we
would,
if
we
had
a
list
of
libraries,
all
of
them
that
we
know
about
not
just
microsoft,
saying
like
this
version
emits
a
diagnostic
source
telemetry,
this
version
exists,
emits
activity
based
telemetry,
so
that
we
know
what
we
need
to
test
with
what
and
for
diagnostic
sources.
We
need
to
have
these
adapters,
so
does
it
make
sense?
You
have
such
a
table.
C
I
I
think
it's
it's
it's
very
useful,
but
I
I
don't
expect
much
outside
from
microsoft.
Perhaps
zapping
sites
have
a
bunch
of
stuff
in
libraries
but
because
it
wasn't
auto
activity
was
diagnosed.
Source
is
pretty
new,
so
I
don't
expect
many
more
things
outside
microsoft.
You
know.
F
I
agree,
but
even
within
microsoft,
there's
all
sorts
of
sdk
sdks
related
sdks
that
might
be
maybe
should
be
on
this
list
and
are
not
on
this
list
and
also
some
is
the
case.
F
C
You could open an issue on GitHub and keep the table there directly, so people can suggest and add.
F
I
was,
I
was
thinking
the
reason
I
was
thinking
of
a
google
doc.
Then
we
could
have
an
issue
in
both
open
tele,
like
sdk
repo
and
our
report,
and
same
doc
shared
and
then,
when
okay.
C
F
Yeah, and James, and maybe the rest of us: you can ask someone from the team at Microsoft who has the most tribal knowledge about this. I think it would be good if you guys can help, rather than us researching this for hours and hours. You might actually be able, once I publish the document, to complete 90% of the document in like half an hour.
D
Yeah, cool. What else have I done...
B
I
I
was
gonna
say
with
that
document.
There's
it's
probably
the
case
where
some
of
these
libraries,
perhaps
even
the
azure
specific
apis.
If
they're
going
through
something
like
http
client
behind
the
scenes,
then
they
might
implicitly
have
this
activity
support.
So
I
don't
know
if
that's
like
a
separate
category
or
or
not.
F
I
would
I
will
put
a
note
about
this,
but
I
think
that
in
this
particular
case
I
would
say
if
it,
if
the,
if
a
library
doesn't
add
any
additional
thing,
we
should
just
like
say,
say:
users
uses
http.
A
I
was
going
to
say:
I've
done
some
research
on
this
also,
I
think
about
a
year
ago
or
so
when
we
first
proposed
using
diagnostic
search.
So
I
got
some
additional
information
in
there
as
well
off
the
top
of
my
head.
I
get
entity
framework
core
does
emit
diagnostic
source
events-
it's
not
on
that
list
and
the
legacy
asp.net
ones,
none
of
them
emitted
natively,
but
there
is
a
separate
nougat
package
for
that.
A
F
I'll create a table, and maybe, if you have... if...
A
F
...you can spend like 15 minutes adding to it, that would be good. Anything else, guys? Oh yeah, Zach and Paulo: what is your expectation of when we can... I know that it's in review, but given what you feel from the team, when do you think we can adjust to this folder thing that...
J
...you mentioned. So, from my end, I'm trying to get some more eyes from the Datadog side, just to make sure on our end that we're okay with how everything's shaping up, and then from there moving forward with making sure that it's also minimal for OpenTelemetry.
C
Yeah,
I
I
from
my
side
you
kind
of
waiting.
I
I
I
I
I
work
together
with
zach,
so
I
basically
in
agreement
with
the
proposal,
I'm
gonna
do
a
review
of
that,
but
we
basically
discussed
at
every
step
on
the
proposal
and
after
that's
done
on
the
the
let's
say
upstream,
then
I
will
pull
one
change
before
that
to
kind
of
update
and
then
I
will
pull
the
change
with
the
real
argument
of
the
the
the
project.
F
C
Yeah, I think that's good for me. I have a hard time keeping notes while I'm discussing and trying to pay attention, so afterwards I'll try to pick up all my notes and put them there. But please, guys, take a look at the doc review, and if I missed something, please update there.