From YouTube: 2020-07-14 .NET SIG
B: But I saw that, from other SIG meetings like the spec maintenance meeting, it gets uploaded automatically; at least that's what the meeting notes say. So we need to see if we can just borrow whatever tool they are using to automatically upload the video, because last time I asked Sergey, he said it's a manual process and it's a bit involved.
B: We do have a few of them uploaded to YouTube; those are the ones where we discussed some important things, and we explicitly did the task of uploading them. But yeah, let me ask.
B: It's all recorded by default; it's just not uploaded to YouTube. All the meetings are recorded.
B: Yeah, I think I captured four of them here. One is the default one, the simple one, which is to use processors; this is what the OpenTelemetry spec also says. But the issue is that we already create the activity and then drop it on the floor at the processor level, which means it's not so performant.
B: There is a PR which Michael already wrote for the HTTP client instrumentation. We have an internal thing called a filter function, which is internal, but he is proposing to make it public, so a user can supply any filter function and we can prevent the activity itself from ever being generated, because it uses the DiagnosticSource filtering mechanism. This is the earliest point in the pipeline where we can apply filtering, so this would be the best in terms of performance.
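
[The DiagnosticSource predicate mechanism described here can be sketched roughly as below. This is an illustrative sketch, not the actual PR: the observer types and the URL check are made up, but the `DiagnosticListener.Subscribe` overload with an `isEnabled` predicate is the real API being discussed.]

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

// Receives the Start/Stop events for requests that passed the filter.
class HttpEventObserver : IObserver<KeyValuePair<string, object>>
{
    public void OnNext(KeyValuePair<string, object> evt) { /* process event */ }
    public void OnCompleted() { }
    public void OnError(Exception error) { }
}

class AllListenersObserver : IObserver<DiagnosticListener>
{
    public void OnNext(DiagnosticListener listener)
    {
        if (listener.Name == "HttpHandlerDiagnosticListener")
        {
            // The Func<string, object, object, bool> "isEnabled" overload is the
            // predicate being discussed: returning false suppresses the event
            // before any Activity is ever created, which is why this is the
            // cheapest place to filter.
            listener.Subscribe(new HttpEventObserver(),
                (eventName, arg1, arg2) =>
                {
                    var request = arg1 as System.Net.Http.HttpRequestMessage;
                    // Hypothetical filter: drop calls to a particular host.
                    return request == null ||
                           !request.RequestUri.Host.Contains("internal-telemetry");
                });
        }
    }
    public void OnCompleted() { }
    public void OnError(Exception error) { }
}

// Hook it up once at startup:
// DiagnosticListener.AllListeners.Subscribe(new AllListenersObserver());
```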
B: However, this would be a custom solution, not part of the spec or anything, and it has to be made part of all the instrumentation adapters, because HTTP client is just one of them. We probably need something similar for SQL client and gRPC client as well. One of the use cases is that libraries themselves, such as the Jaeger one, would be creating telemetry which we want to filter. That's one use case, but that can be solved using a different approach.
B: And there is an open issue in the spec repo to support this, but that issue is marked as post-GA, so it won't come into the spec in the GA time frame. So I just want to know everyone's opinion: should we go for the custom solution, where we expose a public thing on our instrumentation? It's not part of the API or SDK; it would just be part of the public API for the HTTP client instrumentation and the other instrumentations which we have, and then, if and when OpenTelemetry officially supports one, we can switch.
C: Just a question, because I was thinking about this from other perspectives. Perhaps I should read the issue; I wasn't aware of it yet, but I was under the impression that, for the...
C: ...ActivitySources, we would like to listen to them by default, because it's kind of similar to what happens with OpenTelemetry in general. But I think that for the legacy ones, the activities that don't come from an ActivitySource, we probably should not be listening to them by default.
B: So if that unknown or unwanted thing is using HTTP client, then we'll just capture it, because we enable HTTP client instrumentation anyway.
B: But your point is that it's just for the legacy thing, and it's enabled because we need the other HTTP client calls. So...
B: We are going to expose a new public API on the instrumentation side; it's not in the API or SDK, so it's more flexible there. But still, once we expose a public API, we would want to retain it after beta as well. So I'm trying to see if it makes sense to not expose this right now, wait for the spec to come, and then officially expose the filtering scenario.
B: The scenario is: you want to filter out activities, or spans, which are auto-collected. For example, you have HTTP client instrumentation, so we collect telemetry whenever someone makes an HTTP client call. Now there are scenarios where you don't want certain calls; let's say, based on the URL, you don't want this particular call.
B: There are different ways of achieving it, but we're trying to find the best option. In terms of performance, the best option would be to filter it at the very root, preventing the activity from ever being created at all. But that requires us to expose this API. It already exists; it's just internal, and we use it internally to suppress the Jaeger and Zipkin HTTP calls.
D: You take the message, so you basically decide on whether or not...
D: If you think about performance, and you try to avoid allocating an object, of course, then you're already very, very performance focused. So one concern is this: when you start inspecting the request message, you need to inspect it in a really lightweight way, and it's not trivial to write such code.
D: So one thing with all these general processors, where you just take a function, wherever you put it, early on or later on in the pipeline: people start using it naturally, and then, unless you really know well what you are doing, and many applications don't go into that much thinking, this is very powerful, but it ends up having more of a performance impact. So when we create such a mechanism, is it worthwhile thinking about a less flexible mechanism, one that is hard to get wrong in terms of performance?
D: Yeah. So, for example, we could take some cases and say we don't allow an arbitrary function, because that's super flexible, super powerful, and super easy in terms of the elegance of the API, but people might get it wrong by writing a complicated function. If, instead, you just ask people for data, so they specify a filter by giving us some data, and we only allow filtering on that data.
D: Maybe you say we only allow filtering on some properties of this request message, and you're just allowed to give us some properties: either "only these properties" or "all properties except for these", something like that.
B: Which means we'll be writing the code to extract that special list of things.
D: We would, for example, say you can only filter on the target URL, if the target URL contains a certain substring, as an example, or only on a certain typed field, whatever. This is way less flexible, but if we have a concrete scenario that would be addressed by this, an advantage could be that, essentially, people are not going to get it wrong in terms of performance.
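
[The data-driven alternative being proposed could look something like this. The options type and method names here are entirely hypothetical, sketched only to show the idea: users hand the instrumentation data instead of a function, and the instrumentation evaluates it with a known, cheap check.]

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical options type: instead of accepting an arbitrary
// Func<HttpRequestMessage, bool>, the instrumentation accepts only data
// (URL substrings), so users cannot write an accidentally expensive filter.
public class HttpInstrumentationFilterOptions
{
    // Requests whose target URL contains any of these substrings are dropped.
    public List<string> ExcludedUrlSubstrings { get; } = new List<string>();
}

public static class FilterEvaluator
{
    // The instrumentation itself owns the evaluation logic.
    public static bool ShouldCollect(string targetUrl,
                                     HttpInstrumentationFilterOptions options)
        => !options.ExcludedUrlSubstrings.Any(s => targetUrl.Contains(s));
}

// Usage sketch:
// var opts = new HttpInstrumentationFilterOptions();
// opts.ExcludedUrlSubstrings.Add("/health");
// FilterEvaluator.ShouldCollect("https://example.com/health", opts); // false
```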
D: I'd be fine either way, but I just wanted to raise that, in the old Application Insights SDK, I remember several times where customers would use the old Application Insights telemetry processors, which were super powerful, but sometimes the customer would write...
B: But yeah, just to compare that: we have a well-defined, spec-supported way to filter things using the activity processor. That's my number one approach, which is easy; it's completely up to the customer to author it. Actually, you're saying it's already there, right? Yeah, it's already there; we don't really need to do anything, because the activity processor is already a well-defined mechanism to enrich or filter activities.
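
[A filtering processor along those lines might look like the sketch below. This is not the meeting's actual code: the class name and the `http.url` check are illustrative, and `BaseProcessor<Activity>` is the OpenTelemetry .NET SDK's later processor base type (at the time of this meeting it was still called `ActivityProcessor`).]

```csharp
using System.Diagnostics;
using OpenTelemetry;

// Sketch of a filtering processor. The Activity is already created by the
// time it reaches the processor (hence the performance concern discussed
// above), but clearing the Recorded flag stops exporters from sending it.
public class UrlFilteringProcessor : BaseProcessor<Activity>
{
    public override void OnEnd(Activity activity)
    {
        var url = activity.GetTagItem("http.url") as string;
        if (url != null && url.Contains("/health"))
        {
            // Mark the activity as not recorded so exporters skip it.
            activity.ActivityTraceFlags &= ~ActivityTraceFlags.Recorded;
        }
    }
}
```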
B: The concern here is that, at this point, the activity is already created, and then you are choosing not to send it to the exporter, so it's not so performant. This filtering is specifically targeting the HTTP client, gRPC client, or ASP.NET Core; all of them work using the legacy DiagnosticSource-based instrumentation, and DiagnosticSource has a built-in way of subscribing to events based on a predicate. You don't blindly subscribe to all events; you subscribe to only those events which meet your criteria.
B: Yes, so that's why Michael was pushing to make this happen, and yeah, there are many other use cases as well. One of them is that it's already possible via DiagnosticSource; I mean, the fewer APIs we introduce, the better, and the API is already in DiagnosticSource. It's just that we are the ones who subscribe to the DiagnosticSource, and at subscription time we can either say "subscribe to everything"...
B: That's what we do currently: subscribe to everything which matches the HttpClient "Out" event name. But there is an alternate API: subscribe to everything which matches the HTTP client events and matches this predicate. This filter function is not on a new class; it would be in the HTTP client instrumentation, part of the instrumentation class. So yeah, if there is no really strong reason to support this, my personal take is...
C: Okay, one observation here is that I think .NET is hitting this earlier than OTel, and OTel is kind of giving, let's say, an answer with the activity processor and the sampler that's kind of not ideal, because basically we have the library instrumented already with Activity.
C: That's why .NET hits this before any others: because they're using the HTTP libraries, and the library doesn't have a plugin or something. So I think your idea of delaying this is actually good, also because I think eventually OpenTelemetry has to come up with something in the spec to handle that scenario.
B: Yeah, okay. So I flagged a comment saying that we need to address this after beta, and then we can actually even consider the proposal which Greg made: instead of a very generic function, where people could potentially go wrong...
B: ...we could make it even more refined, so that we only allow a subset of filtering, based on URL or other well-known things. We need to collect the scenarios which are actually being used and then expose that. But for the short term, there is a workaround using a processor, or even a sampler; the sampler is somewhat powerful because it still saves us some allocation.
B: But let me add notes here, and then come back to it after we do the beta, and then we can talk about making this public. So I got a...
A: This may be somewhat tangential to the solution here for the use cases that you've articulated, but in doing the gRPC instrumentation, one of the things I raised was the underlying span created by the HTTP client instrumentation, kind of posing the question of whether that is ideal, and we had discussed maybe making that configurable.
B: Yeah, I guess we need to find a way for the gRPC instrumentation to put some marker somewhere, I don't know where, so that when the potential child is actually an HTTP client call, we can filter based on whether that marker was put in place by the gRPC instrumentation. Yeah, I think it's probably related, but I need to think through this more. And with the current gRPC instrumentation, we haven't solved that, right?
B: We currently collect both the gRPC span and the child HTTP client span, right? Right, right, yeah. Okay, let's discuss this in the PR. I'll convert this issue with all the ideas which we discussed, and we can continue discussing it there, and come back after we ship beta to change the API, or figure out whatever approach we end up taking.
B: Is there any other comment on this one? Otherwise, let's move on to the next one.
B: Okay, yes, okay. So this is about the meta package. I think this was raised quite some time back, because even in the current state, for example, if someone wants to use it in an ASP.NET Core application, they're currently forced to install at least three packages: this one gives the extension method to enable OpenTelemetry, then we need this one to get dependency monitoring, and this one to get ASP.NET Core incoming request monitoring.
B: So the proposal was to make a meta package, which would just refer to all these three packages, so ASP.NET Core customers just install one package, which internally brings in all the necessary packages. That is one thing, in terms of packaging. And second: right now the code would look... okay, I don't have the code open here, but the code could look substantially simpler. Let me actually show that. Right now the code would be using...
B: This AddOpenTelemetry comes from one package, the request instrumentation comes from another, the dependency instrumentation comes from another, and this comes from another. So we could possibly make a really short helper method which takes all the defaults; we would simply say "enable OpenTelemetry with Zipkin", something like that, and it would internally enable everything. And we can allow customization, turning off or individually customizing each of these things, but that would really shorten the lines of code which one needs to write.
B: In most cases, it's just a one-liner, and you just install one package. I think Eddie made a PR proposing it. If there are any comments on this, we would like to know, so that he can incorporate them. Right now, it's not a beta blocker or anything; it's just a really nice-to-have enhancement.
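
[The one-liner being described could be sketched as an extension method that composes the three packages. Every name here (`AddOpenTelemetryWithZipkin`, the builder methods, the option shape) is illustrative, not the actual proposal in the PR.]

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

// Illustrative sketch of what a meta-package helper could expose: one call
// that enables the SDK, the common instrumentations, and an exporter.
public static class OpenTelemetryMetaExtensions
{
    public static IServiceCollection AddOpenTelemetryWithZipkin(
        this IServiceCollection services, string zipkinEndpoint)
    {
        // Hypothetical composition over the real per-package extension methods:
        // - core SDK registration
        // - ASP.NET Core incoming request instrumentation
        // - HttpClient dependency instrumentation
        // - Zipkin exporter
        return services.AddOpenTelemetry(builder => builder
            .AddAspNetCoreInstrumentation()
            .AddHttpClientInstrumentation()
            .UseZipkinExporter(o => o.Endpoint = new Uri(zipkinEndpoint)));
    }
}
```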
B: I think the PR was posted quite a while back. So, are there any ideas or concerns with this approach?
E: The first question is: who's going to add test coverage for these meta packages? I assume that with the meta package we need some test coverage, like for any new reference, and we need to make sure the versions are well maintained, correct?

B: Yes, that would...
F: Yeah, I think we have two options: one is a meta package, and one uses the three packages and has some code. For example, the ASP.NET Core package has a new extension for IServiceCollection, so that one has some code inside it. But, for example, the meta package that is the worker one, I don't think I added any line of code; it's just an empty project with references to the other packages.
B: Okay, all right. So there is a first-class concept in NuGet called a meta package, right? So we are not technically using that meta package feature.
B: Yes. So I don't think we need to solve this in this meeting, but generally, if there are any issues with this, or other ideas, please do share them here. Once again, this is not really important for beta; it's a nice-to-have thing, so we can get it done afterwards as well.
B: But if anyone has any concerns, please do raise them; otherwise, I'll just ping Mike and Sergey, because they are the ones who need to approve this.
B: Okay. And yeah, Mike is not here, so I need to follow up with him separately. Yeah, so this is about the beta release.
B: So I tagged this Friday as the day when we are going to do the beta release, but I actually hit one roadblock. Okay, here is the milestone: I tagged the beta as July 17, but it's going to slip by three to four days, because the current packages depend on preview 7 of DiagnosticSource, and that's not going to be released until July 21st. So we will not be able to release anything to NuGet.
B: I mean, we can release to NuGet, but customers would still be forced to add the private NuGet feed to get the DiagnosticSource. So I would propose to wait till the 21st or 22nd, so that we can have a proper user experience: they just install our package; they don't need to add any custom package source. So this is just an FYI, unless someone really wants to have a preview or beta this week.
B: This is what I'm going to execute. It's most likely after the 21st, not on the 21st, so I'd assume it actually would be next Tuesday. Yeah, so around the middle of next week is when we'll have the beta. Okay, and yeah, this is an extra item, sort of a notification for everyone to go and review this PR. It's already merged, or about to be merged. It essentially contains all those changes which were made to the .NET Activity to accommodate OpenTelemetry requirements.
B: If there is no parent, we were generating a trace ID which was not going to be the actual one used, but this PR fixes that, so that the sampler gets a trace ID, and if the activity ends up being created, that trace ID is the one which is going to be used. That's one big change, and then the second change is about tags supporting more than string-to-string.
B: We previously supported just string-to-string tags, but the OpenTelemetry spec says we need to support more than that, and this PR also addresses it. These changes will only be part of preview 8, but as soon as we are done with the beta, we'll get the build with this change (not really a private build, but from the separate NuGet source) and start changing our code, because right now we are just putting everything in as a string.
B: If there are any comments... I think Michael and I already reviewed it, and it looks good, but if there is anything, please review this PR. Yeah, this was something... yeah. We had a very active last week; I actually looked at the GitHub status, sorry, the PR status: we had 28 PRs merged in seven days. And some tests became flaky in the middle of that huge number of PRs; we're trying to fix some. It's mostly isolated to one area, which is the batching processor.
B: So, I'll... I think I made one fix here, but it doesn't look like it fixed it. So if you do see any test failing in the CI, don't just close and reopen; that's the easy thing to do, close the PR and reopen, and that will retrigger the CI. But if you do that, please paste the error message from the test as a comment, so we can go back and investigate.
B: We want to make sure the tests are super stable; we don't want to do this close-and-reopen workaround, so this is just a temporary thing until we figure out what is causing the flakiness. If you see a flaky test, either open an issue or just add a comment saying this test is failing, with the exact failure message, so someone else can fix it. Yeah, okay. So the next one is also a work-in-progress, or almost-done, PR, for folks who are not familiar with this approach.
B: This is something which we did in my Application Insights work as well. What we are going to have is two files per project, per target framework, which list the public API which we are shipping. By default, new API goes into the unshipped file, and once we decide to release a stable or beta version, we move it to the shipped file; that becomes the official API.
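
[For reference, the analyzer being described (Microsoft.CodeAnalysis.PublicApiAnalyzers) tracks the surface in plain-text files; entries look roughly like the sketch below. The member shown is a made-up example, not the project's actual API.]

```text
# PublicAPI.Unshipped.txt  (new surface, not yet released)
OpenTelemetry.Trace.TracerProviderBuilder
OpenTelemetry.Trace.TracerProviderBuilder.AddSource(string name) -> OpenTelemetry.Trace.TracerProviderBuilder

# PublicAPI.Shipped.txt  (frozen surface; entries move here at release time)
```

Any public member not listed in one of the two files causes an analyzer error at build time, which is what makes accidental API changes fail the build.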
B: The primary motivation behind this is that once this package is added, if you accidentally or intentionally make any changes to the public API, and you don't add them to this file, the build will fail. So this forces you to be really aware that you are making a change to the public API. It will also make the job of reviewers very easy, because some PRs are just internal changes, and some are actual changes to the public API.
B: So we want to make sure we are not going to break the public API. Right now we are still free to do it, and we will be actively breaking it this week, because we'll be removing some stuff, but once we reach beta, we expect this to be more or less stable.
B: This is something which we've been using for a long time in our other project. I didn't enable it today; even if we approve this PR, we'll only enable it after we ship the beta, because right now it's quite likely that we will change something, and whenever we change things, we want to be explicit that we are changing them intentionally, for example, this metrics thing.
B: Again, this is just a status update. Michael had already integrated, or incorporated, what we call the basis for integration tests. Inside GitHub Actions, we now have the capability to spin up any dependency, basically run any Dockerfile. We are currently using it to spin up a Redis instance, and we now have a proper integration test for Redis.
B: We were adding it for SQL, but this just forms the foundation on which we can build more complex integration tests. This will be post-beta, probably as we get nearer and nearer to GA. But take a look; this probably becomes more important in the contrib repo, where we'll be having more auto-collectors, or instrumentations, for other libraries. At that time we can talk about it more, but this is just a status update.
B: Okay. There is one more issue; I didn't create a GitHub issue describing it, but since it's post-beta, let me defer it for now, so we can move on to other things. And yeah, okay: Eric says they are dedicated to helping OpenTelemetry; that's good! So can you share more details before we discuss auto instrumentation?
G: Yeah, so I'm the product manager for the .NET team at New Relic, and I just wanted to make a kind of quick announcement here. In general, New Relic has been dedicated to helping OpenTelemetry, and we've had involvement in many of the other projects, Java, Go, C++, the specs, and everything, but we're now making a similar contribution in the .NET space.
G: Specifically, I'm excited that we're making official commitments along those lines. In the short term, Alan now has dedicated time and is going to continue getting plugged into the project, and then, going forward, in the coming months, we'll be looking to see if we can dedicate some more resources.
G: So with that in mind, you know, we're looking to support OpenTelemetry as it heads towards GA, and my one question related to this was: is there anywhere to see what remains in order to support GA, and how to best coordinate that work and where we can best contribute?
B: Okay, yeah. I mean, my original focus for this week was just to get the beta done, and for beta we have a tracker ready, which tracks the things which we need before we can call ourselves beta. You can see there are a few checkboxes missing here and there, mostly docs and examples; we have a basic example, but as soon as...
B: ...as soon as the beta is done, which we have now pushed to the middle of next week, I will be creating a similar tracker for GA. I think other repos have already created one, and Eddie may know better, because I think the idea was to use a kanban board for every repo. But I need to come back to it after learning the official decision in the other repos, and then we'll have the tasks sorted here; it would partially be on me to create the actual board with all the items.
B: But that is something which I was planning to tackle only after the beta. So is that fine, for Alan to wait until the beta, help with the beta things initially, and once we get the beta out of the way, we can create a separate item tracking GA readiness and then pick items from there?
B: Yeah. And in the short term, I think Alan is already doing the gRPC server instrumentation; I think the PR is already there. And yeah, if anyone has free cycles, please take items from here. Before you actually start working on something, please share a comment saying that you are going to work on it, because some of the items are already being worked on by someone, and that's probably not visible here.
B: What I faced last week was that three of us tried to work on the same thing at the same time, without realizing it. So, just to avoid that in the future, make sure you create an issue, if one doesn't exist, and mention that you are going to work on it, so we'll know if someone else is actively working on it.
C: Yeah, I also recommend pinging in Gitter, saying "hey, I started to work on this", and yeah.
B: But yeah, so do you already have enough on your plate, or are you still looking for something more for this week? If yes, let's sync after the meeting, so that we can give some time to discussing auto instrumentation. But overall, the idea is: create an issue if one doesn't exist, and have it self-assigned before you make any significant effort, to make sure you're not clashing with anyone else. And yeah, I'll sync with Alan offline to see if he can help.
B: Okay, so Paulo had shared this document earlier. Paulo, do you want to just walk us through it? I read it, and yeah.
B: We don't have any other open topics, so let me just ask: are there any topics about the SDK or API, other than auto instrumentation? If yes, please speak up now; otherwise, we'll... I have...
D: A quick question, just for a pointer: I'll be working on some prototyping using the new Activity APIs in our auto instrumentation tech later this week. It's basically in line with what Paulo is about to talk about, but so far I haven't actually worked with it all that much; I was just following the discussion. So, is there somewhere I should really start learning and doing? Is there, like...
D: Mind pasting it into the Gitter or the chat for...
H: But what about, like, kind of deeper documentation, you know?
B: There is no other documentation for the DiagnosticSource, for the ActivitySource thing, so it would be mostly on the .NET team to write documentation on ActivitySource. But based on the discussions which we had in previous weeks, I have an item to write an initial draft right here in this repo, and later either move it or clone it into the .NET repo directly. So the first action item is on me to get a basic, or...
D: No, no, I understand the concepts, but I want to really dig in deep; I want to really understand this deeply. And the latest package: is it on NuGet, or MyGet, or the...?
B: So I think once you clone this repo and you just run it, it should get you going. The actual version of DiagnosticSource which we depend on is not on NuGet, because the version which we depend on is preview 7 something, so you need to add this source to your package manager to get it right away. But if you are cloning the solution locally and using Visual Studio or any tooling, you should have it running automatically, without doing anything.
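
[Adding an extra package source of that kind is typically done in a NuGet.config; a minimal sketch is below. The preview feed URL shown is a placeholder, not the actual feed mentioned in the meeting.]

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- nuget.org plus the extra feed that carries the preview
         System.Diagnostics.DiagnosticSource builds (placeholder URL). -->
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <add key="dotnet-preview" value="https://example.org/dotnet-preview/index.json" />
  </packageSources>
</configuration>
```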
D: And this is also where I can get... if I just want to prototype, or try changes in our tracer, then this is what I need to reference?
B: Yeah, and this gets pushed daily, so daily builds are published here. OpenTelemetry is not depending on the daily build; we are depending on a particular version which contains the changes which we were after. So this is the particular version which we are depending on, and we will be updating it every week or so, because there are upcoming changes. But my suggestion for getting started: the easiest is to clone the repo and run the samples; you should be good to go.
B: I think this is already a good point, and all these samples are working end-to-end. For example, if a sample is sending data to Jaeger, you need to set up Jaeger locally; that instruction is there, but it's on you to do that. But most of the console ones are self-contained: all you need is just Visual Studio, and you run it.
B: You don't need to do anything else, so this will be the fastest way to get started. But Zipkin, if you set it up, you can change it and actually see things there as well. So yeah, all right. So I think, Paulo, can you... yeah? I didn't hear any comments, and I see we have newcomers here as well. Yeah, so, Paulo, you can take over and discuss this plan. Do you want to share, or can we just use this?
C: No, you can keep sharing; just please scroll when we get there. So, we haven't been talking for some time, and I put together this doc with a kind of very high-level roadmap that, I think, should try to satisfy the parties that showed interest in auto instrumentation, and it also tries to recollect a little bit of the history for anyone that didn't have a chance to follow us, or wasn't involved earlier with the discussions and conversations. So, back in April...
C
I
I
started
this
conversation
about
the
auto
instrumentation
and
basically
following
the
open,
telemetry
recommendation
for
this,
and
the
recommendation
is
to
start
with
data
dog,
apm
and
kind
of
they
have
a
anyone
that
can
read
their
recommendation
from
open
telemetry
in
this
regard,
but
use
that
a
starting
point
and
try
to
kind
of
keeping
the
parties
involved
and
be
productive
to
anyone
interested.
C
And
then
there
was
a
an
alternative
that
sergey
presented.
That
is
something
that
he
worked
at
prior
on
microsoft,
that
is
the
clr
instrumentation,
combined
with
the
intercept
extension
that
is
a
slightly
different
model.
So,
instead
of
instrumenting
calc
sites,
it
instruments
the
target
and
you
put
your
code
in
the
in
the
functions
that
you
create
for
that
interception.
C
But
that
uses
the
clr
instrumentation
engine
and
the
the
code
for
both
parts.
The
native
part
for
the
instrumentation
agent
and
the
intercept.
C
Are
not
public
and
the
discussion
lasts
that
we
had
there.
We
were
asking
sergey
to
kind
of
trying
to
make
this
public,
or
at
least
as
a
sample
of
them.
So
after
that
we
we
had
we.
C: ...I particularly, and I think, of course, the whole SIG, felt that it was more important to focus on ActivitySource, and on whatever changes were required to happen during the time the .NET runtime was getting ready.
C
And
during
this
time
we
discussed
this.
We
changed
a
lot
of
things
and
now,
in
this
work
about
really
moving
the
implementation,
interactive
source
and
during
this
period
triggered
by,
I
think,
greg
and
noel.
C
We
had
various
conversations
about
directions
and
improvements
about
profiler
and
not
only
the
model
that's
used
in
the
in
datadog
apm,
but
also
in
using
the
clr
instrumentation
engine
and
from
these
conversations
came
a
picture
that
there
are
some
targets
that
are
very
valuable
for
us
in
terms
of
improving
performance
for
the
auto
instrumentation
that
exists
from
datadog
and
also
that,
for
microsoft,
is
important
to
have
support
to
the
clr
instrumentation
engine
because
they
already
have
cases
that
use
that.
So
taking
that
into
account.
C: ...in the last conversations that we had, I put together a short guiding vision. It contains only the top, high-level things that basically any good software project should have, but, on the other hand, I think it's good to put them out explicitly, so we try to guide our daily decisions, not only the big ones, as a function of those. You know: high performance, reliability.
C: Also, it must be a really good experience, just as an example, and it should be extensible, to satisfy the needs of the people who collaborate on this project, because perhaps somebody wants to do some instrumentation that doesn't make it to the main core repo, and they should be able to relatively easily build a package with that kind of instrumentation.
C
So,
besides
that
we
have
kind
of
agreed
on
some
kind
of
shared
goals
and
targets,
and
then,
from
the
last
meeting
from
the
last
sig
meeting
that
we
had,
we
had
a
discussion
and
I'm
trying
to
put
kind
of
explicitly
the
things
that
were
mentioned
by
datadog
by
microsoft
and
from
our
site
splunk,
and
also
from
the
open
telemetry
side.
You
know
putting
that
that
together,
I
kind
of
made
a
a
very
high
level
proposal
that
goes
below
and
this
world
map.
C
Divergence: early divergence from Datadog that is gratuitous at this moment, let's say. Because this is open source, it's very common that somebody comes along with something like "oh, I'm rebranding everything." At the beginning that is not something we want to do, because we want to keep the collaboration going.
C
We want to keep that separation until the last moment possible, you know, because it's going to make things harder to go back and forth between the repos. So if we want to change something that should apply to Datadog, we want to be able to cherry-pick from each side or do merges easily, until we have to really publish, and then we make that kind of distinction.
D
Go ahead, Greg. You know, it's public already. So right now the situation is that at Datadog, by policy, everything that runs on our customers' machines is open source.
D
So today the tracer is open source, in a repo that is owned and controlled by Datadog but is completely accessible, and various companies have already cloned it and used it for their purposes, which is fine. So the conversation here is essentially to create yet another clone that will be branded as OpenTelemetry, where, rather than each company controlling things independently, we will have a joint, OpenTelemetry-guided process for making changes.
B
Yeah, correct. So what Paulo was saying was that once we clone it, we shouldn't be making drastic changes; we should stick with the original one until we have enough time to... okay, I think I got it. Yeah, go ahead.
C
This is important because, until we make the thing wholly pluggable, we can then ensure that things are not diverging just for divergence's sake. You know, we need to plan when the things become pluggable, and eventually we get to that good place in which each one can build their own and we can share that code. Then we start to do that kind of thing.
C
It's because, in my experience with open-sourcing projects, it's very common to have this kind of contribution, especially when you fork projects like that: somebody comes with a PR saying, "hey, I'm renaming everything here," and then we will have to say, "hey, not right now."
C
Later, you know. So I want to be sure that that's clear from the beginning. But the important thing is, I think we want to bootstrap the repo, and I would love it if Datadog made the first commit to that repo, you know, brought the code first, to show their big contribution in that sense, and also to show the community that we are trying to keep them involved. You know, yeah.
D
I think I agree. Our next sprint starts next week, so I will try to get to it this week, but realistically speaking, I think the beginning of next sprint is a good time for us to do this. So sometime in the first part of next week would be.
C
We already have the issue. Okay, I can take care of creating the repo; we need to select initial maintainers and approvers, and after that, Greg, anyone in Datadog.
C
Just very briefly: maintainers are the ones that, besides the work of reviewing and guiding the direction of the project, are also the ones that authorize merges and publish releases. There is an explicit definition of this in the OpenTelemetry community; I will send you the links for both maintainers and approvers.
D
I would agree: add me as a maintainer to this, and it would be great if someone from Microsoft would join, but I think it would be on Michael and Alex to say.
B
That's something I can ask, whether they are interested in joining, but it won't be me for sure, because I'll be just focusing on the other part of things. So.
B
I could help with reviews and things. So you can start with, like, Paulo and Greg. And I think, Greg, you first need to start with joining the community, as in the link which Paulo will send about community membership, which lists the basic prerequisites.
B
So, I mean, today we only have representation from New Relic other than Microsoft. So is there anyone from New Relic who wants to be part of this, like Alan? One of you mentioned you want to be there, right?
A
I mean, of course, Microsoft and so on are great for providing the guidance, but those items about ReJIT support and targeting methods instead of call sites actually align more with the approach of our agent, and I'd be absolutely happy to, you know, come to the table and share our approaches as we get this thing bootstrapped and get to the point where we're talking about evolving it and so on.
G
Yeah, I mean, in general I'd say we'd love to have that commitment. I think it's just a question, at least at first, of how, you know, Alan spends his time, and whether that's something he can be dedicated to along both lines or not.
D
I don't know, I don't have much experience with OpenTelemetry project administration specifically, but being a maintainer is more like: if you need to be one, then you can become one in your own time, I guess.
B
When you're starting, it's probably not applicable; you just start with whoever is interested, and then after that, if you want to add more people, it's mostly based on whatever the guideline from OpenTelemetry is, but it's mostly at the maintainers' discretion. So, yeah. So.
B
Yes, I mean, it's well defined, so it should be easy to read. So the action item is for Paulo to get the repo created (we already have an issue raised, but just get that repo created) and for Greg to actually commit the code, right? That's pretty much the action item we have from this for now, right? So, sounds good.
C
Yeah, and I don't want to go over the list here, but after that, just to make clear: we start to have specific collaborations, and specifically we start with the tracer, making sure that it's really pluggable, and we are going to plug in, of course, the OpenTelemetry one. From there, I think the collaboration really starts for us; perhaps we start to hold separate meetings and focus the discussion on that.
C
And I'm saying that Splunk is, and that will be me, putting in the resources to guide that discussion and pull everyone together into a nice spec or design work, and implement what we agree upon.
B
Would it make sense to have a separate meeting, or do you think it's okay to just club it with the SDK one? Or is it time to split into a separate meeting?
B
For the next couple of weeks we'll still maintain this as is, and then we can figure out if we want to split it up. Okay.
D
Once it becomes technical, it will start taking a lot of time, but yeah, for now it's just organization. So, Paulo, thank you very much for bootstrapping this, this is great. I'll review this by end of tomorrow, but at a high level this looks great, thank you. And yeah, then we can get cranking. I think the first technical thing that I'll be looking at is Activity.
D
I would like to prioritize this over extensibility, and the reason for it is that if, in the process of investigating this, we have any feedback for Microsoft about tweaking the new Activity APIs, then this is a time-critical matter: we can do it now, whereas a little bit later it will be too late.
D
So, because of that, I realize that extensibility is something of a prerequisite for us to start contributing more independently, but it's just because of the time-criticality of shipping the .NET 5 APIs.
B
On that: if there is anything which requires changes to the public API, we are probably already late, because the last preview is Preview 8 and code complete is in the next week or two. So it's very unlikely that we'll have time to effect any API changes.
B
But if there are internal implementation details, like performance or bug fixes, then yes, we still have time. But from my understanding, mid-July is when... we mentioned this a couple of times earlier. Like, yes.
D
I understand, we can only get to it as soon as we can get to it. But still, even though what you say makes complete sense, if we find out something that is important, it's better earlier than later, whereas the extensibility stuff, however important it may be, early or late, is less critical than the Activity stuff.
D
So what I will be looking at initially, and I'll share the results... because completely moving over to activities is a lot of work, and we certainly want to do this together. But right now I don't have the capacity to complete all of this.
D
So what I'll do, essentially, is pick one or two of our integrations and prototype moving them to ActivitySource-based activities, so that we have a prototype and know essentially what's involved and how this architecture might look, potentially in a separate branch, to make sure that we can share it and use it as a basis. Because if we have a clear strategy, then we just need to repeat it for more integrations, and we already have an architecture.
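[Editor's note: a minimal sketch of the ActivitySource pattern being discussed, assuming a hypothetical source name ("MyCompany.HttpClient") and operation name. It illustrates the point raised earlier in the meeting: without a registered listener, StartActivity returns null and no Activity is ever created, unlike the older approach of always creating one and dropping it at the processor.]

```csharp
using System;
using System.Diagnostics;

public static class Demo
{
    // Hypothetical source name; a real instrumentation library would pick its own.
    private static readonly ActivitySource Source = new ActivitySource("MyCompany.HttpClient");

    public static string Run()
    {
        // No listener registered yet: StartActivity returns null, so the
        // activity is never allocated. This is the performance win over
        // creating the activity and dropping it later in the pipeline.
        var before = Source.StartActivity("HTTP GET");
        if (before != null) throw new InvalidOperationException("expected null without a listener");

        // An SDK such as OpenTelemetry opts in by registering an ActivityListener
        // whose sampler decides whether activities get created.
        ActivitySource.AddListener(new ActivityListener
        {
            ShouldListenTo = src => src.Name == "MyCompany.HttpClient",
            Sample = (ref ActivityCreationOptions<ActivityContext> options) =>
                ActivitySamplingResult.AllDataAndRecorded
        });

        // Now the same call produces a real Activity.
        using var activity = Source.StartActivity("HTTP GET");
        activity?.SetTag("http.method", "GET");
        return activity?.DisplayName ?? "(not sampled)";
    }

    public static void Main() => Console.WriteLine(Run());
}
```

Porting an integration to this pattern mostly means replacing direct `new Activity(...)` / DiagnosticSource plumbing with an `ActivitySource` per integration, and letting the listener's sampling callback do the filtering up front.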
G
Actually, sorry, can I clarify something real quick? I just wanted to know, in terms of the auto-instrumentation side of things: is that something that we're including as part of the GA release?
C
The goal from our side, from Splunk, is to have that together in the OpenTelemetry GA, you know. Okay.
D
Okay. From my perspective, I am not very worried about the GA versus not-GA label. My perspective is more that, when we actually ship the auto-instrumentation thing as Datadog, at least for the very short-term future we will be shipping our thing; we will just make sure that there is synchronization of the repositories, right? So that means our stuff is already GA: the Datadog tracer is GA for Datadog, right?
D
So that means that the improvements we're making in the community are more focused on turning this into the standard, you know: adding activities, adding all these extensibility points that are specifically critical for the auto-instrumentation community. And at what point do we just switch the preview label to the GA label?
C
Mind, for us it's because we are investing in having OpenTelemetry, and we want to see that, when OTel does its GA announcement, the offering really covers the biggest platforms; .NET is one of those platforms. So.
C
One last thing: anyone, please feel free to comment on the doc; you can add your thoughts, and we can edit and put it in a form that brings up all the concerns and goals that perhaps are not there, that I missed. You know.