From YouTube: 2021-04-07 meeting
G
All right. Since he has to leave in about 30 minutes, I think Pablo should do the demo first, if nobody has anything against that. Then we can have a discussion about it and move on to the taking-care-of-business part of the meeting afterwards.

H
Okay, cool. So I just wanted to give a quick demo of a new receiver in collector-contrib, and it's called dotnet diagnostics.

H
I'm a total newbie about, you know, the Microsoft ecosystem and stuff, and this implementation I'm going to show you is actually, right now, Mac and Linux only, but it should be relatively trivial to add Windows support. So before I get into the actual receiver, I wanted to show you this tool called dotnet-counters. Are you guys familiar with it?

H
Yes... no... yes!
So apparently it's a very simple command line tool that you can use: you give it a process id and a group of counter names, and it will give you metrics, more or less in real time, for whatever process you're pointing it at. I've got a little hello world running here in the background, and I already know what the process id is.
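A session like the one being described might look roughly like this. This is illustrative only: the PID is made up, and the flags are from dotnet-counters' documented interface, so treat them as an assumption rather than a record of the demo.

```shell
# Stream the built-in System.Runtime counter group for a (hypothetical) PID 12345,
# refreshing on the tool's default one-second interval:
dotnet-counters monitor --process-id 12345 System.Runtime

# The same tool pointed at a custom EventSource instead of the built-in group:
dotnet-counters monitor --process-id 12345 MyEventSource
```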
H
So I'm going to run this tool against it and we'll see what we get. I gave it the System.Runtime group of counters, and it's giving me all these counter values, all these metrics, on an interval of, I think, one second, which is the default.

H
So that's System.Runtime. Then there's a diagnostics API that you can use to create your own counters, so I did that as well inside of this hello world thing: every half second it updates a counter. It just increments the value, that's all it does, and the event source is called MyEventSource. So I can give that to the tool.

H
Instead of System.Runtime, I say MyEventSource and I get the value of that counter. It happens to be incrementing twice a second, so that's why we get this fractional value here on a one-second interval. So the idea was just to take this very simple command line utility and turn it into a receiver that generates OTel metrics and sends them down the pipeline.

H
So here's basically the collector config. It's basically, you know, analogous to the command line arguments that we just saw: same process id, we give it an explicit collection interval and a counter name, and then we're just going to log the metrics as they come out the other side. Oh, and a quick note: in this case we're hard-coding a process id in this collector config.
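A config along the lines described might look like the sketch below. The receiver name and key names here are illustrative (the component landed in collector-contrib as a dotnet diagnostics receiver, but the exact keys may differ), so treat this as a shape, not a copy-paste config.

```yaml
receivers:
  dotnet_diagnostics:          # illustrative component name
    pid: 12345                 # hard-coded for the demo only
    counters: [MyEventSource]  # counter group(s), analogous to the CLI argument
    collection_interval: 1s    # explicit, instead of the tool's default

exporters:
  logging:                     # just log the metrics as they exit the pipeline
    loglevel: debug

service:
  pipelines:
    metrics:
      receivers: [dotnet_diagnostics]
      exporters: [logging]
```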
H
But the idea is that folks actually running this collector would be using the receiver creator and an observer to automatically find the process ids, based on, you know, whatever rules they specify, and spin up the receivers. So this is more just kind of a demo; we don't actually expect customers in the field to have to know the process ids ahead of time. So I'm going to go ahead and run this and...
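The receiver-creator setup being described could be sketched like this. Again hedged: the observer choice and the rule syntax are assumptions based on how receiver_creator is generally configured, not the demo's actual config.

```yaml
extensions:
  host_observer:                    # discovers processes/endpoints on the local host

receivers:
  receiver_creator:
    watch_observers: [host_observer]
    receivers:
      dotnet_diagnostics:           # illustrative component name
        # Spin up one receiver instance per discovered process matching the rule;
        # this rule expression is only indicative of the shape such rules take.
        rule: type == "hostport" && process_name matches "dotnet"
        config:
          pid: "`process_id`"       # filled in from the observed endpoint
```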
H
Yeah, absolutely. I mean, it's definitely possible to configure rules to only find dotnet processes. Yeah, but again, we do need to do a little bit of work to make that happen.
B
For finding dotnet processes: whenever we create an event pipe server, we drop a semaphore file somewhere, and I can't remember exactly where. On Windows it's in AppData, and on Linux it's somewhere else. So it should be fairly trivial: basically, you look in the magic directory, and each file there basically includes its pid and declares, "I'm a .NET process and you can connect to me with EventPipe."
H
Yeah, that's exactly right. There's a naming convention, at least for Linux and Mac, for the Unix domain socket file, so you can just ls with a pattern, you know, and find all the eligible processes relatively trivially. Yeah, that's right. And then on Windows, like I said, there's still a little bit of work that needs to be done to get at the named pipe.
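The discovery trick just mentioned, listing the files that match the diagnostic-socket naming convention and pulling the pid out of each name, can be sketched in a few lines. The glob pattern and the exact file-name shape (`dotnet-diagnostic-<pid>-...`) are assumptions based on the convention discussed above, not a spec reference.

```python
import glob
import os
import re
import tempfile

# Assumed name shape: "dotnet-diagnostic-<pid>-<suffix>", e.g.
# "dotnet-diagnostic-4242-12345-socket" in the temp directory.
_SOCKET_RE = re.compile(r"^dotnet-diagnostic-(\d+)-")

def find_dotnet_pids(tmpdir=None):
    """Return pids of processes that dropped a diagnostic IPC socket file."""
    tmpdir = tmpdir or tempfile.gettempdir()
    pids = []
    for path in glob.glob(os.path.join(tmpdir, "dotnet-diagnostic-*")):
        match = _SOCKET_RE.match(os.path.basename(path))
        if match:
            pids.append(int(match.group(1)))
    return sorted(pids)
```

An observer component could run something like this periodically and hand each discovered pid to the receiver creator.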
H
There we go. So what this does when it starts up is it finds, like was just mentioned, the Unix domain socket file based on this process id. It opens it up and sends a request, using a custom binary protocol, into that socket. It says: I'm interested in EventSource counters with this name, and I want this collection interval. Then it listens for a response, and basically from there on out it's passive.

H
It just listens for the data streaming from that socket. It's a custom binary protocol; I kind of had to reverse engineer it from that tool I just showed you. That's all open source, so that was great, and there's also good documentation for it as well.

H
So that's pretty much it. We're getting basically the same metrics that we just saw in the CLI tool, but now we're in an OTel pipeline, and folks can do whatever they want with these metrics. And then I can show you these: so that's one event source, and this was the custom event source that we just saw.

H
Now we'll just look at the built-in set of counters called System.Runtime, and there's a lot more there. You can obviously give it multiple here; this is an array.
G
One thing that I think is worth mentioning, Pablo, because folks may not be familiar with the collector, is that the collector can receive different metric, log, and trace formats, convert them to an internal format, and it has exporters for many different other formats. So once you can consume these, you can send them to whatever metric exporter.

H
Yeah, I mean, I'm hoping this will be a good foundation for not just metrics but possibly traces, and maybe events converted to logs. I did put a considerable amount of effort into this code base to make it as easy to understand and as accessible as possible, so I'm hoping this will be a good start.
I
Just to make sure, do you mind if I repeat the end-to-end scenario here, just so I really understand it? So basically, you added this modification to the normal OpenTelemetry Collector. So that's not a special collector; it's the OpenTelemetry Collector?

H
There's a core collector and there's also the contrib collector. This one happens to be...

H
Yes, well, I mean, the contrib one. The contrib one is a superset of core, so yeah.

I
Also exporters... and this is collecting, collection.

G
So the contrib has kind of not only open source and vendor components, but also stuff that is not maintained by the core; it has a bunch of things there, you know.

G
Azure stuff that, as far as I know, was not implemented by Microsoft, but, for instance, there is an exporter to webbing sites, you know.

I
So it has a pluggable architecture for not just exporters but also importers, yeah, and processors. Got it. And so, essentially, once this is shipped, the official contrib collector will be able to get these dotnet metrics and export them to whichever exporter is currently active.

H
Right, thank you. And that's pretty much it for the demo, yeah. Are there any other questions?
B
So I had a question. The config file looks like it's really focused on dotnet counters, but the work you've done means you can connect to the event pipe and you can turn on providers, and, you know, counters are kind of a small, I would even say tiny, subset of the information you could collect. I'm just curious if you've investigated or thought about the other things; there's all sorts of things you can collect, everything from, like, jitted methods...

G
I think it'd be awesome, yeah. Right, you are touching on something; that's why I kind of wanted to bring Pablo here, because I think metrics is just a start, but I'm really looking forward to perhaps using this for profiling, especially now that there is some work on the OpenTelemetry spec to collect profile information. And then, if we have a defined format, we can build on top of this. Correct.

I
Can you send me some pointers? I would love to join the discussion.
G
I will get the pointers after the meeting. There wasn't... I usually expect to kind of spec profile information, you know. So that's one question that I have, Dave: can we do profiling for performance measuring using the event pipe? Because my understanding is that tools like PerfView use the event pipe. So if that's true, I think that's the path for OpenTelemetry to collect profile information for .NET applications, yeah.
B
So the answer is: you can almost do it entirely with event pipe. The only thing you can't collect is native call stacks. So, just to give a little bit of background, and hopefully not take up too much of this meeting: all of our performance profiling tools internal to Microsoft have been built on ETW for a long time, so all of the information the runtime emits is over ETW events, and when we went to event pipe, there's complete parity.

B
Whatever the runtime emits, you can get in event pipe or you can get in ETW. But the one thing: we built all of our performance profiling tools on this ETW concept where you can enable stack traces, and every millisecond, or every 10 milliseconds or something, it will fire an event that contains the stack of every thread in the process. That's native and managed threads; from an OS level, here's the call stack on each thread. And when we went to event pipe, you know, since we're the runtime and we're not built into the OS, we don't have that same ability. So the runtime will emit managed stack traces, because that's under our control, but we actually don't, right now, emit native stack traces, and the way that we work around it in PerfView and VS and other things...
I
So perhaps I can add some interesting information to this. Okay, so, like what David says: I can confirm all of this on the consumer side. You guys are producing this information, and yes, it can be consumed; all this information is very nicely usable.

I
However, it all breaks down in production for continuous profiling, because you need to be admin. This particular ETW information on Windows is only consumable if the process that is listening to it essentially has root access on the box.

I
In production, essentially, if you have full access to the box you can log in and collect it; but if you want to have some sort of tooling that continuously listens to this... I really would love to know how to get this to work in a common production environment, but I don't, and we tried; I tried hard.

I
The performance of it is essentially prohibitive for allowing this to run all of the time, continuously, and the reason is that it suspends the entire runtime. So, essentially, for these stacks to be collected...

I
...it asks all the threads to reach a safe point, like for a GC, and then it walks the stacks and sends the information. And it's really beautiful and easy to use if you want to turn on profiling for something, you know, collect some information, then turn it off and go analyze it. But for continuous profiling it's a very significant overhead, yeah.
B
Yeah, that actually reminds me; that's the thing I was going to say: it is expensive, and it doesn't collect native stacks. And we do have investigations going into how we can collect native stacks and be less expensive. We don't have the work committed yet, but it is on our team's plate; we know it's a pain point and we want to fix it for the stack sampling.

I
Let's take it offline. I have some thoughts about this, but I don't want to hijack the meeting, so maybe we can have a separate meeting to bounce some ideas.
G
Regarding other things: one that you mentioned was about listening to activities, because this is something we expect more and more, stuff on the runtime being instrumented with activities.

J
Sorry, is that... Pablo, are you asking me? I was asking Dave. Yeah, okay.

B
I'm sorry, I thought you were not asking me, so I wasn't entirely listening to the question. I'm sorry. So it was about activities and...

G
Yeah, so I think you mentioned that we could use the event pipe to listen to Activity and ActivitySource, right?

G
Because I expect more and more code to have that, so this is probably something that we would like to do in the collector, you know.
K
Can I answer that question? Hey folks, I'm sorry. So we actually have something called the DiagnosticSourceEventSource. It sort of uses, you know, the additional parameters you pass when you enable a provider.

K
We have this DSL, a simple DSL that allows you to, you know, access properties on DiagnosticSource events, so you can use that to subscribe to activities. I can point you to a couple of examples of how to do this, at least with the managed library, and it should be straightforward to translate.
G
Yeah, I think it'd be great to have those links, because I think this is definitely something that we want to look at down the road, you know. I think I see a lot of potential here. There are the limitations and things that Greg and Dave mentioned, but what I hear is that a lot of people that use .NET use it to have this richness of information on Windows.

G
They are moving to different VMs right now, Linux basically, and they want this richness of information about the events, the metrics, all of that. And I think this is a way that can almost become a standard, supported by OpenTelemetry itself; especially if we start to have the profiling format supported as well, it becomes a very generic and easy way for people to get this.

G
Yeah, please send us the links. We are eager to take a look and perhaps, soon, start doing some work on that.
C
Maybe I have a question, just to clarify and to make sure that we're on the same page. It means that if some API, like HTTP requests, etc., supports the Activity API, then to have the instrumentation we will not even need the profiler? We will just need the collector running, which will fetch all this information for us?

I
The big challenge in that space is this: essentially, if you're interested in telemetry from libraries that are instrumented, then it can work beautifully, and it sort of works already with the .NET OpenTelemetry SDK; with this functionality you can collect the same kind of information out of proc.

I
It will be very comparable. The challenge is that, in particular in the context of this group, if you have a mix of libraries that are instrumented and libraries that are not instrumented, and therefore use auto-instrumentation, then the big challenge is around bringing this together. Because you now have a hierarchy of activities, or spans, or whatever you call them (which is essentially the same thing), and the current expectation, at least for the Activity library, is that you can not only emit this telemetry but also inspect the current activity and potentially even change its properties. It's very powerful functionality, but it's also somewhat limiting, because it's now read and write, not just write-and-emit.

I
Libraries need to understand, and the .NET engine needs to understand, the activities emitted by the auto-instrumentation, meaning they need to use the same data exchange type. And all these problems that we're having with versioning, using the right versions of DiagnosticSource, or not being able to use it at all: it's all related to that.

I
If you just write (and you're right, you can collect all this information very easily), then the auto-instrumented information sort of lives in its separate universe, and it's very hard to reconstruct the hierarchy. And then you also don't have the expectation that... so, there.
I
Yes, and also the parents. For example, if you have a library that is auto-instrumented, and it internally uses a library that is explicitly instrumented, then the explicit instrumentation doesn't see the auto-instrumentation. So the parent-child relationship is broken. Yep. So that's challenging.

I
And if we could somehow solve it, then of course it would be great. And one of the big challenges for solving it is that even if we solved it today, in the real world we have to support all the older versions of the runtime, where it's not yet solved. So, you know, a good solution could somehow be back-portable to, like, every runtime version that OTel wants to support for .NET.

I
And the thing is, again: Microsoft needs to not only offer a solution for those frameworks, but also, as a group here, we need to have an answer: you know, how do you make sure that whatever Microsoft shipped for it is actually installed in the customer's environment?
B
Yeah, so this is all based on event pipe, which was introduced in .NET Core 2.1, but that had some fundamental limitations, and so 3.0 and 3.1 are really the first time you can use it in production. So: .NET Core 3.0 and later. And each of the individual pieces is something that's under active development, so some of the stuff you talk about might be .NET 5.

G
Profiling... but this also gets me to a related question. So, anyway, this seems (of course it can change, there are no guarantees on this) to be the future of, let's say, diagnostics on .NET. It's the present, but as far as we can say, it's the thing that's in the future, right?
B
Yeah, that's what most of our internal tooling runs on: the events, instead of running ICorProfiler-based profilers at this point. And, you know, the way that I see it in my head is: if you need to consume information from the process and you don't need to interact with the process at all, then we're trying to support all those scenarios with eventing. The niche for ICorProfiler is when you need to actually interact with the process and, you know, modify things, rewrite IL, you know.

G
Yeah, so I think, for instance, Greg mentioned some scenarios that don't work for auto-instrumentation, and the focus of this group is auto-instrumentation; but I think there are scenarios where people may want to not do the auto-instrumentation and still collect this information, and I think the collector comes in very handy in this case, also because it's an ecosystem with the collector. So let's say you are running on Kubernetes: the collector already has a bunch of things to collect information about Kubernetes, you know, so it's one more thing.

G
That's been covered with the collector, which makes the case very compelling to me that, for OpenTelemetry at least, we should really be investing in this diagnostics receiver. I think one...
C
We are not changing the application itself; we are doing something more in the background, and someone can still add a real profiler there, like, you know, a memory profiler, a performance profiler, whatever. So we are not closing a door, which we do right now, in my opinion, when we are using our current auto-instrumentation approach.

I
Well, Microsoft has a library called CLRIE [the CLR Instrumentation Engine] that promises to attach multiple profilers. The challenge around this library is that it seems to be very good (I have, like, no real problems with this library), but it's not an official product; it's an open source project, and there is a team that is committed to it, yeah.
B
Well, so, all of those tools that you see, the ones that you specifically mentioned (memory profilers, performance profilers): most of those are not written on ICorProfiler at this point, at least the Microsoft-provided ones. So you would be able to use those tools, currently the Microsoft-provided ones. I think JetBrains uses an ICorProfiler for some of their stuff, but, like, all the Microsoft-provided ones... well, actually.

I
And yeah, but is ICorProfiler, like, in the future, going to see investments, or likely?

B
When we added tiered compilation, we added a bunch of APIs to support tiered compilation in the profiler, and so, you know, we're going to keep doing fixes; we're doing bug fixes, we're doing features, to make sure that that parity is there. But we're probably not going to do, like, you know, a whole rewrite, or, like, brand-new APIs, or that sort of thing.
G
And that reminds me about the scenario that we keep hearing: people trying to do kind of ahead-of-time compilation of the full assembly, instead of jitting, because ICorProfiler, for me, is immediately hooked into jitting. So for this ahead-of-time compilation scenario, what would be the alternative? I think then we have to use the event pipe to get this kind of information, right?
B
So there's a couple of different ahead-of-time compilation technologies, and it's one of those things where the nuances are there. So the one that you might view as ahead-of-time: we have this thing we call single-file internally, where, you know, we merge all the IL from all the different libraries into one big IL file, and then we crossgen it, and then we package it all together.

B
But it's still running, fundamentally, as CoreCLR under the covers; it's just that we've crossgen'd everything so that there's as little jitting as there possibly could be. In that scenario, profiling's still an option, because we're just running the same CoreCLR runtime that we do in any other version of .NET, or the new, you know, .NET 5 and later. But we also have another technology that's called Native AOT right now, and there, there's no jitting; there's no runtime.

B
So right now it's not really... we're not shipping it officially for anything. I don't think we're even at the point where we say this is for one scenario or another; we're still in an experiment, talking to people and seeing, you know, who it works best for. That's under active development, and I would be surprised if we ended up supporting ICorProfiler there.

B
My hunch is we would probably support event pipe and not ICorProfiler there, but it's way too early to say, you know, for sure what's going to happen there, or even if it's going to take off as a technology. Right now there's a couple of internal partners that are using it, and that's pretty much it.
G
Yeah, as I said, I see a lot of potential in this, and this is an area that I would like to invest in in the future, you know. And having the opportunity of working with you, Dave, and Noah, and folks from Microsoft that are implementing this, and if you become consumers of that, I think there is a chance that we can evolve the capabilities in the collector in a way that benefits the .NET ecosystem pretty well, you know.

G
And I think we can move to our taking-care-of-business part of the meeting.

G
You are on mute; I think you are trying to speak. Oh.
B
I muted myself, yeah. So I was... yeah, I was just saying that it's really exciting to see that open source projects are buying into, you know, the things that we're creating. You know, from the beginning, with EventPipe and the diagnostics channels, the whole goal was to write specs and have it be open source, so things like this could happen, and it's really, really interesting to see. You know, it's just awesome to see that the community is buying into it.

H
For the IPC protocol, so I'm happy to see people using it in the wild.

G
Yeah, and we hope to start interacting more with you as we get our hands dirty there and start to write more stuff for that.

G
All right, that would be kind of the moment for Zach to roll the credits. So, we switch topics.
I
Hey David, I have some follow-up questions for what you said earlier, but I want to take it offline. When is a good time? Perhaps after this meeting, or some other time? What would be a convenient time for you? Yeah.

L
After this meeting is fine, or just any time, really. Yeah, okay, cool; I'll send you an invite.
G
Okay, so for the agenda today: I don't know who put that question about the GitHub Actions.

G
And there was a bunch of stuff about status that I would like to check, if there is anybody with any PR that needs some extra discussion, or anything that feels...

G
Yeah, okay, so we will have some follow-up issues and tests that we need to clean up, but otherwise it looks good.

N
Yeah, I think just the W3C propagator first and then this one; otherwise I don't think there are other details.

G
Okay, and then we have this one about the synchronization.
G
The discussion has gone back and forth, but it seems the latest is that there is one opinion, from Kevin, that this is not necessary, and Robert still thinks it's needed.

D
Yeah, yeah. No, I appreciate that. I kind of brought in Kevin because I trust him with a lot of low-level analysis and debugging, and so, yeah, I think his assessment was that this is almost a no-op.

D
We can merge this; I kind of trusted his analysis. I think it mitigates a small amount of things. I think the true fix, which I think would absolutely work and which we agree on, is if we do a barrier, a full memory barrier. I don't know what the performance implication of that is, but that would probably work.

D
So in this case, right here, we could be reading a stale value, and some traces are dropped because we didn't know yet that it was initialized. So...
I
Yeah, it sounds like a benign race condition. So if we initialize it at some point during runtime (before that it's not initialized, and then at some point it's initialized), what if we just see it a few milliseconds later? Then it's as if it was initialized a little later. So who cares?

I
Okay, sorry, I actually shouldn't be saying that, because I haven't looked at the scenario.

C
It works only in very strict... like, it works in very specific scenarios which, from what I know and my experience, this is exactly the case where volatile helps; in 99% of cases volatile doesn't help, because of the way it works.

I
Yes, I've made the experience that volatile can be dangerous, because it promises more than it delivers. And then, if I am not a hundred percent sure that volatile does what it should do, but I am really sure that synchronization is essential...

I
...I use Interlocked, with the risk that I actually overdo things. But, oh, I just think maybe it's fine to have a benign race: like, something does not propagate, but who cares, if that's not a requirement; then something is written, it's a benign race condition, so whatever. But I don't know the specifics; this is just general multi-threaded guidance, and I don't know the specific scenario. And there is also Volatile.Read, which lets you read like volatile but without having a volatile variable.
D
Yeah, so, like, my assessment of the probability of this being an issue is really, really small, and so we can add it; or, if we don't, I'm completely fine with the way it is right now, just because of the way that we initialize it on the main thread, and then later, on any async one, it'll just read whatever value it reads, but nothing else is going to unset it.

D
So I'm happy either way, to merge it or to just close. Robert, do you have any strong opinions?

G
Okay, all right, so let's merge. I know by experience that these discussions really can take a really, really long time, and they are tricky, yeah.

G
Yeah, yeah, so let's merge, and if people are curious and want to keep discussing it, we can keep trying to learn more and more, and digging. But...
C
In
in
the
in
the
description
appear
description,
I
have
put
a
hyperlink
which
describes
how
volatile
works,
and
it
has
exactly
as
an
example,
a
pattern
which
is
like
here
as
a
good
example
of
using
volatile
as
marking
that
something
has
completed
so
making
something
completed
and
initialize
is
almost
like
the
same
stuff
right.
So,
if
you
are
more
interested,
you
can
read
also
yeah,
that's
all
from
my
site.
G
Okay, then we have the GitHub Actions stuff. I don't know who raised the GitHub Actions topic.

G
OpenTelemetry has been moving a lot of their stuff to GitHub Actions. So if you check the repos around OpenTelemetry, they use GitHub Actions for almost everything, you know. So I think the idea is to kind of make that the standard and follow that standard for us, too, you know.

C
There is a GitHub issue which I have created, because basically Nikita showed me an issue in the specification, or somewhere, where it was discussed and agreed that GitHub Actions is the CI approach that we should go to. And one of the reasons is that in the organization we have some special additional amount of resources, which we do not have in Azure Pipelines, and even Microsoft couldn't get additional resources for the OpenTelemetry organization.
G
Yeah, and I know that the OTel SDK moved to GitHub Actions some time ago.

G
I think, right now, we miss something to store our artifacts; you know, our build is not doing that.

G
The first thing that comes to my mind is: let's take a baby step here, add a GitHub Action to produce and store the artifacts and run unit tests. But I don't know who has the bandwidth and commitment to do that, yeah.
I
I was just going to say: as an aspiration it's awesome, and I'm in full support. But given all of the things that we want to do, where is really the priority of it? How much closer would it bring us towards the release, compared to all of the other things we want to do? And given that, do we really want to be spending time on it? I'm not saying we don't; I just want to make sure that we actually think about it in this way.

D
I would also add that, if we don't consider this urgent (as in, needed in the next, like, four weeks), we at Datadog are also making strides to improve our build process, because we actually need to.

D
We need to make our build process flexible for our own internal builds, and so we are looking at improving that as well. And one side effect, or the result, of that is that we're hoping to have one consistent way to build everything, and such a migration to GitHub Actions would probably benefit from happening after that, once we've kind of nailed that down. Just because, as you've probably experienced with, like, the Linux build: you know, we have the Dockerfiles, but then, if you're doing a regular build, I can use the dotnet SDK, I can use MSBuild. So it's a little haphazard right now.
C
I feel similar, especially since even running something on a local machine, in the same way as on the pipelines, is tricky. And I have created a GitHub issue for it, and I also linked it, because I think Andrew Lock is working on it. Yeah, and I was also talking with him, but I don't know what the estimated time of arrival is, because he said that the further he goes, the more stuff he finds problematic, like a minefield.

I
The question is, you know, in this group: do we need this now? If we do, then let's not wait for Datadog. But if we don't (and whether or not is a question for us together) — I kind of think we don't need it right now, but if we do, then of course I'm wrong. Anyway, if we don't, then let's spend time on other things that we do need right now.

I
Mainly about GitHub Actions, because of unifying the pipeline: it is very doubtful that we would do it here faster than Datadog is already doing it. We would just create divergence. Yes.
G
I think we agree that we should not invest in this right now (please correct me if I'm wrong, guys), because we think that having a unified pipeline, one that can be run locally and in whichever cloud we pick for CI, is more important, and somebody's already working on it. So we will wait for the outcomes of that, and then we will revisit this. Is my understanding correct, guys?

C
Yeah, that's the question: is it possible to prioritize it harder on the Datadog side? I can even help, if possible.

I
Yes, I think the best way of doing this is to sync up with Andrew and get exactly what his priority on this is right now. And if you are happy with that priority, then you can plan directly with him, without us, like, stepping on your toes, how to go about it and how to help. And if you're not happy with the priority he's putting into it, then you can ping me, either before the next meeting directly, or during our next meeting here, and then...
C
I think it will be good if you, especially the other maintainers, take a look. Maybe it would also be good to, I don't know, try to prioritize stuff, or at least, even for the sake of transparency, to know what is coming. And maybe it can also be valuable for Datadog, if you are developing something, to take a look, etc., and maybe there are some concerns about things we should not do, or something is, like, you know, explained in that way.

I
I think it's a brilliant idea, and I think we should do this as a group, because this is something that's best talked about together. We can do this either during the next meeting or have, again, a special one for it.

C
I agree with a separate meeting; however, maybe it would be good to start doing it asynchronously and reading it before the meeting. So, if anything is not clear beforehand, it will be better to prepare stuff before the meeting, so we do not lose too much time during the SIG meeting, so we have everything prepared, the details are clear for everyone, and we can just focus on prioritization during the meeting.

G
And just one thing regarding the dotnet diagnostics receiver: I think, because we are focusing on .NET and instrumentation...

G
I think it will be a very natural place, because this is just one of the receivers that exists on the collector, if we want to kind of do stuff, to also bring that topic to be part of our umbrella of code and action that we do regarding instrumentation in general.