From YouTube: 2020-11-04 meeting
C
I'm glad I could join. I had some connectivity issues in Indiana during the first part of the day.
C
When I first moved to the U.S., I lived in Seattle and then later in Kirkland. I didn't have a choice of provider, and it was really horrible, and since then, whenever I do have a choice, I choose by default whatever is not Comcast. If there is only Comcast and some other choice, I don't even do any research; you just choose not-Comcast, and so far it has always worked better.
C
Although now that I have Comcast with my roommate, it's still better than Comcast used to be a few years ago.
G
There are some blooper reels on YouTube, if you haven't seen those, with a lot of the improv that got cut out of the show.
C
By the way, David, thank you for your response to the email. I've actually done a lot of research in the meantime.
C
Actually, I've done little else. I've actually answered some questions you didn't know the answers to before, so if you're curious, I can take ten minutes at the end of the meeting.
C
So don't worry about writing more for now, because what I found out may be interesting for you.
G
Yeah, is it about ETW generally?
C
I kind of gave up on ETW, and I started looking at using the profiling APIs in different ways.
E
Even for us, yeah. By the way, I've been digging through all the meeting recordings that we have. We had a few missing, and Eddie helped track those down. I'm going to try to publish at least some of them, because I think there are very interesting discussions there, even ones not directly related to the work in the project, but they are pretty useful and informative.
C
For the specifically profiling-related ones, I summarized what we talked about and shared the docs. I think everything is there except for the last conversation, but the last conversation didn't really have any outcomes, mainly questions. All the other ones I have summarized and shared.
E
I think Eric has some discussion. I actually had some days off since last week, and what I'm planning to do right now is to bring in the change that I have with the vendor code.
E
Now that we've caught up with the reorg, I want to bring that to Greg's branch and also Craig's branch, I mean the one that he has been prototyping, and also, if they're similar, I'll bring that to master and start to catch up with Datadog, because my feeling is that if we wait too much, if we don't do that frequently, it's going to get pretty hard to keep track.
C
Because I was working on this other stuff: we merged the latest from Datadog into master last week, right?
C
Okay, so that means we merged not everything, but almost everything, right?
E
I think so; Zach probably knows better. It's about three weeks old, right?
E
Yeah, and my feeling is that perhaps we should be trying to do integrations every week, otherwise they get a bit too big. Actually, I think the best way to do this, because I can't do a merge due to the history thing, is that I will have to cherry-pick. If I arrange to do the cherry-picking every week, then I think it's manageable.
C
Okay, so two things. One is: that means you'd be blocked on me merging from master into the feature branch. I can get this done by the end of tomorrow so that it's easier for you, because I was editing this, your library and everything, so I'll get that for you. You don't need to worry about it, and then you can merge the vendoring into something that is already aligned with the directory structure.
C
That will save you some time. I'm not sure I can do it today, but I'll get it done by the end of tomorrow, and then maybe we can look at git together and come up with some way to do this merging semi-automatically in the future, so that you don't need to cherry-pick regularly or something.
C
How did you do the directory thing?
E
The other one I had to squash because of the merges; there was a big list that didn't pass the CLA check, the signature thing. Then afterwards I just did a cherry-pick, because it was one change, and I just cherry-picked that.
C
Yeah, I have no updates. I was not working on any of this since our last conversation, so tomorrow I'll do the merge, then we can talk about git stuff and making progress on features. I don't expect anything this week, because I'm busy with all the other things that I'm working on, but hopefully next week.
C
However, next week, after we do the vendoring, what I expect should be done, because I don't think it's a lot of work, is the library loading logic updated so that it takes advantage of the vendoring, including some trivial testing. It probably won't be enough for all cases, but just to make sure that it has passed through some sanity testing.
C
It
doesn't
fail
on
the
first
load
or
something
for
like
I
did
it
before,
and
I
did
my
initial
prototype
for
full
framework
and
for
for
core
and
then
after
that,
the
next
steps
would
be
to
write
reflection,
wrappers
around
everything,
and
I
already
wrote
a
couple
but
like
we
chatted
before.
E
Yeah, I think Eric wants to discuss the alpha milestone. I think we should take a quick stab at that.
K
Yeah, exactly, that was me that stuck that on there. I did create a milestone there in the repo for us, and I did actually call it an alpha, and I called it version 0.3, because I was thinking that maybe it should match up with the version of the OTel specs that's tagged as 0.3, but I don't know if that's important. That's the last version that didn't have anything related to metrics in it.
E
So, one thing that I don't remember, since it's been a couple of months since I looked through the specs: I had the impression that they have the specs in a way that is, okay, version X, but then you have some features there that are part of the spec but are kind of optional for a release. Let's say you have 0.3, then 0.4 with metrics, but you can do a 0.4 with just tracing.
C
What does it actually mean for us to be compliant with the spec in this particular case? Because the spec doesn't really say much about auto-collection.
E
I think it means the semantic conventions, especially for tracing. You are right, it's not so much about what's available, because we are doing auto-instrumentation, but it's about what we generate. Let's say for HTTP exceptions and this kind of thing, there is a bunch of conventions about how to express that.
B
Yeah, the other factor that might come into play is, I don't know if there's ever going to be a desire to expose some sort of OpenTelemetry API from this project, in which case that might also influence the versioning scheme.
E
Yeah, one thing that also may affect that: for a lot of things, I don't know what level they got to and in which version they expect it.
E
But I remember seeing a lot of conventions about how to specify configuration stuff, for instance via environment variables and that kind of thing. That also is something that eventually gets indexed back, and we need to catch up on it. I don't know in which version that lands, but that's part of the spec.
C
So, two things there. For the API, we need to be careful, because we don't want to repeat the same process that we're doing with DiagnosticSource for another library.
C
I think there we might be covered just by the fact that if somebody is emitting telemetry using the API, then we automatically understand it through Activities, and we're sort of done; we don't need to expose anything there. In terms of configuration, yes, that makes sense. I don't care whether we target any specific version or not.
C
The only thing that I do care about is, again, existing vendors. Like for us at Datadog, we may or may not want to ask customers to configure things differently, because there is another compatibility concern for every vendor, including us: if you have other languages that are configured a certain way, and those languages, for whichever reason, don't want to switch to the OpenTelemetry standards, then what do you do?
C
Do you want to be compatible with OpenTelemetry, or with all the other languages from the same vendor? In order to not get blocked on a tough choice, I suggest that we do this through an extensibility mechanism, just like with all the other things where this was the case: we have some sort of configuration provider that the vendor can plug in, and depending on which one they choose, the behavior can differ.
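The provider idea sketched in this turn could look roughly like the following. This is a hypothetical illustration, not the project's real API: all class names, the provider chain, and the `DD_`/`OTEL_` variable prefixes are invented to show the shape of "a configuration provider that the vendor can plug in".

```python
# Hypothetical sketch of the configuration-provider idea discussed above:
# the tracer asks an ordered chain of providers for each setting, so a
# vendor can plug its own provider (e.g. reading legacy DD_* variables)
# ahead of the OpenTelemetry-style OTEL_* defaults.
import os
from typing import Optional


class EnvConfigProvider:
    """Reads settings from environment variables with a given prefix."""

    def __init__(self, prefix: str):
        self.prefix = prefix

    def get(self, key: str) -> Optional[str]:
        return os.environ.get(self.prefix + key.upper())


class ConfigChain:
    """Asks each provider in order; the first non-None answer wins."""

    def __init__(self, providers):
        self.providers = providers

    def get(self, key: str, default: Optional[str] = None) -> Optional[str]:
        for provider in self.providers:
            value = provider.get(key)
            if value is not None:
                return value
        return default


# A vendor distribution would register its provider first, so existing
# customer configuration keeps working without switching conventions.
chain = ConfigChain([EnvConfigProvider("DD_"), EnvConfigProvider("OTEL_")])
```

With that ordering, a vendor variable silently overrides the OpenTelemetry one, which is exactly the "don't get blocked on a tough choice" behavior described.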
E
Yeah, this is a good idea, because in a lot of situations, if we end up with a lot of distributions, that becomes really relevant.
K
Okay, so I will dig into the specs there and see if I can find out more about what you're talking about, Paulo. But stepping back for a second: are we okay calling this an alpha, or do we want to call it a beta?
E
I'm going to give my take here. I would like to call it an alpha and start to have issues based on the spec that we could track, even if we are not at the point of really starting on them right now. At least it gives a more concrete target for us, as a group of people, to try to reach. I really want to see some of that.
C
I sort of agree. When we describe this, we should probably have some sort of little description on GitHub, just for the community to understand where we are, because we know all this, but for people who look at it who are not part of this group, I would put one or two sentences to clarify what alpha or beta means, or what guides us in deciding what it is. Part of what I would like to communicate is that this is mainly about the degree of compliance with the OpenTelemetry standards, because right now, again, things do work.
C
We are sort of happy with the stability and performance and all these things, because we as vendors are using this thing all the time. So, as a public-relations statement, or just as education of the community, we should call out that in terms of tech readiness or tech maturity this is actually at release quality, but in terms of OpenTelemetry compliance we're in an alpha stage, and that's why it's an alpha. What do you guys think about it?
B
I think that's fair. At the same time, some of the things that are currently being worked on, like the wrappers for DiagnosticSource, I wouldn't necessarily say are at release quality or correct.
C
But they're also not part of it: if I were to build the product using the current build and just ship it, then that code is actually not exercised. And we also don't need to go into too much detail there.
C
I just want us to add, wherever we have a description, just a brief sentence so that people understand what the main work of the group is going into. If somebody would like to learn what the deal is with the .NET OpenTelemetry tracer, they know what the general status is: we have a thing that works well, but it's not compliant, and we're working on making it compliant, or something like that. We can modify this description however you guys see appropriate.
B
Yeah, so, Greg, I put something out in the readme this week that could be used as a starting point, but it doesn't necessarily talk about alpha or how it's being used today.
I
Thank you; I just have a final thought. I agree with all the conversations there about the alpha status and so on, but back to the versioning conversation, I just want to share a conversation I've been having with other people at New Relic, specifically in the context of the OpenTelemetry SDK and our OpenTelemetry exporter: how should these be versioned, and should there be some sort of alignment between our exporter's version and the version of the SDK that it targets? We haven't necessarily come to a conclusion on that front, but something that I shared there, and I'll put it in the chat, was how Microsoft has its versioning strategy around the SDK versus the runtime: they've embraced semantic versioning for the SDK. I say all this to say that if we find ourselves on this project in a position where it makes sense to draw some sort of meaningful link between the OpenTelemetry SDK and this project, whether that's because we pull it in as a first-class dependency, or because we do the thing where we have a git submodule, whatever that looks like, maybe we take a versioning strategy like this.
C
Yeah, I think that makes sense, but we can decide that when we actually get aligned.
C
Datadog, and they can explain this in more detail, was almost following semantic versioning rather than following it exactly, and the reason is that we have a Windows installer. When we have a new version where a number is bumped but it's a pre-release version, there's a mismatch. They can explain this better, but basically it's because Windows versioning is not built for semantic versioning, and we want to keep the correspondence between the Windows installer version and the actual tracer version.
D
Okay, I can explain for a second. Ideally, we would be able to do pre-releases like 1.1.0-prerelease, and then update it when we want to ship, and it's 1.1.0. But MSI versions only take the major, minor, and patch numbers into consideration for upgrades, so we can't do that same thing as easily. So we always just bump the patch, even between a pre-release and the actual official release. So yeah, we've taken some liberties ourselves trying to do versioning.
C
So you get 1.2.0, then you get a 1.2.1 prerelease, but you never get a 1.2.1 final; you just get 1.2.2 after that. Yeah.
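The constraint described here can be sketched in a few lines. This is only an illustration of the collision, under the assumption stated in the discussion that an MSI upgrade decision compares just the numeric major.minor.patch triple; the version numbers are examples, not the project's real release history.

```python
# Illustrative sketch of the versioning constraint above: an MSI upgrade
# only compares major.minor.patch, so a prerelease tag is invisible to the
# installer. To keep installer and tracer versions in correspondence, each
# prerelease "burns" a patch number.
def msi_version(semver: str) -> tuple:
    """Strip any prerelease tag; keep only the numeric triple MSI sees."""
    core = semver.split("-", 1)[0]
    return tuple(int(part) for part in core.split("."))


# 1.2.1-prerelease and a hypothetical 1.2.1 final collide for the
# installer, so the final release has to move on to 1.2.2 instead.
assert msi_version("1.2.1-prerelease") == msi_version("1.2.1")
```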
E
Interesting. Which reminds me that I think both Greg and Chris made a point some meetings ago that the installer, perhaps for the OTel distribution or a vendor distribution, could be simplified to some kind of copy plus an environment variable. But I don't know; I actually don't know if, on my side, people use MSIs. We do produce them, but I'm not aware of customers using the MSI directly.
C
I do think, though, that a brief blog post that describes it exactly would be good. If we get a release out, even if we don't do an MSI, then we should have a blog post: one for a Windows machine, one for a Linux machine, one for a container, and whatever else we want to cover.
E
Wanna
cover
yeah
yeah,
one
one,
but
this
I
think
applies
more
to
the
dot
net
sdk
because
it's
not
out
instrumentation
in
my
mind,
I
think
both
for
azure
lamp,
the
this
kind
of
thing
you
could
have
wrappers
from
open
telemetry.
But
this
is
a
separate
discussion
you
know
but
like,
for
instance,
if
you
have
aws
lambda,
it's
very
common
to
have
kind
of
wrappers
to
do
the
tracing,
metrics
and
stuff
so
but
repress
around
what?
E
No,
basically,
you
just
wrap
around
the
functions
that
are
called
so
in
aws.
That's
pretty
straightforward!
You
know
you
have
an
entry
point
and
you
specify
in
your
configuration.
E
So
basically
you
give
a
usual
to
somebody
just
say:
okay,
my
entry
point
is
this:
I
put
here
this
wrapper
and
then
call
my
original
code.
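The wrapper-entry-point pattern just described could be sketched like this. It assumes a Lambda-style runtime that invokes whatever handler function the configuration names; the function names, span fields, and `SPANS` list are all invented for illustration, and a real wrapper would export spans rather than collect them in a list.

```python
# Minimal sketch of the wrapper-entry-point pattern: configuration points
# the runtime at `traced_handler` instead of the original handler, and the
# wrapper records telemetry around the call to the original code.
import time

SPANS = []  # captured spans; a real wrapper would export these


def original_handler(event, context):
    """The customer's original code, untouched."""
    return {"status": 200, "echo": event}


def traced_handler(event, context):
    """Wrapper entry point: record a span around the original handler."""
    start = time.time()
    try:
        return original_handler(event, context)
    finally:
        SPANS.append({
            "name": "original_handler",
            "duration_s": time.time() - start,
        })
```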
C
Like a setup with a wrapped entry point, yeah.
C
Actually, talking about wrappers, I was thinking about an architectural question: what you guys think should be supported, and what is okay to make a constraint. I was thinking about this in the context of Datadog needing to send traces as a whole, rather than spans one by one. So interrupt me if you think it completely doesn't apply.
C
But
I
was
thinking
about
the
following
right
now:
the
logic
that
that
my
prototype
using
should
decide
when
the
trace
is
complete,
the
local,
the
local
chase
and
the
local
is
like
the
non-re-entrant
part
of
the
entire
distributed
trace,
because
it
could
be
wrenching.
That
would
be
a
different
local
choice.
So
what
I
do
is
essentially
I
decide
that
the
local
route
is
the
one
that
doesn't
have
a
parent.
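The local-root rule just described can be sketched as follows. This is an illustration only, not the prototype's real code: span shapes are invented dicts, and the rule treats a span as a local root when its parent id does not belong to any local span, so a parent from another service (the case raised later in the discussion) also starts a local trace.

```python
# Sketch of the local-root rule: within one process, a local root is a span
# whose parent is not itself a local span (no parent at all, or a remote
# parent from another service). Spans are then grouped under their root.
def group_local_traces(spans):
    """spans: list of dicts with 'id' and 'parent_id' (None for no parent)."""
    local_ids = {s["id"] for s in spans}
    roots = [s for s in spans if s["parent_id"] not in local_ids]
    traces = {r["id"]: [r] for r in roots}

    def root_of(span):
        # Walk up parents until we leave the set of local spans.
        while span["parent_id"] in local_ids:
            span = next(s for s in spans if s["id"] == span["parent_id"])
        return span["id"]

    for s in spans:
        if s["id"] not in traces:
            traces[root_of(s)].append(s)
    return traces
```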
C
When a request comes in, our instrumentation creates the parent, and everything is fine. For now we don't support workers where you have a background loop, but once we do, we can do it in the same way. However, I actually remember, even myself, writing ad hoc code before all this existed, when we were just doing ad hoc Activity-based tracing, where we would create a root activity that would just be the loop.
C
The root loop of such a worker. Essentially, that activity would never finish, but it would still have a name and whatnot. So if somebody did this, for whichever reason, then that strategy of identifying local traces would kind of break down.
C
We can either say it's not a problem, this is just not what you should do, it's not supported, full stop; or we could invent some sort of special tag on a span that says "I am a root activity", which I prefer not to do, because it means we have to look it up for every span, which costs important cycles.
B
A quick question related to the approach that you talked about. You said that you look for spans, or activities, that don't have a parent, yeah?
B
Yeah, but what happens in the case where you really are dealing with a root activity, but context was passed in because it's part of a distributed system, so that parent is from a separate service? In that case?
C
So for now, basically, I assume that this never-ending root activity is something that people shouldn't do, and if they do, then this system will break down.
E
But then it's a different problem, right? I never see this kind of process-lifetime span that is really the parent for everything, right? Okay.
B
Yeah, I've seen cases like that, but it's mostly been unintentional cases where, when using AsyncLocal to flow context, a timer or some other thing has captured that activity, and it could keep it alive indefinitely.
C
Okay, then I think I won't worry about it for now, because right now even our thing uses the same logic for the spans that we have, and we haven't observed any problems. But we also don't have auto-instrumentation for the background loop, so for the service scenario it seems to not create a problem. Okay, no worries; thanks for the feedback.
E
All right. So I think Eric said that, in regards to the spec, he is going to look to clarify that thing about the requirements and the kind of mandatory stuff for our version.
E
For my part, I don't have anything else, if somebody else has something.
K
The only other thing I wanted to touch on before we switch off that topic was in terms of issues. Do we think we can start associating issues with this milestone yet at this point? Obviously we need to come up with a versioning scheme for it and an appropriate description, but are we at a place where we can start associating some of the issues that are out there, or are they just more kind of general placeholder issues?
C
I will create an issue for W3C context propagation, and, if you guys don't mind, I don't want to duplicate content, because I had already discussed the strategy for this with Noah on the .NET runtime issues. I'll just cross-link that, and then if people suggest a different approach, we can always do that, but I'll just create an issue saying we should do it, with a deep link to how I suggest we do it.
B
A question that I have: when we're talking about an alpha, are we talking about the alpha incorporating proving out whether or not the performance of the activity wrapper is good enough, or would that come before an alpha?
E
Of course, we'll do micro-benchmarks before that as needed, but I think we should try to go for the alpha, and on top of that do the real, overall benchmark to validate the approach.
C
I think that makes sense. The only caveat to add is that, essentially, as long as we haven't done more or less satisfactory benchmarks, we can't be irreversibly committed to this technology, right?
E
Yeah, but I think some of the initial red flags could come from micro-benchmarks.
E
And what I mean is that when we have the real profiler work, then we have to do, let's say, an integrated benchmark to measure and see if our initial findings are holding up.
E
Okay, so I think I'll let you guys, Dave and Greg, discuss the profiling, and I will drop off to do other stuff.
C
Okay, cool. So thanks, Dave, thanks for your response. I looked at ETW, and essentially ETW looks to me like it is what I want, but I can't have it because of permissions. I think that's a conversation more for elsewhere, and I wrote this other email.
C
Maybe
I
I
think
no
again
just
share
his
opinion,
where
I
said
like
really
it's
about
it's
much
for
the
azure
team
rather
than
net,
because
windows
has
this
technology
and
it's
great,
but
I
don't
understand
how
it
can
be
used
well
for
the
like
modern
day
cloud
scenarios,
I
I
know
like
azure
app
service,
allows
you
to
somehow
collect
a
trace
and
then
download
it
and
then
use
perfume
to
look
at
it,
but
sort
of
the
way
how
you
want
to
do
more
than
cloud
monitoring
is
just
you
can't
just
do
it
that
way,
and
so
yeah
every
team
needs
to
somehow
solve
it
or
not
by
exposing
it.
C
Maybe somehow you could say: hey, I am a process, and I am allowed to collect ETW kernel events if they are about my own process, or something like that. If that were possible, then we could build modern cloud experiences on top of ETW, but without it, I just don't see how that will happen.
G
Yeah, so from our perspective, we're just depending less and less on ETW, and that's what our solution has been. That's why we introduced EventPipe: it's cross-plat, but there are also no permissions issues. We've been building tools that don't rely on ETW, so that's what we've been doing, basically removing our dependency on ETW.
H
So that makes sense, but then...
C
Let's
talk
about.net,
so
I'm
kind
of
ravage
the
conclusion
to
at
the
conclusion
that
we
want
to
have
a
continuous
profile
for
the
net
java
has
it
in
in
datadog,
and
it's
also
open
source,
I'm
not
sure
how
it
relates
to
open
telemetry.
C
But it's completely open source, just like everything else, and I think it can and should be done for .NET, because at the end of the day I want .NET to be cooler than Java, right? And then, when we do it, how this will relate to OpenTelemetry will be a product question, but it will also be completely open source.
C
Partially. I mean, again, it's open source, it's online, and the people who built it, I talk to them every day. I didn't go into too much detail, but here's what I found out. About .NET specifically, I kind of stepped back, because we talked about CPU profiling, and I asked them about Java: why they do CPU profiling rather than wall clock. I understand that now, so I can explain. Basically, they named some scenarios where it makes sense.
C
If
you
are,
if
your
scenario
is
to
save
cost
on
a
cpu
bound
application,
then
you
want
cpu
profiling
and
if
you
actually
look
at
at
perfect
tutorials,
I
went
and
looked
at
perfume
videos,
and
it
was
actually
quite
educational
because
I
never
did
this.
I
I
I
read
that
talks
before
when
I
was
just
using
it
for
my
investigations,
but
I
never
use
the
videos
and
vance
is
actually
quite
good
at
like
just
he
is
eloquent
and
explaining
this
right.
C
It's worthwhile watching. He also starts everything with CPU investigations and only in certain cases switches to wall-clock investigations. Yes, he comes from a more traditional background; his examples are more the type of applications that were more relevant a few years ago.
C
These days, the typical cloud application is very much network-bound rather than CPU-bound, so his prototypical case is less frequent today than it was when he made the videos. But still, I buy the use case: people, you have a fleet, and you want to save money on your cores.
C
You have a distinction between sleeping on a monitor versus either ready-to-run or running. If you don't have the distinction between ready-to-run and running, but you have a lot of threads and really only a few of them are running, then you don't know how much time you are losing to which of the two problems: I have too many threads, versus I'm doing too much computation. Yes, you can indirectly look at it just by looking at the number of threads.
C
But
essentially,
then
you're
in
a
situation
where
you
cannot
investigate
one
problem
before
you
solve
the
other,
so
you
cannot.
You
already
know
you
have
the
the
too
many
threats
problem
and
you
will
be
now
in
a
spot
where
you
cannot
solve
even
begin
investigating
your
computation
problem
before
you
solved
your
human
insurance
program
and
with
the
cpu
profile
you
can
be
addressing
both
at
the
same
time,.
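The running versus ready-to-run distinction being discussed could be sketched by how samples are aggregated. This is only an illustration of the bookkeeping, with invented state names: a profiler that records the thread state with each sample can report "too many threads" (a large runnable share) separately from "too much computation" (a large running share), which is exactly what a plain wall-clock view cannot separate.

```python
# Sketch: aggregate per-tick thread-state samples so that runnable time
# (threads waiting for a core) is reported separately from running time
# (actual computation) and idle time (sleeping on monitors / IO).
from collections import Counter

RUNNING, READY, SLEEPING = "running", "ready", "sleeping"


def summarize(samples):
    """samples: list of per-thread state strings, one per sampling tick."""
    counts = Counter(samples)
    total = len(samples) or 1
    return {
        "cpu_share": counts[RUNNING] / total,      # too much computation?
        "runnable_share": counts[READY] / total,   # too many threads?
        "idle_share": counts[SLEEPING] / total,    # blocked on monitors/IO
    }
```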
C
So those are the scenarios where CPU profiling is useful, but then I kind of stopped there, and I said: you know what, let's focus on building something that is performant enough for .NET, and we just do whatever is easier first, CPU or wall clock, and then we take it from there. So let's focus on functionality and performance first.
C
On how to use the APIs: last time we had a conversation where you said that, on Linux, the API where you suspend only one thread won't work.
C
And that's what I continued to investigate. Here's what I found out. First, I looked into using a shadow stack, and I even prototyped something, but when I shared it with our Java profiling people, they just declared me crazy. They said: yes, if you have a lot of time, you can finish your prototype and measure, but they had very strong feedback saying that it will probably not be fast enough.
C
It should be purely a sampling profiler. So I stopped that and started looking at sampling. On Windows, at least in the first iteration, we can do it simply by essentially doing what you guys do at New Relic, but rather than suspending all threads, suspending threads in some sort of selective way, initially just round-robin, one by one, taking a stack snapshot and then processing it asynchronously.
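The round-robin idea just described can be approximated in a few lines. This only mimics the shape of the design: Python's `sys._current_frames()` stands in for the profiler API's thread suspension plus stack walk, and the queue stands in for the asynchronous processing stage; a real .NET implementation would suspend the target thread and walk its stack natively.

```python
# Rough sketch of round-robin sampling: on each tick, snapshot the stack
# of just one thread (cycling through the thread list) and queue it for
# asynchronous processing later, instead of suspending all threads.
import sys
import traceback
from collections import deque


class RoundRobinSampler:
    def __init__(self):
        self.cursor = 0
        self.queue = deque()  # snapshots, to be processed asynchronously

    def tick(self):
        frames = sys._current_frames()      # thread id -> topmost frame
        thread_ids = sorted(frames)
        tid = thread_ids[self.cursor % len(thread_ids)]
        self.cursor += 1
        stack = traceback.extract_stack(frames[tid])
        self.queue.append((tid, [f.name for f in stack]))
```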
C
So this will be my first piece of work. Sorry, yes, that was suspending a thread. Oh yeah, the list of threads. My question is: the list of threads that we have in .NET, the native one, is it stable?
C
Does
it
change
the
order
if
I
kind
of
keep
scrolling
through
it,
and
then
I
like,
if
say,
if
I
see
if
it's
constant,
of
course,
if
it's
like,
if
the
number
of
students
changes,
then
it's
something
else
but
say
it
doesn't
change
between
a
few
invocations
of
this
thing.
If
I
essentially
do
my
tick
and
at
this
particular
millisecond,
I
look
at
these
10
threads
and
then
I
remember
the
offset
and
then
my
next
millisecond.
I
look
at
my
next
10
threads.
G
So you're talking about the CoreCLR profiler API to get all threads, right? There's EnumThreads or something like that, yeah. That hands you back an enumerator, and so you might be able to do it, and the threads might be in order, but I'd have to check. I don't think we guarantee that they are, but it just might happen to be the case.
C
Good, okay, thank you, this is good. Then next is Linux. Here's how the Java folks are doing it on Linux: you can register an interrupt handler for a thread.
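The mechanism being described can be sketched in Python, with a caveat: CPython only delivers signals to the main thread, so this is an approximation of the per-thread signal trick the Java profilers use (typically a profiling signal whose handler runs on the interrupted thread and captures its stack there), not a faithful reproduction.

```python
# Sketch of signal-based sampling: register a handler for a profiling
# signal; when the signal interrupts the thread, the handler runs on that
# thread and captures the stack from the interrupted frame.
import signal
import traceback

SAMPLES = []


def on_prof(signum, frame):
    """Runs on the interrupted thread; `frame` is where it was executing."""
    SAMPLES.append([f.name for f in traceback.extract_stack(frame)])


signal.signal(signal.SIGPROF, on_prof)
# A real profiler would now arm an interval timer to fire periodically,
# e.g.: signal.setitimer(signal.ITIMER_PROF, 0.01, 0.01)
```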
C
And I can now do a stack walk. My first question is: the thread is now suspended, and I could do the stack walk either from a different thread or from the same thread, that is, from the code actually executing the handler. I have the choice.
My question is, for the runtime API: if I know that the thread that I want to walk is suspended, can I ask the runtime to do the stack walk?
G
Not
as
it's
currently
written
so
you
so
there
is
just.
There
is
just
code
that
basically
checks
to
see
if
you're
on
the
same
thread
and
and
like
there's
a
there's,
a
condition
that
says
if
you're
on
a
non-windows
platform
and
you're,
not
on
the
same
thread,
just
return.
You
know
e,
no,
not
impul
or
something
like
that,
and
so
you
would
have
to
call
it
from
the
same
thread,
although
I
think
you
might
be
able
to
in
in
your
interrupt
handler.
G
So then, if you're running on the same thread, it should work, but there are a handful of issues. First off, it's pretty easy to deadlock yourself. If you're just at a random point of execution, the CLR might be doing something; if the CLR is in the middle of a stack walk itself, then it might be holding certain locks, and then, when you call the stack walk, it would be pretty easy to get into a situation where you deadlock yourself.
G
So that's one concern. In the past, that's also been a concern on Windows: once you start suspending arbitrary threads, you have no idea what locks those threads are holding.
C
That's pretty well described in the docs, all the situations that we need to take care of, okay, yeah. Because on Windows I'm doing it from a different thread, by inspecting the suspended thread's memory.
G
Yeah, even then it's pretty easy to run into deadlock scenarios, because if the runtime is in the middle of suspension, or of stack-walking itself, you'll deadlock yourself. So on Windows, what we've told people in the past is that you can create a canary thread.
G
You have a thread that actually does the stack snapshot, and then you wait a little bit to see if it's making progress; if it's not making progress, you can kill that thread and spawn a new canary thread. But that won't work on Linux, because you can't call it from a different thread.
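The canary/watchdog scheme just described can be sketched as a heartbeat check. This only models the detection half (Python threads cannot be killed, and the names here are invented): the sampling thread reports a heartbeat before each snapshot, and a watchdog treats a stalled heartbeat as a presumed deadlock, at which point the real scheme would tear the canary down and respawn it.

```python
# Sketch of the canary-thread watchdog: the sampler beats before each
# snapshot; the watchdog decides the sampler is stuck when the heartbeat
# stops advancing for longer than the timeout.
import threading
import time


class Canary:
    def __init__(self, timeout_s=0.5):
        self.timeout_s = timeout_s
        self.last_beat = time.monotonic()
        self.lock = threading.Lock()

    def beat(self):
        """Called by the sampling (canary) thread before each snapshot."""
        with self.lock:
            self.last_beat = time.monotonic()

    def is_stalled(self):
        """Called by the watchdog to decide whether to kill and respawn."""
        with self.lock:
            return time.monotonic() - self.last_beat > self.timeout_s
```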
G
You might be able to work around it: instead of creating a canary thread, you call it from your own thread, but spawn a thread that checks whether you're making progress, that sort of thing. But it would be complicated, and it's not something that anybody has done before, that I know of, so you'd kind of be blazing a new trail.
C
I'm not aware of a continuous profiler for .NET, so while I would prefer to reuse known art as much as possible, at the end of the day...
G
That's one. One is just that you generally don't know what the runtime is doing, so you generally don't know what locks we're holding, and it's pretty easy to run into a deadlock. So you have to have some way to detect a deadlock and back out if you run into one. But the second thing is: if you just take an arbitrary stack on Linux, there's going to be... you know, if the runtime calls out to something.
G
So if you're in either runtime native code, like the C++ that we write, or you're in a P/Invoke and you're out in some native code somewhere, all that code is going to be native code, so you're going to have to unwind it with libunwind. But any managed code...
G
But it's never been thoroughly tested, and so you might run into more issues where... basically, you have the native part, which you can do with libunwind, and the managed part, which you can do with DoStackSnapshot, but there are these in-between frames that you might run into issues with.
G
Stages, right. So I can try and find the issue: somebody was trying to do what you're doing before and ran into all these issues and complained, and I tried to fix some of them, basically band-aid fixes just to get around it, but I never heard back from him. So I don't know if he was ever successful or not.
C
If you can find that issue, that would be super helpful. But here's another provocative thought: I looked at the runtime implementation of the stack walk.
G
C
And it does a lot of things. I only looked briefly, because, you know, I was doing all these other things, the trials and errors that I just described. But I actually don't know too much: when you say managed frames, native frames, I'm not yet an expert on this. I understand how the stack is laid out, and, correct me if I'm wrong, when I think frames, usually it's all about laying out parameters and things.
C
G
Yes, but... so this is one of those things where we've really made it complicated, because we've used all the same terms.
G
So, you know, when you're talking about actual native code executing on the machine, yeah, there's the frame chain and the frames, and that's basically how the ABI, the Unix ABI, works, right? If you're passing in arguments, they go in these registers first, and then, when you run out of registers, you put them on the stack — well, that's AMD64, but you know, that whole thing — and then the EBP chain. So those are native frames. But then, inside of the runtime...
G
We also have additional data that we keep, so exceptions and security and all this stuff, right, yeah. And just generally, every piece of managed code has a thing — a capital-F Frame is what we call it. It's a class, Frame, inside the runtime, and there's a bunch of different... these Frames: there are like interrupt frames and, you know, QCall frames and managed frames, and then there are frameless methods, which are methods that don't actually push an actual Frame onto our stack.
G
But then you can look them up in different ways. So there's a native frame, and then there's a managed frame, which is like all of the details that we need to keep about the managed implementation of the method. And so libunwind can walk the native stuff, but once you get into managed code, that's where you need to use the CLR stack walk.
H
So libunwind won't work with managed code, so.
C
G
C
Why would I need any of this for a profiler? All I need to know is which method I'm in. I don't need to actually be unwinding these, I don't need to be catching exceptions, and I don't even need to be looking at parameters. Although, this is what I'm asking: like, yeah, don't I just need the stack pointer chain?
G
Chain... but now we're starting to get into technical details, and I would have to look up exactly what the problem is. But basically, I don't think we always follow the EBP chain either: once we're in managed code, we have our own conventions, and we don't necessarily follow the Unix ABI conventions.
C
You know much better, I'm just kind of... because the convention is very easy: jump, jump, jump, jump. Once you get to additional information, parameters, blah blah blah, then of course you have whatever conventions, and they may change.
C
It doesn't need to be complicated, for just a stack. I'm just thinking, for example, of ETW, when it works, or perf on Linux, right: the runtime is able to walk the chain and package it into the event. Interpreting it — as in mapping it onto the names of the methods — requires special information from the runtime, yeah, or interpreting the PDB, depending on, you know, whatever the logic there is. But it seems that both the Windows kernel, in the case of ETW, and the Linux kernel, in the case of perf, are able to walk the entire stack without even realizing that this is a managed stack, and have enough information to put the entire chain into the ETW event or perf event, so that it can later be mapped onto descriptive information.
G
There's a... so we've added a bunch of stuff to perf; there are extensibility points. What perf does under the covers is: there's a kernel API that will just return the raw stack, so it will say, here's 32K of the top of the stack. So perf calls these kernel APIs and gets this 32K of raw stack, and then we pass in a bunch of extra information — perf has these, they're called something-maps.
G
I can't remember what the map is called, but basically we pass in extra information as the runtime. So when you use the perfcollect script, we collect a bunch of information about JIT code and ready-to-run code, and then we create a map out of it, and then we pass that map to perf, so perf can understand how to map IPs back to addresses.
H
G
I thought it was the exception information, but then... you're right, why would you need that to walk the stack? So I have to look.
C
To walk the stack, if you want to actually do something, you know, something more advanced. But if you just want to — because this is what I was thinking — essentially do whatever ETW does: just create a list of instruction pointers. Yeah, right? We don't need to use any library; we can just do it, right?
G
H
C
Then I still have an instruction pointer, and I'm not exactly sure how you would represent a kernel call. No, how can it be? Kernel code is not part of the thread; the IP cannot be in the kernel, right? You transition into the kernel always on a different thread, and you wait on the response, don't you?
G
Yeah, yeah. Now, a kernel call is a context switch, isn't it, or am I wrong? Yeah, so the stack itself, yeah, it would just have user code. And now I'm trying to remember what the issue was. So let me look and find the issue; I think I'm pretty close to finding it. Okay, it's just that the guy I was talking to likes to create a whole bunch of issues, and so I have to find exactly which one.
G
So I pasted — this is the issue I was thinking of — where somebody is asking, you know: is it possible to get a call stack, that's IPs only, with standard tools on Linux?
H
C
Yeah, no worries. So basically, yeah, this was the outcome of what I found out. And my next... right now I'm just dealing with scaffolding and hating C++, but it works, and yeah.
C
I think in a couple of weeks — because I'll be working on it relatively focused, so in a couple of weeks — maybe we'll talk about the details of it. For now I'll just do it using DoStackSnapshot, the CLR's API, and run into my first deadlock, and have that joy.
C
Okay, bye, cheers guys. Chris, did you have any questions about this? Or are you muted?
B
Yeah, no, I don't really have any questions at this point. Interesting conversation, so I'm just trying to soak up all the details.
C
Yeah. So, what is it like, just out of curiosity: are you guys, right now, with your whole .NET data collection thing — are you driving it forward a lot, or are you kind of maintaining it and then you're going to switch to OpenTelemetry? It's not decided yet, or...?
B
Yeah, so, good question. So there's a couple of us that are focused on OpenTelemetry, and so I've been set to the side, so that I'm not focusing on the .NET agent that we have anymore, and so that I can help bring this along faster — in theory.
B
C
B
Now I'm helping with the exporter that we have for the OTel. I...
B
And other things there. But, with that being said, we still have a team dedicated to supporting our existing product. But eventually the goal is to replace that product with the OpenTelemetry one.
C
Makes sense, cool, cool. All right, well, thanks very much.