From YouTube: 2021-07-28 meeting
B: Oh, he's going... he's visiting his home.
D: I'm not sure if Ryan is here, but we have been sort of collaborating outside of this meeting to discuss the cumulative-to-delta processor. Currently New Relic only accepts aggregation temporality delta for our backend, so we have an interest in this cumulative-to-delta processor as well, and I think AWS has a use case for it.
D: We have a number of features that we want to add, and, I guess, bug fixes and stuff like that, behind Ryan's PR, but we wanted to use that as kind of a starting point. So I approved that PR, with the caveat that it's going to be modified.
B: Okay, I will take a look. I don't know if you realize, but we made a lot of changes to the metrics, because we had to move to the 0.9 release, so me and Alex were very focused on stabilizing the metrics. That being said, that's not an excuse, but I will prioritize these PRs next.
E: Yeah, sorry, I was a little bit late. So, are we okay... oh, are you talking about the agenda item I added for the PR?
E: Yeah, there were some PRs, and I saw some of your comments. The first one is the cumulative-to-delta conversion logic. We got some comments, and I addressed some of the basic feedback. I also had a meeting with Ellen, and we are thinking that, for the primary use cases, he also implemented something on his side, and maybe we can take some of his work too and put them together.
B: I think I will look again. I was mentioning before you joined that my primary focus was to transition to the new metrics 0.8 changes. So I think next come your PRs and the other PRs that are still in contrib, not reviewed. One thing that I want you to do as soon as possible is to rebase and make sure that you use the latest core version, because we deprecated a bunch of metrics APIs, and I don't want you to use the old ones and then us going back to change them again.
E: Oh, I see, yeah. I will do that today, and I will follow up, maybe on Slack, for that one. For the second one: yeah, I saw you asked Punya, and Punya replied yesterday about whether it should be a new processor or where it should go, and I think Punya suggested we can make this a new processor.

E: So, also, what's your call here?
C: It sounds like we have two different... sorry, maybe I don't... I was not aware that that is a policy we had adopted. I knew that we are trying to not do much in the metrics transform processor, because that has been implemented in a way that we don't want to support. I didn't know that there's a separate thing saying that we will not add any processors across the board. That seems like a very draconian decision, and I was not familiar with it.
C: Yeah, Juraci, but again, how important is the core versus contrib distinction, given what you're saying, right, that we're going to have components... pretty much everything in core is going to move to contrib.
H: Yeah, well, that's actually why I asked, because my understanding is that we're moving everything out of core into contrib, and it doesn't make sense to add anything new to core right now, right? Only, if anything, directly to contrib.
C: Yeah, correct, and I thought this was contrib. Well, let me not... it is, it is contrib, but it's just about that fact.
C: So I'm almost certain that it is not the right path, right, but we have to... it's like a pressure vent, so that people are not blocked on getting their work done while we come up with the right design. Correct, but I think...
G: Bogdan, we have made a proposal on the processors, and, you know, we can definitely... we also have been working on a metrics processor design, so maybe we can have a design review next time. Yeah.
B: Let's do that, I'm super happy. We already agreed about the high-level things, so somebody is starting to do the work, yeah, but in the meantime I don't want to block Ryan. So let's accept that PR for the moment. The reason why I asked Punya was because I thought that Punya is leading the effort, and I want his opinion and to know about people's needs. Like, look, there came one extra piece of functionality which I think can easily be done by a generic transform processor that can support this. But anyway, I...
C: I think that's right, Bogdan, and I agree completely, and, you know, Min has been leading the... Min, yeah, has been working on the kind of unified thing across signals. Yeah.
G: And we've also made progress on starting to think about specific signals too, right, on the processors. So definitely I think it would be good to have a sync on that.
B: Okay, perfect. Let's not stay too long on this. So, Ryan, as I said, please just make sure that today, after this meeting, you upgrade your core dependencies and stuff, so that you don't use the old APIs that we are planning to remove.
E: Yeah, I'll do that. As I missed the first part: did you say anything specific about the first one, the conversion logic? Do you have any core requirement? As I said, it supports the primary use case, and Ellen will send another PR on top of this, which is kind of already ready on his side. Do you have any hard requirement for this that I missed? I mean, did you discuss anything before I joined?
G: Ryan, I have a question on your processor. Will this be a breaking change once things change and get consolidated into a new metrics processor? Will this be a breaking change for you?
G: Yeah, I mean, I think we should discuss that, because Min had addressed this in, you know, the design that we are proposing, and I think maybe it's good...
K: ...to look at that. So maybe, can I ask one question? So, Ryan, I know you need this data for a different use case in AWS. We already have delta or rate calculation logic existing in the AWS internal fork, right? So, like I told you before, does this processor right now only need to satisfy AWS requirements? If that's the case, probably we should not have this processor, and instead just use the current functionality or utility we already have in the OTel collector.
B: Okay, so, guys, I think that delta-to-cumulative and cumulative-to-delta are independent of the transformation stuff. So I think, and we already decided this, that that will be a processor that will probably stay for longer, because the difference between that and any other transformation is that this one requires state for doing things. So we agree that this will be a standalone processor, and I'm not worried about that. I'm worried about the one that calculates rate.
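The distinction B draws here, that cumulative-to-delta conversion needs state while most transformations don't, can be sketched roughly as follows. This is a minimal illustration, not the actual processor code; the single-string stream key and the reset handling are simplifying assumptions.

```go
package main

import "fmt"

// deltaState remembers the previous cumulative value per metric stream.
// This per-stream memory is exactly the state that makes the conversion
// unsuitable for a stateless, generic transform processor.
type deltaState struct {
	prev map[string]float64
}

func newDeltaState() *deltaState {
	return &deltaState{prev: make(map[string]float64)}
}

// toDelta returns the delta for a new cumulative observation. The first
// point of a stream has no baseline, and a value lower than the previous
// one suggests a counter reset; in both cases ok is false and the point
// only seeds the state.
func (s *deltaState) toDelta(stream string, cumulative float64) (delta float64, ok bool) {
	last, seen := s.prev[stream]
	s.prev[stream] = cumulative
	if !seen || cumulative < last {
		return 0, false
	}
	return cumulative - last, true
}

func main() {
	s := newDeltaState()
	for _, v := range []float64{10, 15, 15, 40} {
		d, ok := s.toDelta("requests", v)
		fmt.Println(d, ok)
	}
}
```

A rate calculation would add the timestamp of the previous point to the stored state and divide the delta by the elapsed time, which is why the two discussions keep landing on the same stateful design.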
H: Yeah, okay, I thought someone was going to say something before me, but, excuse me, yeah. So I just wanted to ask what your current views are, and what the current state of the creation of the new distributions repository is. Right after I added this item I saw a message added yesterday by Sergey and then one today by Mark Carter, but I guess it's a question for Bogdan and Alolita.
H: If it is something that we still want to pursue, and if so, I would like you, you know, to comment there, mainly to confirm that we've talked about this before.
G: Yeah, I mean, at least I think we're all in agreement that we do want to have tooling for the flexibility to build core releases for the collector, as well as additional components being bundled with the core for additional releases, and have that flexibility.
G: So my understanding was that we're working together on building out this functionality, and, you know, Anthony and some of our other engineers have also worked on building out additional Go build tools. So, Juraci, confirming that we do: we are also in agreement.
H: Yeah, okay. I think you were here last week, but I ran a demo on what I have so far on the distributions repository. Basically, it is already able to create a core distribution, and the status as of today is that it can generate for a good number of platforms and architectures.
H: macOS is working with cross-compilation, and I have to verify how it works with Homebrew, because I have no idea how it works; macOS is not my world. But, you know, for the PoC I think it is already good enough for a show-and-tell, and perhaps to move from the PoC stage to a more "this is the solution that you want" stage.
B: Juraci, I see that you asked for a new repo, and I'm a bit confused there. I would actually prefer to put our distributions in the same repo as the builder, just for simplicity. I know I'm biased, because I worked in a monorepo for a long time and I really enjoyed that, but I think it's better to have them closer to the builder. Also, we could run things and check with our distributions whether they are building and stuff like that whenever we change the builder. Yeah.
H: Yeah, the target audience for the builder is really people like, you know, me: building a distribution, like a Red Hat distribution of the collector, not so much end users. The target audience for the distributions repository would be end users, you know, people who go there to find distributions. And the thought behind that is that we could build a UI for the distributions repository that would allow people to mix and match and download whatever they want from that repository.
H: The tool that has been used by the distributions to build things...
L: I think I have something to add. I think people got confused because, you know, with "distribution" there are already things like the Red Hat distribution and other distributions, and so on. Is this repo going to be somewhere that we will be building those types of distributions, or is it just kind of sample distributions? Yeah.
G: I thought it was more focused, Juraci, on developer distributions. I didn't think it was actually end-user distributions, because that comes with a whole, you know, productionized platform, and guarantees that end users need to have, especially customers. So I'm not sure I agree with your assumption.
H: Yeah, so "distributions" is the output of the distributions repository, and the name we can change to better match the expectations, but the output of that repository is a binary that end users can use, you know, can download and then make use of. So the core distribution that we have today, whatever we go to the releases tab on GitHub and see there to download and to make use of, is now going to be made by the distributions repository. So...
H: ...core is only going to be an API, it's only going to be a library type of repository, not producing any deliverables. Okay.
H: Contrib is going to be the second one, and just for fun I added one distribution there called "load balancer", just with the load-balancing exporter, you know, just to prove the point that we can have multiple distributions built as part of that. But I agree, "releases" sounds like a better name for it.
H: All right, so before we move forward, can I ask you all to read the proposal again, go over there and make those changes, or, you know, make those suggestions there, and then...
M: Sorry, Eric, I had one extra question for Juraci before we move on. Is that okay? Sure. When you mentioned adding, for fun, a load-balancing release or distro: would this potentially be a place in the future where something like a cloud-native distro, or other combinations of processors and receivers and exporters, could live? I know that's a topic we talked about a few months back, and I was hoping to connect the dots. Yeah, so...
H: That repository, so the releases repository, is only going to contain things that we, the OpenTelemetry community, want to deliver to end users, right? So perhaps the agents in the future, or sidecars in the future, but right now only the core and the contrib, which is whatever we produce today already. Excuse me: that's for now; for the future...
H: What we want to do is to take the builder and have a place where people can add metadata, like the manifest files and such, and people here can be vendors, and we provide a UI for users to select which components they want. I think the example that was mentioned last week was the Caddy web server, where you can select which modules you want to use and it produces a binary for you, right?
H: So that's the end goal, and I think, you know, the proposal brings all the tools, all the pieces, together to make that possible in the future. I'll review the rest of the proposal.
H: Just, sorry about that: so who commits here to read the proposal by next week? So, Eric.
F: The collector, yeah, so it's just an open PR waiting to be merged. Okay, sorry, but I would like to call this out.
B: Sorry, I just want to make sure we stay on the same page. I think so.
B: Sorry for cutting in on this, but it's good for everyone to stay on track in this meeting. Jana, you are next.
L: Yeah, I have an idea. There is a bunch of productionization-related issues.
L: So I was wondering, as we're getting closer to, you know, stabilization, maybe after stabilization or something for the collector, should we start a bit of a focus group of a couple of people to work on these types of issues? Sounds good, but let's wait a bit. I've been socializing this list with other people to see if, you know, anybody wants to volunteer, and so on.
B: So I think, to be honest, if we have somebody leading this effort and that person is willing to spend time even now, I'm happy; a bunch of them we need anyway at any moment. So you don't have to wait for anything, just...
B: I think we can have a second meeting under the same SIG. If you don't want to start a proper SIG in the whole community, we can have a second meeting under the same SIG Collector and have that. It's up to you, I'm... yeah.
L: I just don't want to create too much process, to be honest; you know, whatever is lightweight. Because this working group may also dissolve after a while, like once we address a couple of, you know, big issues, or it may actually be a, you know, long-term group, and we can maybe turn it into a working group if we feel like it's going to be more of...
C: Hi, so the first one should be quick. I know we have a bunch of outstanding issues about migrating components from core to a different repository. I was curious what would happen to the code history, things like blame and PRs and stuff, so I tried a couple of approaches for doing that. I made a PoC where I picked one approach just to show the trade-offs; there's a little description of what happens. Juraci, thank you for your comment. I understand that point of view.
C: I would be curious to hear from other people. It should not take very long to evaluate this, maybe like two or three minutes, so please, after the meeting, take a look and comment on the issue. I'm totally fine with any of the resolutions; I would just like to hear a few more comments.
C: Yep, so I think Juraci's comment is totally valid, and if we just say, great, when we make the merge, then we reference the last commit where it existed, so that people have an easy time going back. Yeah.
N: Hey, hold up, yeah, yep. Can't we actually... I believe we could, for a directory, copy the entire git history. Is that a...
C: You can, you can, Emanuel; I called out that option also, you can take a look at it again. There are some downsides: it's a smaller number of commits, but now you will get duplicate commits when those commits touch multiple directories, and so the history will, in the end... because we are porting like 70 commits, we may well end up with 2500 commits, which are not the same commits as the ones we're talking about here.
G: In the git logs you would see it, but again, I mean, let's start and take a look, and again, I definitely lean towards preserving history, but we need to figure out, you know, the cross-repo dependencies are the issue. Another...
C: Right, there's one merge commit for each time I bring the PR for each component migration, but that's one merge commit; all the historical commits have the same commit IDs as before. Cool, right? That's the difference between doing a subtree cut and doing the whole history. Anyway, I'm happy to write more on that issue. I don't think it's worth the community's time, I don't think it's worth talking about it here; I'm happy to answer there.
C: I have a second thing to talk about after the git surgery, which is always fun. Juraci, yes? Are signed commits going to stay signed? Yes, yes, okay, cool, yeah, because the hashes...
C: Exactly, yes, that's a really good point. With the subtree strategy the signed commits will now no longer be signed; it's a really good point. If it's manual, if I cut, if I slice by directory, then the signed commits will now be, you know... it will look like someone is impersonating Bogdan in our code.
N: Well, so, do the old signed commits actually matter? Because in order for signed commits to be changed, they would have to go rewrite the history. I think, yeah, just to preserve history, signed commits shouldn't matter too much; it's after we've done the migration that we can restore signed commits from that head. Does that make sense?
C: I don't fully understand, but again, if you can comment on the issue we can carry on the discussion there. Cool, please comment.
C: Punya, you have a last one as well. Yes, the last issue, thank you, Alex. Okay, so the last issue: we've had a few instances now. For the ones I'm familiar with, we changed the stackdriver exporter to the googlecloud exporter, and now we're talking about doing some other changes where we will create one processor and later on the functionality will be subsumed into something else. So I'm wondering if it makes sense for us to have a config rewrite layer, so that the processor can truly be deprecated safely.
C: Correct. So, if I can rewrite... let's say that the new...
C: I'll talk about the two examples I know about. One is this change from stackdriver to googlecloud. So today, what we have is a Go module dependency between the kind of correct name and the old deprecated name, and maintaining this Go module dependency is a little bit complicated, because the tooling doesn't understand it; the tooling expects that all the modules in the contrib repo are totally independent.
B: The stackdriver exporter, yeah, but people don't usually import that exporter. I mean, that's something that may break, but I don't... yeah, yeah.
C: So the other situation I want to talk about is this idea of specialized processors that we are now merging that will eventually be subsumed by a more generalized one, right? It would be very nice if we could say, great, once the generalized one is ready, we gut the specialized one and just leave a little rule that says: when you see a processor config for this thing, construct the other one. Now, of course, again, you can do this.
C: You can write a factory today that will create the other thing, but at that point you are creating a Go module dependency and again dealing with these issues. It might be simpler if you could say: don't use this config, use this other config; I'm just doing a YAML transformation. So, happy to file an issue about this, but I wanted to bring it up in this group because we're...
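The config rewrite layer C describes, renaming a deprecated component key before any factory runs, could look roughly like this on a parsed configuration. A minimal sketch only: the map layout is a simplified stand-in for the real collector config, and the stackdriver-to-googlecloud rename is just the example from the discussion.

```go
package main

import "fmt"

// rewriteConfig renames a deprecated component key in a parsed collector
// configuration before it reaches the component factories, so no Go module
// dependency between old and new component is needed. It reports whether
// the rewrite applied.
func rewriteConfig(cfg map[string]map[string]interface{}, section, oldName, newName string) bool {
	comps, ok := cfg[section]
	if !ok {
		return false
	}
	old, ok := comps[oldName]
	if !ok {
		return false
	}
	delete(comps, oldName)
	comps[newName] = old // the component's settings carry over unchanged
	return true
}

func main() {
	cfg := map[string]map[string]interface{}{
		"exporters": {"stackdriver": map[string]interface{}{"project": "demo"}},
	}
	rewriteConfig(cfg, "exporters", "stackdriver", "googlecloud")
	fmt.Println(cfg)
}
```

A full implementation would also have to rewrite references to the old name inside the pipelines section, and warn the user that their config was translated.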
G: ...interested. This was discussed earlier, also when we discussed the general processor design, and, you know, how we would retain configurations across renamed components, or new components being used instead of the old ones. There is some discussion in the design doc on this too.
G: We have addressed it; if you can, please take a look at that again, you know, happy to. Again, there is obviously a configuration layer that we need to take care of so that there are no breaking changes for the customer or the end user, but, you know, again, let's kind of drill down into it a bit more in terms of how we would do these configuration rewrites, because I don't necessarily think it's only a Go module problem, right?
G: We added, you know, a section; Min actually did some more work on that. So, if you could, please take a look at it. I'm happy to schedule more time to go...
L: I have one more question, actually related to this. So right now you're trying to motivate people to move from the deprecated, you know, module to the new one. Do you log anything, like "hey, this is deprecated"? Like, what is your overall... We do.
G: Yeah, and, you know, the discussion also has been around: do we actually explicitly tag "experimental", you know, in the name of the component, versus, you know, removing that tag, right? So again, there have been different options that have been considered there.
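The deprecation logging L asks about, and G confirms, is usually just a startup warning that names the replacement. A hypothetical sketch, since the component names and wording here are illustrative and not a statement about any real component's status:

```go
package main

import (
	"fmt"
	"log"
	"os"
)

// deprecationWarning builds the kind of message discussed above: the
// deprecated component keeps working, but every startup points users at
// its replacement so the old name can eventually be removed safely.
func deprecationWarning(oldName, newName string) string {
	return fmt.Sprintf("%q is deprecated and will be removed in a future release; use %q instead", oldName, newName)
}

func main() {
	// In a component factory this would run once, when the deprecated
	// component is instantiated from the user's config.
	log.New(os.Stderr, "", 0).Println(deprecationWarning("oldprocessor", "newprocessor"))
}
```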
B: Sorry, that doesn't need any dependency on any component, correct? Because it's going to be on a YAML file. So, even though there is going to be some vendor-specific code, it's not going to bring any dependency; it's just going to be some regex and whatever strings.
L: Actually, you may want to have the structs, right, like the config structs; you may want to have the config structs. I don't...
H: So I just basically posted a link here in the chat of one upgrade from the previous version, of what it takes to get it to 0.19.0, which is quite old now. And what we do is we go over the configuration and, well, we go to the release notes, and then, before releasing the operator, we make an upgrade routine, and that is part of the 0.19.0 of the operator.
H: So whenever an operator 0.18 is migrated to 0.19, this function here is executed. So most of the logic to migrate those kinds of configuration files is there already, and based on our history here, and based on the Jaeger history as well, most of the updates are quite easy: new flags that appear are replacing old flags, so it's just renaming flags.
H: In our case here we had one big change with endpoints; I think it became host ports instead of endpoints, or vice versa, but all of that could be done with Go code and no external dependencies whatsoever.
H: Yeah, the operator for version 0.19 provisions a collector 0.19, and whenever there is a new upgrade, I look at the CR; so we store the state in the CR. So if you have an OpenTelemetryCollector CR in your Kubernetes cluster, in the status object we store what the version of that OpenTelemetry collector is, and whenever a new operator starts, it looks for all the operands that it takes care of and migrates them one by one.
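The operator pattern H walks through, a registry of per-release upgrade functions applied in order from the version stored in the CR's status, can be sketched roughly like this. The version numbers and the key rename are made up for illustration; the real operator's upgrade registry and config handling are more involved.

```go
package main

import (
	"fmt"
	"strings"
)

// step couples a collector version with the config migration that release
// requires, as read from the release notes.
type step struct {
	to      string
	upgrade func(cfg string) string
}

// Illustrative registry: a single release that renames a config key.
var steps = []step{
	{to: "0.19.0", upgrade: func(cfg string) string {
		return strings.ReplaceAll(cfg, "old_key:", "new_key:")
	}},
}

// upgradeFrom applies every registered step newer than the version stored
// in the CR's status, returning the new version and the migrated config.
// Lexical comparison suffices for these fixed-width illustrative versions.
func upgradeFrom(stored, cfg string) (string, string) {
	for _, s := range steps {
		if stored < s.to {
			cfg = s.upgrade(cfg)
			stored = s.to
		}
	}
	return stored, cfg
}

func main() {
	v, cfg := upgradeFrom("0.18.0", "old_key: 8888")
	fmt.Println(v, cfg)
}
```

Storing the applied version back into the CR status after each step is what lets a restarted operator resume the walk without reapplying migrations.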
G: Yep, exactly, and there are also other dependencies, say on Prometheus, which actually does not do versioning today at all. We had this discussion in the Prometheus working group earlier today with Richard and the Prometheus team, and, you know, they don't have a clear way of actually versioning all the components that are available, because they don't look at Prometheus as a library.
G: So there is an issue in terms of being able to establish clear versioning for every single component across the board, and having that.
C: ...concerned with, yeah. And if you wanted to use the operator with a custom distribution, your upgrades would not necessarily be sufficient. Is that right?
G: That's correct, and, I mean, that information, Juraci, is in the changelog, right? I mean, it's not necessarily being encoded anywhere else.
H: It is way too complex for my head, but, you know, one thing that I keep thinking of is: we can, and we should, do better with backwards compatibility, so do changes in two or three stages, right? And we do that on the Jaeger side as well: before we remove something, we deprecate it for at least two versions. Two versions before, we add a warning; one version before, we add a big warning; and then the version where we remove it.
G: But, Juraci, that's a convention you're adopting on the project, which is okay, that's totally okay, I mean, but that's not necessarily built into the tooling, right? That's just a convention that the project has adopted. So, I mean, that's a good way of doing it in the short run.
B: Even that, I think, would be cool: have that option for people and say, hey, you should run this before upgrading to the next version, and see what changed.
B: I think we should all put more thought into Jana's document and add comments there.
B: Okay, do we have any more topics for today?
R: Oh, we have someone new. I'm going to guess your name: Seth! I'll ask you later to introduce yourself, after more folks join the meeting.
Q: Yeah, I was worried I wasn't going to join the right Zoom meeting, because last time, with all the calendar changes, me, Greg and Chris joined the wrong meeting and then had to switch over to the real meeting.
R: Yeah, I think it'll be just us, then; let's go until :35 and then restart.
R: Because, hi Greg, because basically there was the conflict, and I said, oh sure, no problem, you know, we changed the calendar, but then, after that, it was not easy at all. So...
R: Okay, so I think everyone that's going to join is here. I don't know, since you have a name that I didn't see before in the meetings, Seth, would you like to introduce yourself?
R: Okay, yeah.
R: All right, I put some topics related to some of the stuff that we talked about before, some of the PRs.
R: I removed the strong name, because we don't ship a NuGet package; nobody, in theory, is going to link against anything of the PoC. And also, in theory, that doesn't prevent us from loading any instrumentation, as long as we do it properly, without using a reference to its strong name. And actually, a bit faster than I was expecting, Robert merged my PR before we had a kind of discussion on that.
R: I wanted to hear if anybody has any concern. It's on the PoC branch, but if you already anticipate some problem, or if you have any concern about that... The main advantage is just to simplify things. You know, I basically wanted to not have to deal with internals, and perhaps remove the configuration for the build, you know, about using strong names.
R: Yes, yes, and basically what we have to think about happening there is the following: on the PoC branch we build an application; the application is strong-name signed, it uses a library that's strong-name signed, and we instrument both to be sure that things are working. The main thing is that, basically, in the end...
R: ...it is assembly-load code, right? But since we don't have the strong name, and the application has, for its own references, the stuff that we inject, the code ends up happening with the load without the key and without the other information, and, at least so far, it has worked. The cost of re-adding it, if we find something, is not that big, so I thought it was a good moment to experiment with this on the PoC branch.
P: Now, I could be completely wrong about this, but was there an old security control that could be used so that only strong-named assemblies could be loaded within a process?
R: There are some restrictions about that; I think you can do some. I don't know the defaults for that. I should, because I did some work on that, but I don't remember. But there are some restrictions about what assembly load can do, you know, on the Framework. In the tests that I did...
R: ...I didn't encounter that; it worked with the default configuration. So I don't know if it's possible for somebody to perhaps lock down the machine in a way that it doesn't work, but I would say that at least on the typical flow it's working, you know. So perhaps I should try to follow up on that.
R: With assembly... let's say assembly-load lockdown on Windows; I don't have a better name for this. I think, actually, as I said, I think I actually worked on that, but I really don't remember what it was, you know. I do remember that there were some scenarios where we prevent loading if there was a change, or if it's kind of injected code, kind of PowerShell trying to load, that we prevented; something along those lines. I don't remember, I have to check.
R: But, as I said, at least in the tests that I did with the Framework. And mind you, of course, the concern is with the Framework, because .NET Core doesn't care about the strong name. But I will do a follow-up on that.
R: Okay, so Robert didn't join us today, and I kind of wanted to discuss this PR, this pull request here. The main thing is that we use source instrumentations with the SDK, and in this case the code was trying to automatically load a bunch of instrumentations, but the application doesn't have the dependencies. So, for instance, we were trying to load the source instrumentation for ASP.NET Core and it's a console app, so the dependencies are not there.
R: In .NET you explicitly add the instrumentation during setup time, or add a source to listen to those activity sources. So I think it's in line with that, but it's different from a lot of the instrumentations; it's different from what Datadog does, it's different from a lot of the instrumentations that we have, and I think it's also different from New Relic. But at least in the short run I think it's the safest approach: you know, one needs to be explicit about what source instrumentations they're going to add.
P: I think that's a good approach short term. I know that one thing that we've done in the past was sort of declare a minimum set of dependencies, and then, if those dependencies are satisfied, we attempt to load the instrumentation.
R: I'll be curious about that, because that's kind of... it was one of the things that came to my mind. But I want to ask how that worked in practice, you know. Was it reasonable? Just a few corner...
P: So it ended up being just a few corner cases, and so a lot of it was trying not to have dependencies in the first place.
R: I see. And did you do something like trying to load these... I'm not sure how it would work with the SDK, but this is something that crossed my mind also: kind of hooking the load of the assemblies and keeping track, oh, I'm loading this assembly, I'm loading this assembly, I'm loading that one, and, oh, I've completed the set for this instrumentation, and then kick off the instrumentation?
P: Yeah, so there's a few tricks that we've used, nested statics being some of them.
R: I see, I see. So perhaps, when we get to that stage, we look at that. And this is part of the New Relic open-source code, right? Right.
P
Yeah, because the idea is that there's a set of instrumentation libraries and we attempt to load all of the libraries, but there's an intermediate layer that sort of holds a handle to the instrumentation code, and then we check to see if it's relevant or not. Then, if something needs access to types that are in another assembly, that's where we use a nested-static trick, because the statics will only cause that dependency to be loaded when they're touched.
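A deferred import in Python is a loose analog of that nested-static trick (this sketches the general pattern, not the New Relic implementation): the dependency is only loaded the first time the instrumentation actually runs, much like touching a nested static type in .NET is what first forces its assembly to load.

```python
def instrument_payload(payload):
    # Deferred import: json is loaded on the first call, not when
    # this instrumentation module is merely shipped or imported.
    import json
    return json.loads(payload)
```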
R
I see, yeah. So if we eventually get to that, then I think we look at the code that is there in New Relic, to use as a base. It's already—
R
—functionally tested in practice, so we can leverage that experience there. So that was that. The other PR: Rasmus started this big PR with a lot of renames and adding tests. It's quite big, but except for one bug that Robert pointed out, I think it's almost ready to be merged.
R
On the other hand, I know that Rasmus is on PTO, and I think I have the rights — and the approval, for sure — so I'm going to do a pass and try to just close that. That way we can make progress there, he doesn't have to rebase this thing when he gets back, and we aren't moving ahead and then having to pull stuff from there. So I'll try to check the bugs and run this locally, and if there are no bugs—
R
—I think I merge it basically as it is, so we can make progress. There was also a thread on Slack about the versions covered by the SDK — the OTel SDK — because, being Microsoft, they follow the end-of-life dates very closely. I know that a lot of users are not so diligent about that, but I think for us, since we don't have a version released yet, for now the path forward is—
R
—just to follow that. And I would say that in general the vendors have offerings for the lower versions, so users that want to move to OpenTelemetry instrumentation will have to deal with these version requirements from the SDK.
R
I would say that's not ideal, but that's the pace that things are moving at. Which reminds me about other things: we are doing some changes on the SDK, like environment variable configuration. That's why Robert actually started that PR with the SDK version bump — he added the environment variables for Jaeger, which we were handling on the POC branch, and now you can actually remove that code because the SDK itself handles it. And we are doing this for more and more.
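For reference, these are the kind of spec-defined variables involved — once the SDK reads them itself, the exporter code no longer has to (the values shown are illustrative defaults):

```shell
# Jaeger exporter configuration via environment variables,
# as defined by the OpenTelemetry specification.
export OTEL_EXPORTER_JAEGER_AGENT_HOST=localhost
export OTEL_EXPORTER_JAEGER_AGENT_PORT=6831
```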
R
But that puts us in a position where we need to update the reference, and they don't plan another official release until November, when they should have the updated spec and metrics. They are releasing about every month or so, though — alpha and beta versions of the SDK — and I think we're going to be updating whenever they have a new version, to get these fixes that we are putting into the SDK.
R
Of course, if we need something earlier than that, to test or validate, we can always ask the people on the SDK to publish a release for us. But the cadence we should expect is every month by default. CJ did mention that they plan to perhaps release more frequently if they need something for metrics.
R
So if we see any other release, perhaps we can take advantage of that, but anyway, once more, if we feel the need we can ask them. Sorry about my dogs here — somebody is walking outside and they're giving me the alarm.
P
Zach, Greg: do you have any concerns about the .NET Framework support there?
Q
R
Okay — I think last meeting I mentioned a little bit about doing some validations for the DevOps scenarios. I had done some initial ones by the last meeting, and after that I looked a little bit more into .NET Framework, trying to do binding redirects with pre-built applications.
R
It worked so far — whatever I tried — including having a dependency of a dependency, as long as I do the proper redirect. That is sometimes not easy to get right, which is kind of annoying, because you have the dependency of the dependency. Basically you have to do something like this:
R
You have to scan the NuGet package, look at its dependencies, and bring all of that into the redirects. As long as the instrumentation has that, everything that I tried worked. As I said in the past, I don't think we have anyone who's going to be able to take care of that for this release with the SDK — this eventual alpha — but it seems a path that we can look at in the future.
R
I'm doing this because I want to be sure that perhaps we have some path there — not very complicated or hard to implement — and it seems reasonable so far. Which gives me an idea that perhaps a dotnet tool would be very good for that. Robert had mentioned this before, and I think down the line—
R
—if we get to that point, we perhaps can have a dotnet command-line tool where you give it some information and the exe, and it generates at least the bindingRedirect configs — making the scenario easier for people in this case, who don't have access to build the project.
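The output such a tool would generate is the standard app.config binding-redirect section. Everything in this fragment — the assembly name, token, and versions — is illustrative rather than taken from a real dependency scan:

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <!-- Force every older reference to resolve to the single
           version actually deployed next to the exe. -->
      <dependentAssembly>
        <assemblyIdentity name="System.Diagnostics.DiagnosticSource"
                          publicKeyToken="cc7b13ffcd2ddd51" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-5.0.0.0" newVersion="5.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```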
R
So far so good. I don't want to be too optimistic about that — I really want to do some more validations and tests — but so far so good.
R
Okay, those were the topics that I wanted to cover. Anyone want to bring something up?
S
I've been working on profiles and performance for the last while, so my context is a little limited, but I get a lot out of these meetings.
R
I see — on that note, if you don't mind, I'm very curious, because you've been worried about the performance issues in EventPipe. Are you going with the traditional approach of sampling from the CLR profiler, or did you manage to somehow get EventPipe to perform?
S
It was not so much about EventPipe itself; it was about the mechanism specifically for profiling and collection of stack information.
S
That means issuing a suspension signal, waiting until all of the threads reach a safe point and stop, then collecting the information, and then resuming everything. That is very slow because of all these waits, so it's really not suitable for in-production use; for dev scenarios it's just fine.
S
So if in some future — and we talked about this about two months back — the CLR uses a more performant approach to actually getting the collection done, then of course EventPipe can be used. A second challenge with EventPipe, of course, is that it's not supported in all the versions that we're targeting here. I mean, we can drop 4.5, but full framework is going to stay for several years.
S
So because of that, we are using the traditional APIs.
R
I see, I see — it works very well on Windows, but then it's not portable, yeah.
S
The challenge with ETW is that you need to own the box in order to use it. And actually, I would be very curious how your customers really react. We unfortunately made the experience that there are enough customers who are not okay with giving some sort of agent or tracer — or whatever you call it — admin privileges.
S
So because of that — did you have a different experience?
R
We don't require it. For the stuff that we do, perhaps the install requires it, but we don't require running this stuff as admin — we are not doing profiling, we are not doing ETW — so we don't have that pushback. But I'm sure that if we required that — and ETW basically does require it — we'd get the same pushback.
R
One thing I'll say is that there is a specific right on Windows that you can grant for people to be able to profile. You don't need to make them admin; you can give a specific right for people to sample with ETW.
S
I actually would love to know more details — perhaps I'm missing something. From what I know, there are different ETWs. For example, if I want to capture the ETW events coming from the .NET process — from the process that runs the application — then yes, that is possible with non-admin privileges.
S
That I didn't hit — though perhaps because we didn't get that far with so many different Windows versions, since we hit the admin problem first. So, to collect ETWs that come from the run—
S
—time — like GC events and other events specifically related to the behavior of the .NET runtime — that is possible with ETW with just, as you mentioned, special permissions. But in order to collect stacks, like CPU and workload profiling, those events are coming from the kernel, and that is not possible as non-admin. Unless I missed something — which would benefit you.
R
Once more, you do need some specific privilege to do that, and by default just the admin has it, I think. You can grant this privilege to other users, but I don't remember the set, and I don't remember if the set is really contained — perhaps it's one of those things where, when you grant that right, you actually gain a bunch of other things. I'll try to dig that up; this has been in my memory for a long time.
R
I don't remember — I was just trying to search, but I don't even remember the name.
S
I'd love to learn more about this. The challenge is, of course, that once you start looking at these stack ETWs, they're not pre-processed — they're for everything.
S
You can filter them after collection, but yes, it's for everything, and that means it essentially gives you a lot of knowledge about every application that runs on the box. So it's a lot of power and can be used for all sorts of weird things, and because of that it makes sense that it's restricted — if not to admin, then at least to users who already have very high privileges.
R
Yes, yes. I think Windows did do something about per-process scoping, but basically what they did is filter — and the listener has to specify it — so basically you still listen to everything.
S
So yeah, unfortunately. I wish I could use ETW — it's way less invasive — but I couldn't get it to work with permissions that were acceptable to customers, so we're using a normal profiler.
P
Yeah, there may be another complication with ETW — I really don't remember the details, but one of my former teammates was experimenting with ETW so that we could tune our agent, and what we found was that when our profiler was attached, we weren't able to see all the ETW events.
P
I don't remember all the details; I just remember we only got a subset of the data. So then we had to add some work into our profiler so that we could forward some of that ETW data out of the profiler, or something along those lines.
S
Yeah, we use the profiling APIs, which of course adds a bunch of work if one wants to run together with all the tracers, because they also attach as a profiler.
S
That's one thing that can be worked out, but it's a bunch of to-dos. And also, because we want to be very efficient and selective with suspending threads—
S
—we use OS APIs to suspend threads. That means you invasively just suspend the thread — which could be holding all sorts of locks — and then we start stack-walking it. But as you do this, you get all sorts of weird deadlock occurrences, so there is a bunch of very tricky stuff around that. When we write managed code, we all kind of strive to write lock-free code on this side, and the thing is, in the managed world—
S
—it's still sometimes tricky, but it's quite feasible, if you are trained in it, to write lock-free code. What you forget is that it's sort of not really lock-free, because all the memory management and things like that you sort of ignore, and the locks taken there don't count toward your lock-freedom. In native you don't get this luxury, and things are not as lock-free as you think — any memory allocation can take a global lock.
S
That
means
almost
any
api
that
somehow
deals
with
something
not
trivial
can
take
a
global,
lock,
any
output,
any
bigger-
I
you
know,
as
is
not
lock,
free
and
all
sorts
of
windows.
Api
locations
are
not
lock,
free
and
even
global
locks,
not
just
like
shared
between
a
few
things.
So
it's
tricky.
R
Just because we talked about it: when I have time, I'll try to dig up those privileges. I really think they don't solve it, especially because, I think, when you give that privilege to anybody, you give access to all processes — because you can sample any process. But just for the completeness of our conversation, I'll put it on Slack or something, so people — yeah.
R
All right — does anyone want to step up and say anything?
P
So, related to the call stacks and things like that: there was another question in Slack that I think you responded to, Paulo, about somebody wanting a feature — I forget which vendor provides it — that provides something that looks like a stack trace, but it's not quite, if I remember right. That software, yeah.
R
The sweet spot for distributed tracing is showing the interactions between different services, where you have to carry context, but I understand that the devs also want to see it as kind of a profiler. I think there is a bit of feature creep there; basically, it looked to me like what the person was asking about was an instrumentation profiler — the kind that collects every stack and measures the duration of every call. After I posted that, though, the person didn't come back.
R
I actually found a slightly better way of doing it than what I posted — it was good to refresh my memory. There is a StackFrame that you can create directly from the previous frame, so that gives the information they want. But we're talking about shaving some nanoseconds here and there; I think that thing is going to be expensive anyway. I was curious for the person to come back.
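For illustration, the cheap "previous frame only" idea looks like this in Python terms (an analog, not the .NET `StackFrame` API itself): grab just the caller's frame instead of materializing the whole stack.

```python
import sys

def caller_name():
    # sys._getframe(1) returns only the immediate caller's frame,
    # avoiding the cost of walking and capturing the entire stack.
    return sys._getframe(1).f_code.co_name

def instrumented_operation():
    # An instrumentation hook could record which function invoked it.
    return caller_name()
```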
R
I really want to have that conversation, to understand where they're coming from, but my impression was a little bit of feature creep: they wanted to use distributed tracing as an instrumentation profiler, which I consider two different things.
S
The facilities in the CLR that exist for this — they're doing it for exceptions and for security, because sometimes you need to know who called you to decide on the security context, and so they unwind all sorts of things. Per unwind it's very fast, but if you start doing this every millisecond or so, it adds up. So it's not free—
S
—and not something you should make kind of a production thing, really. Actually, it's a pity that the Microsoft folks are not here today. I was just looking at a very interesting thing that I'm observing during unwinding, which I really did not expect to see, and I'm thinking that the CLR creates — or emits — some code that doesn't conform to the Windows calling conventions. But I could also be wrong and missing something, because these are really, really deep details, and I wanted to ask them about this. There are—
S
—leaf functions, and I'm observing them in places where I didn't expect them.
R
I actually did work with some of that stuff, but it was 10 years ago — I can't remember anything. There was the Concurrency Visualizer for Visual Studio—
R
—parsing the stacks and finding the proper symbols, and there is some trickery there in the calling conventions, you are right. I don't remember the details, but even some, let's say, known standard stuff generates a stack that of course works — but then the profiler has to understand whatever you did there on the stack.
S
In x64 you put information about your functions in a table, so when you unwind stacks you can look up where your function code is from that, and there's a Windows API that you can call to get this information — I forgot what it's called. But there is also such a thing as a frameless function, or usually, for Windows, it's just—
S
It's
called
just
called
leave
function,
a
function
that
doesn't
call
anybody
and
it
doesn't
require
a
function
ng.
So
when
you,
when
you
unwind
such
a
stack,
you
just
kind
of
determine
that
this
function
doesn't
have
an
ng
in
the
stable
and
then
you
just
change
your
your
stack
point
that
you
look
at
the
next
one,
but
I
really
did
not
expect
to
find
such
a
thing
in
the
middle
of
a
stack.
S
So
you
have
a
frameless
function,
but
it
did
call
somebody
because
it
just
arrives
in
it
by
unwinding
the
stack.
So
I
don't
know
whether
it's
like
the
clr
is
doing
this
for
some
sort
of
optimization
or
what
to
do
with
it
is
so
yeah.
R
Yeah — talking about Windows and stacks, I remember when I had to debug a Go application with WinDbg. It was not fun; the stacks looked like, yeah, no stacks. You basically have to operate with WinDbg to understand the Windows stacks — what is being called from Windows — and with the Go debugger to understand what the Go code is doing, because it doesn't look like anything to WinDbg. It's crazy.
R
Or
anything,
we
talked
about
your
pr
about
the
requirement
to
be
explicit
about
the
source
instrumentations,
but
we
agreed
that
that
is
the
the
path
right
now
and
chris
mentioned
that
actually
new
relic
had
some
experience
with
kind
of
checking
the
the
modules
to
kind
of
load
the
instrumentations
in
the
future.