From YouTube: 2020-09-02 Node.js Diagnostics Working Group Meeting
A
Okay, so moving to the regular agenda. The first item is issue 34895 on the diagnostics working group repository.
A
Oh, no, that's the pull request on nodejs/node: create diagnostics_channel module.
B
This builds on the diagnostics channel work that the Microsoft folks did a few years ago. I'm just making a bunch of changes beyond what they did originally to make it a little better performance-tuned, and I got a bunch of different feedback, which I've tried to incorporate as much as I could. Currently I'm in the middle of working on unsubscribing.
B
It's been a little bit awkward and I haven't quite solved it yet, but once I get that solved I'll get back to it. There are a couple of other comments in there that should be relatively easy to deal with.
B
Okay, and the reason to start with the HTTP server is that it's one of the core things we currently have to monkey-patch, because we need to be able to wrap the entry points of an HTTP request in the appropriate context to create transaction objects. We can sort of hack that a little bit currently by inserting request event handlers before the actual server request handler and injecting our own precondition stuff to set things up.
B
Yeah, with this diagnostics channel we could technically just emit a pre-request event, and the same thing for the setup, but there's a bunch of extra stuff that we want, like information about the request. We want to be able to intercept the URL and things like that, just to be able to attribute things like "this request came from this URL."
B
If I can get a demo of that in there, I think it should help demonstrate that we've eliminated this entire category of patching that every APM has to do; it's fragile and has demonstrably broken several times.
A
Do you have an ETA for landing this pull request? I know it's hard to estimate landing stuff on Node.js, but...
B
Yeah, I don't have any specific ETA. I'm under contract from Datadog to work on this for three months; I've just reached one month in, so I have two more months to work on this.
B
Okay, I'll try to get it landable as soon as I can, but it's up to everyone else reviewing what they think is acceptable.
A
It's a new top-level module?
B
Right, it apparently is. I'm considering a slight change: I could move the module into internals and then expose it somewhere else, like under the async_hooks module, even though I don't think it fully makes sense there, just for the sake of being able to land it in a minor. I could do that and then have a major that re-exports it in the place we ultimately want it to be.
C
It's too bad we didn't call async_hooks something like "diagnostics", or something that was a bit more general, but...
B
Yeah, one thing I was actually thinking about while working on this is that it'd be kind of nice if Node had a module like "nursery" or something like that, where any new experimental thing automatically gets exported, and then, when it's ready to go prime time, it gets a major release with an actual top-level module. That makes it easier to get things out there more quickly without interfering with the top-level namespace, and it also gives a bit more clarity.
A
I think the only challenge of having an "experimental" or "nursery" top-level module is that modules that adopt it early are going to have to rename it later in their code when exporting and importing.
B
Oh, what you can do without too much pain is put something in this experimental or nursery module, whatever you want to call it, under a namespace, so that's the namespace it's intended to have as a top-level module when it's ready. Then when you go to require it, you can just do a try/catch require: try to require the top-level name, and if it doesn't exist, fall back to the experimental path for it.
B
It just does something like channel(name).subscribe(), so it goes directly to creating the channel and subscribing on that channel. But that channel might get garbage collected, because it's not being stored on the diagnostics channel manager anymore, so that subscriber actually needs to get stored in the manager and then attached to named channels any time they appear.
B
So we need to make sure all subscribers are held indefinitely unless you explicitly unsubscribe, and as part of the unsubscribing work I'm changing it so that when you subscribe to a name, which maps directly to the channel, it stores that in the subscriber list on the diagnostics channel object instead of directly on the channels immediately.
B
You can run subscribe on a channel directly, but that's less intended; channels are more intended for publishing.
B
So in your module you might, at the top level, create a channel object and then keep calling publish on it inside your module functions; it being held by lexical scope means it will live as long as it needs to.
B
Well, static-ish; it would live as long as the module. So if you require http, then you go and delete the module from the module cache and stop using it, it might get garbage collected.
B
Outside of core, yeah, people could create these anywhere in theory. In the docs I want to strongly encourage that you create a top-level publisher, your top-level channel, because creating these at runtime all the time is expensive.
B
You can reference the channel directly in memory, so there's no name lookup every time you go to publish. Currently with an EventEmitter you do emit with whatever your event name is, and then the data after that, and every time you do that it has to look up the name in the events object, find the name in that list, and that lookup is not free in the EventEmitter design.
B
Whereas if you have a direct reference to this object, you can do something like that. In this pull request I actually have two separate prototypes for the channel object: once it gets its first subscriber, it switches to a new prototype that actually does publishing.
B
Some Express folks, but other than that I haven't really reached out too much to other APM vendors or anything yet.
A
It would be good to have buy-in from frameworks and from some APMs before landing it in core, just to make sure that it's going to be used, because if we release it and it's not used, it doesn't really make that much sense.
C
Tomorrow... the 23rd is the next one; that's right, sorry, the 22nd is the next meeting. Let's see on github.com what the agenda looks like.
A
All right, anything else on this topic?
A
So we have 30 minutes for a deep dive. We have a few topics; I would suggest we go through the existing user journey documents just to see if we are missing anything.
A
Exceptions about allocations, etc.; GC activity; more frequent crashing because of lack of space.
A
So, reading through these symptoms, one thing I'm thinking is that we should rename this from "memory leak" to just "memory issues", to also cover the cases where we're running out of memory because of, for example, higher allocation rates.
A
The other symptom we have is... which is not a symptom, okay. The next step is to confirm that you have a memory leak.
E
On the other hand, the node-report... I guess we are referring to the third-party module node-report.
A
Oh yeah, thanks. Users could compare some of the heap spaces.
A
And here is a note on some improvements that we can make to the report. We also have GC traces: turn them on and observe whether GC is getting less and less effective, or whether after usage cycles memory usage is not going back to previous levels.
A
Those traces are also able to show how long GC is taking. I'm not entirely sure they show how much space is allocated, though, so it's only timing information.
A
The upside of the perf_hooks approach compared to the other methods is that it's JavaScript-only, it's in core, and you don't have to restart the process, so this is probably the best way to do this analysis in production. But if you also need allocation information, the C++ API is more comprehensive.
A
Yeah, since this is not Node.js-specific, I don't think it's worth going into a deep dive about it. Just mentioning that Valgrind exists is important, in my opinion, but going into a deep dive is probably not relevant to our documentation.
A
Gaps in current use cases and tooling: native memory. I don't think we improved the situation here that much since we started this document.
A
Yeah, I think things like JS objects mapping to the native heap improved a small fraction with the memory tracking mechanisms we have in core, but since it's still a mostly manual process we can still miss some allocations, and native add-ons especially can miss allocations.
A
It's related to the problem above, in my opinion: since we can't track native memory from the heap, we can't determine how much space the JS is actually holding.
E
Yeah, even I did not understand that very well. So basically, if I understand correctly, the live objects are scanned through a graph based on the active references to the objects through the thread stack; everything else is deemed garbage, so there is no priority between garbage objects, right?
A
Yeah, here's where "leaking" starts to get a bit ambiguous, because we mix leaking from native code and from JS, which have different meanings.
A
I know some of this situation has improved, but I'm not sure if we improved all of it, especially for those objects; I agree with you that those are the most likely to generate leaks, so it's probably worth double-checking.
E
Yeah, so I guess there are two things here. One is that the leak is happening in the C++ layer itself: some new objects are getting created in the C++ heap and they are not garbage collected, not freed as expected.
E
That is one type. The second type is that the JavaScript heap is holding on to a huge object, but the size of the object is not fully reflected in the JavaScript type as seen through the heap profiler tools. We see a small amount of heap as retained, but behind the JavaScript object there is a huge amount of native memory pinned to the JS object. I think that is a category of problems which could potentially be seen in production, or rather, that's where production could be impacted.
A
And just a quick comment: you also mentioned sockets, file handles, etc. Those are more likely to leak system resources than memory, because they don't hold that much memory.
A
So, about this API I'm talking about: we had a note that sounds like the API is not available from the profiler or snapshot. I think it is today.
A
And then we had some action items that I'm pretty sure we didn't complete, at least not all of them. Document APIs: I think we documented some stuff for memory leaks, but not all of it. And engage with V8: we haven't engaged with V8 as a working group in a long time.
A
The code is very complex; it has a lot of DOM-related stuff which is not relevant to Node.js.
A
So I still think there is potential for optimization in the heap snapshot. I'm not entirely sure if it's worth investing the time; it might be worth it if we're collaborating with V8, but alone I don't think we have much.
E
Yeah, so one of the thoughts I have, and it's just a thought: I'm not sure how V8 implements the garbage collection algorithm and how it manages the JavaScript heap, so without knowing too much about that, here is my idea.
E
I guess it definitely has the young (or new) generation and old generation concept; that means objects are graduated to different levels of the heap based on how long they survive in the heap. So as objects move from generation to generation, is it possible for the V8 folks to basically annotate, or account for, or segregate each object based on the generation in which it was born or lived, and essentially be able to give us a snapshot of only the delta difference from the last snapshot?
E
That information is maintained at any point in time, so when we ask for a snapshot, V8 could just look at the generation information and provide only the delta. It may be a lightweight approach if it is implemented in V8. But again, I'm just thinking aloud; I don't know how easy or how possible this is.
A
If the user runs it over a short period, I think it will probably be fine, but I'm not entirely sure about the long run.
E
But if you look at the diagnostic procedure for leak analysis: when you take a heap snapshot, or compare two heap snapshots, you look at the new objects and the deleted objects, and you always look at the top objects, the top 10 or 20. You don't really look at all the objects that got created and all the objects that got deleted, so there is always a level of approximation being made in the debugging and problem-determination step.
E
So that means this approach, though it is error-prone, meaning it might miss some of the object movements, would still work for most practical purposes of diagnosing memory leaks and finding the predominant leak suspect.
A
Yes, that sounds really effective, and...