From YouTube: 2020-09-16 Node.js Diagnostics Working Group Meeting
A
And for users that use, for example, an option like abort-on-uncaught-exception, it will generate a core dump, which is good.
A
The proposal is by one of our working group members, and I believe it should make understanding errors better when they propagate across multiple modules. I haven't read the details of the proposal yet, but that's a proposal that I'm really excited for.
C
Dynamic subscribers: right now the only interface is to create a channel object, and so you have to know the exact name of the channel to subscribe to it.
C
The
dynamic
subscribers
we
may
decide
to
add
that
back
in
later
or
a
similar
thing,
but
it
just
it
complicated
the
pull
request
enough
that
it
was
decided.
It
would
be
better
to
just
try
and
keep
it
simple,
at
least
for
the
first
pass,
and
I
also
added
some
changes
to
http,
to
use
diagnostics
channel
to
emit
the
start
and
end
events
around
http
request
and
response.
C
There's an event that is emitted right before the request event is emitted from the server, and then another event that is emitted...
C
...basically when the finish happens on the response, and so you get a complete life cycle of the request itself. And with the hook before the request, we can use that to enter an AsyncLocalStorage context. I have a test that demonstrates you can basically build a simple APM with zero patching.
C
And I also added a very simplistic span API, so it doesn't go nearly as far as stuff like OpenTelemetry or OpenTracing or any of those sorts of things. All it is is an object that is used to express a sequence of events. Basically, with diagnostic channel itself, you'd normally hand it an object, and it just passes that object through directly; it doesn't do anything with it. With a span, it wraps that in another object, and all it does is add an id onto it, just a numeric id, and it shares it between everything that's passed through this span, so you can attribute everything to the same span. But it still doesn't add timestamps or anything like that; it's entirely up to the consuming end to decide what other things they want to add to it.
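A sketch of the span wrapper as described; this API never shipped in this exact form, so all names here are illustrative:

```javascript
// Illustrative sketch of the span idea: wrap each published message
// in an object carrying a shared numeric id, and nothing else.
// No timestamps; the consuming end decides what to add.
let nextSpanId = 0;

function createSpan() {
  const id = nextSpanId++;
  // Everything passed through this span shares the same id, so a
  // consumer can attribute all the events to one span.
  return (message) => ({ id, message });
}

const span = createSpan();
const startEvent = span({ event: 'start' });
const endEvent = span({ event: 'end' });
```

Both wrapped events carry the same id, while the original message objects pass through untouched.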
A
Good. Are there any outstanding objections at this time?
C
There aren't really any outstanding objections. There are a few feedback notes which I haven't really done anything with, because there wasn't really consensus.
C
There was a note about whether diagnostics channel is the right name, or should we call it diagnostic hooks, to align with async hooks. There's another note, which was: is shouldPublish the right name for the function to check if there are subscribers, and should it be a function or should it be a getter?
A
I will try to review it in the next few days. Overall, I'm really excited to have this API online.
C
Yeah, that should be pretty good. I'm currently working on... well, in the diagnostics channel PR itself, I have an extra commit that adds some events to http, just to prove it out basically, and to prove its benefit for being not just in user land but in core, so we can know that it works for core events. My current task that I'm working on at the moment is creating pull requests to add it to Express.
C
So there's nothing merged yet, but my opinion is it should be merged as experimental, just like everything else normally is. The pull request to Express is mostly to just demonstrate how to do it and show the advantage of it, but I don't intend that to be immediately merged, or if it is merged, it would likely be behind a flag, like a feature flag or something like that. But yeah, it's more a demo; the Express PR is more a demo than anything.
A
So the question: the pull request to add diagnostics channel to Node.js, is that request also marking diagnostics channel as experimental, or do we still need to do that before landing?
C
I'm not sure what our process there is, other than I believe in the docs changes it lists it as experimental. It introduces a new doc file. I don't know if there's anything else we need to do for marking a new module as experimental.
C
Now, the API itself, like it being used in core... maybe you could put it behind a flag, but its use in the rest of core would be really minimal: you're just creating a channel, and you have a simple boolean check on it. So I can't really foresee any issues with it landing without a flag, but I mean, it's up to the ecosystem, or up to the other contributors, to decide what the landing process should be.
C
I think that hasn't really been discussed at all in the pull request yet, so I don't know; it's up to the other contributors.
C
Yes, there was one small thing relating to that, which is there's apparently an issue in an API: it doesn't correctly propagate the context in a certain signature of makeCallback. I forget the exact details of it; there's an issue filed about it, and I just pasted it in chat now. I guess it's a discrepancy between what the docs say and what it actually does, and there's some uncertainty about which would be the correct behavior.
C
I just saw this before the meeting, so I haven't looked or thought too deeply about it, but yeah, you should think about it.
A
Okay, moving forward to the next item, which is related to problematic-modules.md. This pull request opened two days ago, on a file that I had no idea existed.
A
Basically, this problematic-modules file lists modules that break async continuity: modules where the way they implement asynchronicity breaks the continuity concept of async hooks, which makes it harder for APMs to correlate events, as well as for any other async hooks users.
C
It's kind of problematic. The reason it was created in the first place was just because, yeah, there are these context loss issues in some of these modules, and there were a few pull requests being opened to some of these modules; and because async hooks is experimental, and still remains experimental to this day, a few of those pull requests were getting rejected or just sitting in limbo forever.
C
So we wanted a way to keep track of these things that need to be fixed.
C
I feel like the way forward is for us to, as soon as possible, try to get at least AsyncResource out of experimental, and probably AsyncLocalStorage as well, and then we can start actually doing something about these modules. But yeah, having the doc there was, in my opinion, kind of a dumping ground: this is a bunch of stuff that's broken, and we'll maybe get to it someday, and no one actually does get to it.
C
Yeah, that makes sense. The reason why they break continuity, though, is not always super well known; for a bunch of the stuff in there, someone just kind of reported that this module breaks and we're not sure why. If it was fairly clear how to fix it, then they probably would have just opened a pull request for it.
D
How can we track the fixes for these problems? The document, the issue, has some modules that don't work so well with async hooks, but when they fix it, how can we update the issue? How can we track it?
C
But if we have an actual issue for this in the repo, instead of just this markdown file, we can link to that in any pull requests we make to other modules to fix their problems. We can also have somewhere that we can regularly post update comments, and we could also potentially have a separate issue for each module that has a problem, so we can discuss them individually, a little more specifically, in one place.
D
So, as of today, we need to open an issue on the module before we track it, or, like, on this repo?
D
We just have the module, but we don't have any issue pointing to the error with async hooks, so we don't actually know if async hooks really works or not.
A
I like Stephen's idea to have a separate issue for each module, and then we can have a label for this ongoing effort. And to know if a module is working or not, we should have at least one code example on each. Even if we don't know why something breaks, we can at least test if it was fixed.
C
Yeah, a reproduction case will help a lot, especially if we have other people interested in helping the working group, that just come along and see: oh yeah, this module is broken, and there's this useful test case I can run right here.
C
I can use that to debug this myself. So it gives us good-first-issue kind of content to work with, and it also helps us, if we get to fixing these things in the future, just to have somewhere to start from, rather than just: oh, there's something broken somewhere in this module.
A
Triaging issues on the repository, and the best practices guide. So I would suggest we use the user journeys again and revisit one of the user journeys we've done in the past, probably performance, I think, or crash; I don't know which one was the one after memory. But we can also talk about unified hooks.
A
...and cause the process to exit. In some cases, the stack trace generated during the crash provides enough information to understand and fix the issue, but that's not always the case. It can be especially challenging to understand the specific behavior that led to an exception when either our stack is different from the original stack trace, or we need more information, like the internal state of the application, to draw conclusions.
A
...which provides no trace data: no stack trace, no objects printed to the output, and the data is available via core dump only. This is usually caused by bugs in the Node.js runtime and dependencies; for example, if it crashes when some input for a function is missing, it will crash the application.
A
Otherwise, I'll just keep reading, since we have 15 minutes. The second type are JavaScript crashes, which come either from uncaught exceptions or unhandled rejections; the stack trace can have enough information in some cases.
A
Third-party libraries without useful stack traces can be problematic to diagnose, and in some cases, when the stack trace is not enough, inspecting the heap is important.
A
For example, in an HTTP server, you might have a crash that doesn't have anything attached to the error object, and you want to look at what was in the request object at the time it crashed. So being able to inspect the heap after a crash happens is a very powerful tool, and in cases where the original stack trace is replaced by something else, finding the original stack trace or the original error object is also important.
A
Going through tools, with gaps in the user journeys: we have the diagnostics report, which was added last year, I think. So in core we have...
A
Yeah, I know we had this functionality of printing to the standard error for a while. I'm not sure if it can detect non-enumerable properties on an object, just because they are non-enumerable and you have to iterate to print them. And I think this year we merged a pull request to have these on diagnostics reports as well, so the situation is probably slightly better, but I think that statement is still true; it just needs to be explained further.
A
They can help identify runtime configuration misuse, for example, a missing ulimit, exhausted sockets, etc. It provides both JavaScript stacks and native stacks, depending on the use case.
A
...which shared libraries are expected versus which ones are available, and it has some information about recent GC activity. So it provides a good snapshot; it provides good information.
B
Yeah, so one of the use cases would be diagnosing hangs, you know, to figure out why the process is not exiting. Are there any active handles? Are there timers still needing to fire? If yes, what is the elapsed time, or what is the expected time into the future, etc.? So that level of inspection is possible.
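That inspection is also available in-process. A quick sketch using `process.report`, looking at the libuv section of the report, which is where active handles and pending timers show up:

```javascript
// Generate a diagnostics report in-process (no crash or signal
// needed) and look at the libuv section, which lists active
// handles: the place to start when a process will not exit.
const report = process.report.getReport();

const handleTypes = report.libuv.map((handle) => handle.type);
console.log('Node', report.header.nodejsVersion, 'handles:', handleTypes);
```

The same data can be written to a file on a fatal error or signal via the `--report-on-fatalerror` and `--report-on-signal` flags.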
A
Yeah, that's a very good feature. It's especially useful for processes that are hanging, but it can be useful for other processes as well.
A
All right, moving on to core dumps. Core dumps are analyzed with llnode; there's another tool, mdb_v8, but it only works up to Node 8, which is not maintained anymore, so it's not really worth adding to the document.
A
It shows the stack trace that caused the crash, the exact stack trace that caused the crash. So, for example, for uncaught exceptions it will most likely be the stack trace of the uncaught exception plus some native functions. For unhandled rejections, since we can't crash at the exact moment it happens, we can only crash once we drain the microtask queue.
A
It can get the original stack trace from the error, as long as it was not overwritten, and inspect properties, which was a bigger feature in the past. Right now, since we print properties to stderr and to the diagnostics report, it's not such a killer feature anymore compared to the alternatives, but it's still useful.
A
It might not be reliable to get prototype properties, and it might also be a performance issue if you have to iterate through all the prototypes to get all properties.
A
Nested properties will be shown as well, so that's a good thing. Another tool that can be used is trace events, if you can get the flow before it crashes, which is not always possible, especially with native crashes.
A
...space usage: basically look if the usage is higher than available, considerably higher. That can help determine whether the thresholds point to a memory leak, or at least memory exhaustion. Looking at the JavaScript stack can show the victim of the memory leak, but that's not necessarily...
A
Basically, any tracing tool that the operating system provides can be used here. We might want a guide here, or at least point to a guide, on how to perform those investigations. The idea is similar to Valgrind: the user will trace memory allocations and frees, and when the process crashes, the BPF program, or whatever, will print the allocations that were not freed, and those are the leaks in the application.
B
Yeah, so for that matter, we already have some documentation reviewed in the diagnostics working group itself and landed. If you look at the documentation folder, you will see a few of them in a completed state. So are you suggesting that we could publish that to the website and then start picking up the content from this document?
B
Or it could be parallel as well. For example, somebody can pick up sections from this document and PR them into the repo itself, in the documentation folder, with the proper markdown and screenshots and other supporting data as applicable, and another workflow can pick up from the landed PRs to publish on the website.
A
Great, that's it for today. Have a good weekend, everyone. Bye-bye.