From YouTube: What’s Up with the Community Post-Mortem Diagnostics Working Group - Yunong Xiao & Michael Dawson
Description
What’s Up with the Community Post-Mortem Diagnostics Working Group - Yunong Xiao, Netflix.com & Michael Dawson, IBM
It’s important to be able to figure out what’s going on when things go wrong in your Node.js production application. Tools are needed to investigate memory leaks, crashes and other "interesting" events in production. The post-mortem community working group (https://github.com/nodejs/post-mortem) is working on these problems. Come and learn about the key issues being worked, and the progress of the working group so far as illustrated through examples and code.
A
Good afternoon! How's everyone doing? This is good. This is a good-looking crowd, lots of people here. This is great. We're going to give a talk here on the Node.js post-mortem working group and give you guys an update. I'm Yunong from Netflix, and this is Michael from IBM. About me: Netflix. You guys probably already know who we are. We're really interested in this stuff, and we'll get into some of that later. But if you don't have Netflix already, maybe try it out.
B
I'm Michael Dawson. I'm the technical lead for the team within IBM that builds and manages Node for consumption in our products, as well as delivery to our customers through things like our Bluemix PaaS. Our development approach is really very much in the community: we plan and develop out in the community and then pull those improvements back in.
B
The other thing we want to point out is that we're just the two people who have the privilege of being here to talk to you about what's going on in the post-mortem workgroup; there are a lot of other people who are active participants. In particular, there are a couple of people, Julien and David from Joyent, who did a lot of the foundational work that we're building on.
B
In terms of post-mortem debugging, we have a few people from IBM, like Howard and Richard, who are active on some of the things like node-report that we're going to talk about today. So before we go farther, I just want to make sure you know that there's a good team of people already working on this. If you're interested, though, we can always use more people and more involvement, so come out and get involved if you have any interest at all.
A
So let's quickly talk about the mission statement, since most of you are probably wondering what the post-mortem working group is. It's really dedicated to the support and improvement of post-mortem debugging for Node.js, and the thing that I want you to focus on here is debugging. That's really, really important for us, and I'm sure for a lot of folks in this room. Is anyone here debugging Node.js? Anyone use some of these tools? I'm sure we've all used the integrated debuggers in our IDEs.
A
A few of you have probably used the node debug command-line tool, and, like myself, I mainly use console.log because I'm too lazy to set things up. But this is sort of the set of debugging tools in the Node debugging landscape today, and there's a common theme here, which is that you usually only run these debugging tools in your test environment. And why is that?
A
Let me tell you why, with an example; the reason that you can't use these in production, we'll get into that. So, for example: do you know what this is? Yeah, that's right, it's Pluto! It used to be a planet, but it's probably pretty sad now. It still loves you. And as you know, there was a probe sent to Pluto a couple of years ago, the New Horizons probe.
A
Think of the probe as production: I'm sending a probe to Pluto, and it's got this very short amount of time to fly past Pluto and take images and pictures. So suppose the probe crashes, or it's got a bug. Do you want to attach a debugger, pause the probe, and debug it while it's flying past Pluto? Of course not.
A
You want to be able to capture all the data really quickly, and you don't want to miss the once-in-a-lifetime opportunity as you're flying past Pluto. In production we have the same constraints. And it turns out that NASA, in all its wisdom back in the 70s, actually released a paper detailing a methodology built around what they called a core dump: what they did with spacecraft is, they would take a core dump when something failed on the spacecraft and then immediately restart it.
A
Their goal here is to minimally impact data acquisition: you're flying past Pluto, and you only do it once. So if there's a problem with the spacecraft or the software, you want to debug it so that it doesn't happen again, but you can't afford to do it right then. So what you do is take a core dump and then look at it later. Let me give you a brief history of the core dump, because that's where post-mortem debugging comes from as well.
A
So back in the day, memory was actually stored on magnetic core memory. Here's a picture of it; every single pin is a bit. What programmers and engineers used to do back in the day, before they had fancy debuggers, is they would dump all the contents of the memory and inspect it, and that's why they call it a core dump: it's the contents of core memory. This stuff was initially printed on paper, and the term post-mortem debugging was really born at that time.
A
So what post-mortem debugging really means is: hey, I take a snapshot of my process, we restart it, and then I can look at that snapshot to debug later. That's really helpful for production, because there are production constraints, as I'm sure most of you know. As in the example with NASA, uptime is critical: for Netflix, and for any of you out there who are running your own real-time services, you have customers that are directly impacted by downtime.
A
So if you have a bug in production, you want to make sure that you resume service as soon as possible. At the same time, you want to capture that state so that you can debug it later, so you're not hitting that production problem again. For us at Netflix, it's very hard to easily reproduce a bug: given the tens of millions of subscribers, the different UIs, and the different A/B tests in all the different environments, it's really, really hard for us to hit the same problem again, like I said.
A
You want to figure out what that issue is and fix it, because the whole point here is to make sure that you're debugging the problems that you see, and not just restarting the services and forgetting about them; if you're not debugging them and fixing them, they're going to hit you again. And so for us, customer availability of the service is really important, so we want to drive down the cost of most of our errors, and you can do the same thing here.
A
So you can enable Node to core dump: when it starts, you can run node with the --abort-on-uncaught-exception flag, and any time you throw an error that's uncaught, node aborts and leaves a core dump. Normally, when you throw an error that's uncaught, node immediately exits with just a stack trace.
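As a minimal sketch of what that looks like (the file name here is just for illustration):

    // crash.js -- an uncaught throw, for illustration
    setTimeout(function () {
      throw new Error('boom'); // nothing catches this
    }, 100);

Run it plain and node just prints the stack trace and exits; run it as node --abort-on-uncaught-exception crash.js (with core dumps enabled, e.g. ulimit -c unlimited on Linux) and the process aborts, so the operating system writes a core dump you can inspect later.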
A
In this case, even though you have a stack trace, you may not know what the arguments were, or the variables, or the state of all the variables on the heap. With the core dump, you can look at that, and this working group is really about providing the tools for you to be able to inspect all of that information. In addition to just capturing a core dump when your process exits, sometimes it's also really nice to be able to take a core dump of your process while it's running.
A
Even when things aren't failing immediately, you can do that on most Unix-like operating systems by running gcore followed by the PID of the process, and this will take a core dump of your process while it's running. It briefly pauses the process, but it's really fast, so it's a really easy way for you to capture a core dump of a process you think may have gone awry and then look at it elsewhere.
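As a rough sketch of that workflow (the PID is illustrative):

    # find the node process and take a live core dump of it
    pgrep -f "node server.js"   # say it prints 12345
    gcore 12345                 # writes core.12345, pausing the process only briefly
    # copy core.12345 to a debugging machine and inspect it there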
A
So let me illustrate the advantages of post-mortem debugging with an example. I gave a talk earlier today about our new API and the Netflix API architecture; if you're interested, you should check out the video of that later. But for all of our clients, when they come into our new architecture, they hit a Node-based process where they've written their APIs, and that service then forwards the request on to the edge API, which then forwards on to the backend services.
A
Within Node, we have a connection pool that helps us do client-side load balancing to the edge service. You can think of this like a low-level connection pool: we have a bunch of connections, and those randomly pick some subset of the API instances that are available out in the world. And what we were seeing when we were testing this new stack is that the Node services were consistently saying that there were no free connections in the pool, which is really weird. How would you actually debug this?
A
If you think about it, I don't have context or logging that tells me what's happening right now with the connections in the pool, and when I go look at all the available instances out there in the API service, they're all up. So there's some bug in Node, or in our client load balancer, where we're not making those connections, and I don't actually know the internal state of my program. The other thing that was hard about this is that when we restarted the service, the problem went away.
A
So if I bounced the service, it went away, and then at some point in time, on some small number of these instances, the problem came back. You know, I could go add logging statements, but I'd have to wait for everything to reproduce, and maybe I didn't add the right logging statement, and so, hours later, this iteration would take forever. So what we did instead was, again, use post-mortem debugging techniques.
A
We take a core dump and then restart the app immediately; then we have a fully baked load balancer with its full set of connections, so we're continuing to serve traffic. And then we took this core dump and loaded it on our debugging machine to try to figure out why the load balancer wasn't working. Here we're using a tool called mdb_v8, which is a tool that's produced by the post-mortem working group.
A
The command we're using here says: hey, find the JavaScript object that contains all my connections for the connection pool. And here we could easily see (this is the actual JavaScript object) that, hey, we have a bunch of free connections, but none of them were connected. That instantaneously pinpointed for us what the bug was.
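A sketch of what such a session looks like with mdb_v8 (the constructor name here is hypothetical; ::findjsobjects and ::jsprint are the real commands):

    # open the core dump in mdb and load the v8 support module
    mdb core.12345
    > ::load v8
    > ::findjsobjects -c ConnectionPool   # hypothetical constructor name
    > <addr>::jsprint                     # print that object's contents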
A
There was a block of code where, when connections were freed, we weren't putting them back into the connected pool. So this was a really good example of how post-mortem debugging can really help you solve these really nuanced, sometimes subtle bugs in your code that you can't reproduce easily and that you hit in production. And with that, I'll hand it over to Michael.
B
Thanks. Yeah, thanks. Yunong gave us a really good example of why we should be interested and why this is important. I'm now going to delve a little bit more into the current state and what the working group is doing to improve it. So the overall mission of the workgroup is basically to guide the improvements in the post-mortem story. That includes defining things like interfaces and APIs: we want to make it easy for people to build new tools without having to reinvent the wheel.
B
So: good APIs and interfaces that let us introspect the artifacts we can get out for post-mortem debugging. Similarly, things like dump formats: we want to define and standardize those, again so that you can build on top of them and leverage them, and then also build up a set of tools and techniques that helps people who are newer to the concept get involved and get ramped up quickly in terms of getting their debugging going.
B
Taking a little bit of a look at the state of the tools today, there are really two key artifacts that people use for post-mortem debugging. The first is a heap dump, which is a snapshot of your heap. In managed runtimes, of which Node is one, the memory that your application uses is largely contained within that heap, and you can get a lot of information out just by looking at the heap contents.
B
Today, traditionally, that's done with the heapdump module, an npm module made by Ben Noordhuis, which is the one most commonly used. You can take the heap dumps that it generates and open them in Chrome Developer Tools, and then, say, compare two heap dumps to see if you're leaking memory, or try to figure out some information about the objects. There are, however, a number of limitations in using the tool, not necessarily related to the implementation of the npm module itself.
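For reference, using the module looks roughly like this (the file path is illustrative):

    // npm install heapdump
    var heapdump = require('heapdump');

    // write a V8 heap snapshot; open the file in the Chrome DevTools
    // Memory tab, or diff two snapshots to hunt a leak
    heapdump.writeSnapshot('/tmp/app.' + Date.now() + '.heapsnapshot',
      function (err, filename) {
        if (err) console.error(err);
        else console.log('snapshot written to', filename);
      });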
B
That's just due to the fact that heap dumps encode a lot of data and relationships, and so generating them can take quite a long time when you have a large heap. The second artifact that's often used is the core dump. Instead of just being an image of your heap, these are a complete memory image of the process that was running at the time, and, as Yunong mentioned, these can be generated in a bunch of different ways: when you crash, operating systems will often give you one automatically.
B
You can add --abort-on-uncaught-exception to generate one when you have an uncaught exception, and you can also use tools like gcore to generate them. The advantage is that they're fast to create relative to heap dumps; they can still be fairly large, though, if you have a large process size. The first way you can start to look at those core dump artifacts is to just open them up in the debuggers.
B
Unfortunately, that can be a lot of work, because the debuggers don't have any knowledge of the structures of V8 and Node. If you want to actually look at an object, you have to know that, okay, the map that has the fields is four bytes into the object, and then decode all of that. So you can look at it with those tools, but that's a lot of work. To get over that hurdle, a number of tools have been developed.
B
These are basically core dump inspectors that know about the V8 and Node structures. There are several out there today: Yunong mentioned mdb, which is one of the earlier ones; IDDE is another one, made by IBM; and llnode is a newer one that's being worked on within the workgroup. If we take a quick look at some of the commands across these different tools, it'll give you a flavor of what you can do with them: you can print a stack trace for the executing thread.
B
You can find objects, you can print the contents of an object, find constructors: all sorts of really useful information. Today, unfortunately, the story is a little bit fractured. If you look at the different tools, they have different commands, and each has a different subset of commands. That's one of the areas where we're working to try to bring everything together, to make it easier and more consistent to use across platforms.
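To give a concrete flavor, this is roughly what an llnode session looks like (addresses elided):

    # open a core dump together with the matching node binary
    llnode /usr/bin/node -c core.12345
    (llnode) v8 bt                       # stack trace of the executing thread,
                                         # JavaScript and native frames inline
    (llnode) v8 findjsobjects            # summarize heap objects by constructor
    (llnode) v8 findjsinstances Timeout  # list instances of one constructor
    (llnode) v8 inspect <addr>           # print the contents of an object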
B
So, looking at the current state, how do we actually make that better? For one, we want to improve ease of use: that involves better platform support and more consistency. We want to provide more APIs so that if you don't want to use the existing inspectors and their existing commands, you can write your own more easily. We want to ensure platform support across all the operating systems and architectures, and common command sets.
B
That way you only have to learn things once. And then, a little bit different from the core dumps we've been talking about: often, if you have a problem, you don't need all the information in the core dump. So is there some smaller subset that we can get even faster and even easier, and actually look at even with less tooling, that will help us get going really fast? That's what we're calling the lightweight dump.
B
So specifically, the working group is working in a bunch of key areas now (the overall goal is what we've outlined), and the first one is the common heap dump format. That's a common format for the heap dump, which we can then let tools interpret. The other difference from what we do today is to be able to generate that format from a core file.
B
The key thing we see there is that it's a real enabler for new tools: if we have this format, and you can use APIs or read the format itself, then that's a way we can accelerate the path to having lots of good tools for doing post-mortem debugging. Our thought on generation is that you could actually generate it from the core file.
B
In terms of consuming those heap dumps, we're all familiar with the Google tools, so probably something that converts that format into the existing format, so that we can still open them up in Chrome Developer Tools, is going to be useful; and then also the C and JavaScript tools, so basically whatever new tools people can imagine can read off that core dump as well.
B
In terms of core dump analysis, the focus is really on getting better platform coverage and reusing the existing command implementations. mdb has a good set of implementations for those commands; we'd like to bring that logic and code, if we can, to some other platforms like llnode, which has perhaps a little broader platform support, and then build common APIs that you can interact with on top of that, stuff like that.
B
If you're interested in those things, you can go out to the repo and look at issue 37 for discussion on the common C library, as well as llnode, which is a project that was started elsewhere but will soon be moved under nodejs as part of the working group's efforts. The goal we're trying to get to is basically this picture: you've got your core dump generated.
B
You then have one or more debuggers, so mdb, llnode, dbx (which is a debugger with AIX platform support). They'll have core dump readers that know how to understand core dumps, possibly from different operating systems, so you don't even necessarily have to debug your core dump on the same platform where it was generated. Then, on top of that, the debuggers will provide a set of APIs, the C API, and built on top of that will be the JavaScript API.
B
The last thing I'll mention that we're working on, sorry, second to last, is node-report. Basically, the idea here is that we want a lightweight dump which is very fast to generate, small, and human-readable, and which has the key information that you need to start investigating. There are lots of issues I've run into where all you needed was, say, the environment variables that were set when you ran it.
B
What was my ulimit? Was my ulimit 0? What was my heap setting? Just by looking at that, you could basically figure out what was going on. So it's to give you that kind of information, and to be able to be generated off things like exceptions, fatal errors, and signals, or even to have the JavaScript API say: hey, I want to generate a node report. This is a quick example; it's a bit hard to read, but basically there's already work on this.
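A minimal sketch of using it (API names as in the nodejs/node-report repo at the time):

    // npm install node-report
    var nodereport = require('node-report');

    // write a human-readable report file on demand; with the module
    // loaded, reports can also fire on fatal errors, uncaught
    // exceptions, and signals
    nodereport.triggerReport();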
B
You can go to the repo, nodejs/node-report, and take a look at the code there. It gives you just the key information you need about things like the event that triggered the report, your OS and Node versions, a stack trace of the currently executing thread, heap and GC statistics, resource usage, OS ulimit settings, all that kind of stuff. I think you'll find that in a lot of cases it will let you solve the problem without even having to delve deeper.
B
In terms of the JavaScript API, the goal here is to make things more accessible. Today, if you want to add to or extend the post-mortem tools, you have to actually do that in C and C++ code, which is a bit of a barrier to entry, and to speed of development, for a lot of people in the community. So what we want is a JavaScript API that can work either off that common heap dump format or perhaps directly off a core file.
B
In the case of llnode, in our proof-of-concept work, the JavaScript API just drives and inspects the core dump using the lldb debugger. If you want to read more about that, again, you can go to issue 33 in the repo. And here's just a sample application that we wrote.
B
Given a core file, this simple Express application can go out, through JavaScript, and show you the stack trace for your application. So you wouldn't even necessarily have to copy that core file off your machine: you could simply have a little application which lets you introspect it where it is.
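A hypothetical sketch of that kind of application; the inspector module and its API here are invented for illustration, and the actual proof-of-concept interface is the one discussed in issue 33:

    var express = require('express');
    var inspector = require('core-inspector'); // hypothetical JS binding over llnode

    var app = express();
    app.get('/stack', function (req, res) {
      // hypothetical API: open the core file in place and walk its frames
      var core = inspector.load('/cores/core.12345');
      res.type('text').send(core.getStackTrace().join('\n'));
    });
    app.listen(3000);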
So, in summary: we've told you what post-mortem debugging is, Yunong gave a very good example of where it's helpful, and we gave you a little bit of an overview of the activity in the working group.
B
That's the common heap dump format, the APIs we're working on, and some of the tools: IDDE, llnode, mdb, and node-report. Before I hand it back to Yunong, I just want to say: get involved. I think this is a really good chance to learn. You can learn low-level machine details, key debugging techniques, different platforms and operating systems. So if you want to learn any of those things, this is a great place to get involved to do that.
A
Just like what I was saying earlier, this is really critical for a lot of folks, and for us as well. So we want to see the community, which is you guys, and the Foundation, and the folks working on Node really investing in this debugging tooling. For a really mature runtime and ecosystem, we want to see more uptake and more adoption, especially in production. This is sort of the critical tooling that you need to be able to run your services reliably.
A
And the takeaway here with post-mortem debugging is: there are going to be cases where some production problems are just otherwise impossible to debug, and the only way for you to do that is by using the post-mortem tools, where you can save that complete process state and debug it later. That's what we had. Thank you very much.
[Audience question, off mic.]
A
So, actually, that's a great question; that already exists today. If you fire up mdb or llnode with a core dump and you print the stack trace, it will actually show you everything from system calls up through libuv: everything from libc, to the native V8 stacks, to the JavaScript stacks, all the way up to the stuff in your JS code.
A
So you can already do that today. And then, if you're using other techniques where you sample the stack traces (this is a little bit less relevant to the post-mortem case), you should be able to see everything as well: with DTrace or perf, you'll be able to sample those stack traces and see everything all the way down to the system calls. All that stuff already exists today.
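One common recipe for that sampling approach on Linux (the PID is illustrative):

    # have V8 emit a map of JIT-compiled function names for perf to resolve
    node --perf-basic-prof server.js &

    # sample stacks, JavaScript and native, at 99 Hz for 30 seconds
    perf record -F 99 -p 12345 -g -- sleep 30
    perf script > stacks.txt   # dump the resolved samples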
C
[Audience question, off mic.]
A
Yeah, yeah, I don't have my slides up, or I could show it. Those stack frames: native debuggers like mdb or llnode or gdb don't understand the JavaScript stacks, because they don't know the format. This is where the common C API and the JavaScript API come in: they understand the format of both the variables on the heap and the stack frames.
A
So they're able to take the stack frames that are JavaScript, interpret them, and then print them out inline. If you actually go fire up mdb_v8 or llnode and you just print out the stack trace on a core dump, you'll be able to see native stack traces and JavaScript stack traces all inline in one, exactly as you would expect. So all that stuff already exists.
[Audience question, off mic.]
A
So it's not in the... that's a good question. It's in line with the goals of this post-mortem working group, though we probably should incorporate it. Microsoft has actually done some really great work with time-travel debugging on their ChakraCore, which is a competing VM; it lets you record the state of the entire running process through time, and it keeps that state in a ring buffer.
B
The line's a bit fuzzy there: there's also a Diagnostics working group, and in that workgroup there's definite work going on in terms of tracing, adding better features to make tracing lighter weight, easier to turn on, and stuff like that. So go look there if you want to look at that in a little bit more detail; it's the Diagnostics workgroup. If you go to nodejs/diagnostics, there are probably some issues talking about exactly that.