From YouTube: Diagnostics WG meeting - Feb 12 2020
A: Welcome everyone. This is the February 12, 2020 Node.js Diagnostics working group meeting. Today we will start by discussing the current issues in a time-boxed manner, then we are going to switch to a deep dive on the memory leak user journey. We are maybe in good shape already with the memory-related documentation, so I believe it will be mainly focused on the user journey: specific tools and the gaps around those tools. So let's jump to the agenda quickly. We don't have many people today.
C: Okay, so I know I've discussed this before several times, but I'm proposing again removing the review restrictions on llnode, because over the past year the activity of the other maintainers and contributors has fallen a lot. We can see that from last year: when I was trying to merge the patches to make llnode work with Node.js 12, it took months to get everything merged.
C: I tried to ping folks several times, but since the project is not a priority for many, it has taken weeks or months to get reviews. Due to the nature of llnode, for it to stay relevant it needs to stay up to date with at least the LTS releases, preferably with Node.js master, and at the current speed that's not possible, even if someone dedicates their time to fixing it every time V8 is upgraded.
B: I mean, if there's nobody else, the challenge I can see is that it's probably not one of the easier things to review, and people are probably just reluctant to LGTM if they haven't actually reviewed it. So I think, if you think it would work, we could just say: let's pick a time window. That way, if people do want to review, they've got the opportunity, but it doesn't actually slow things down by too much, right?
B: What do you think is a reasonable time? Like, is a week going to delay you too much? And if the answer is yes, then we could propose two days, right? What's a reasonable time that you think is okay to wait, that won't impact you too much but gives people a chance? Three?
C: At Netflix we ended up using llnode to get the stack trace from the error object several times, but since the error object usually is not on the stack, especially with async errors, users were doing several steps to be able to find it more easily, and sometimes they weren't finding it. So I wrote a script that basically runs some of these commands automatically, determines where the stack is, and prints it to the user.
A: I think it could be a really win-win situation, because it's a very useful feature, but obviously right now users have to do this scripting setup at the beginning. If it were part of llnode itself, the community would get a new feature and they wouldn't have to do that setup. So it's a win-win situation.
B: Okay, so I think that's what we spent a fair amount of time discussing last time, but it would be good to get that one to some sort of closure as well. The question is: for node report there's currently a single-digit version, and the main question was, when do we bump it, and what does it mean? And then it got into a discussion of whether we should actually have something closer to SemVer, where there are two versions.
B: Well, why is the versioning there? I guess it's just so you can basically see a change: when you have a new major version of Node, that doesn't mean that the report format is different, right? It's more that when you've got a separate file, you want to be able to tell from that file alone.
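That separation is visible in the report itself: in current Node releases (where `process.report` is available without flags) the report header carries its own schema version alongside the Node version that produced it.

```javascript
// The report's schema version lives in its header, separate from the
// Node.js version that wrote it; this is the single number under discussion.
const report = process.report.getReport();

console.log(report.header.reportVersion); // integer report schema version
console.log(report.header.nodejsVersion); // the Node release that wrote it
```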
A: Sounds good to me. Okay, I'm going to start the deep dive. So basically, for the deep dives I copied the structure. We had two or three meetings about the process crash earlier, and for the process crash there was a lot of information missing from the diagnostics documentation, so first of all I copied that documentation structure. But the memory leak documentation is in a much better state, to be honest. For the process crash we had already identified the tools and we were writing a little bit about side effects and symptoms, so I feel.
A: What's really missing is discussing how those tools are actually used: the user journey to find a memory leak with them, and identifying what the gaps are, so that in the future we can make them better. So that's my personal opinion on what we could focus on today, but I'm open to other ways to do this.
Sounds good to me.
A: Yeah, I will link the document again in the Zoom chat.
A: I linked it in the chat. Okay, so I copied over the tools from the current documentation, what we have today, and read through them, and only the heap snapshot section was calling out gaps; there might be others as well. So maybe we could simply start to go through how you would do it, with the heap profiler first, and then the heap snapshot the same way.
A: So, basically: why is this tool useful? Sorry, the question was "which versus which?" It's not versus anything; it's just from a usage perspective. I'm a Node user, I suspect that I have a memory leak in my application because I see the symptoms we documented earlier, and I decide that I will take a look with something, say the heap profiler. I'm just collecting the context of, as a user, why is this tool useful for me, right?
B: I'd start with convincing myself there is a memory leak. So things like the GC traces would be something that I would turn on first, and try to look at the GC behavior to say: yeah, okay, over time I can see that the heap is actually increasing, or the old space size is getting bigger, versus, you know, just going up and coming back down.
B: It'd be like: okay, turn on the GC traces, run load, and make sure that you're convinced there really is a leak. You could even do things like force a smaller heap size, and so you should actually see that you run out of memory sooner, or you see more GC sooner, that kind of stuff. And you can confirm that your memory leak is in the heap versus, say, native memory.
E: One other thing I might add to the symptoms: a lot of times we see kind of random, not random, but seemingly unrelated exceptions, like something about string operations or something. A lot of times these are due to memory leaks, and if you're new to Node or not familiar with it, a lot of times you start blaming other things, but really it was just running out of memory.
E: A lot of times when I first ran into memory leaks, it would be some operation like allocating a file handle or allocating, you know, a string or something, and it would crash on that. That would be the exception trace; it didn't explicitly say I was out of memory, so it took a bit of extra searching on the internet to kind of figure it out and develop this intuition of "these are probably memory issues."
B: I can't think of how you would investigate them differently. The only thing might be that for the slower ones you might want ways to force them to happen more quickly, like making the heap much smaller so you run out of memory sooner, that kind of thing. Hmm, I suppose that could also affect the sampling, like for the sampling profiler, what settings you should use in terms of how often to sample.
E: We've had a couple of instances where Node just stopped taking requests at all. Unfortunately those instances weren't really instrumented very well; they were just kind of random apps that I was helping to debug. I don't really know why, but sometimes, for some users, it just stops responding to requests entirely.
A: Okay, I guess silence means we can move on. So, before this whole conversation, we said that we've built a very strong, I mean maybe not very strong, maybe a strong suspicion based on these symptoms, and we start to use top to kind of confirm that it's the Node process. Should we discuss what you look for in top? I'd just look for the RES column.
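top's RES column reports the process's resident set size; the same figure is available from inside the process, which makes it easy to correlate what top shows with what the JS heap actually accounts for:

```javascript
// Correlate top's RES with in-process numbers: rss is resident set size,
// heapUsed/heapTotal cover only the V8 heap, external tracks native buffers.
const { rss, heapTotal, heapUsed, external } = process.memoryUsage();

const mb = (n) => (n / 1024 / 1024).toFixed(1);
console.log(`RES (rss):         ${mb(rss)} MB`);
console.log(`V8 heap:           ${mb(heapUsed)} / ${mb(heapTotal)} MB`);
console.log(`native (external): ${mb(external)} MB`);
```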
B: I wonder, there is also virtual memory, so in the ulimit section: do you ever see that people run out of memory because their ulimit is set too low? Or is that a different kind of problem? Does that belong somewhere else, or does it belong in an out-of-memory discussion? Because we have "max memory size (kbytes)" and "virtual memory (kbytes)", and I have seen cases where you run out of memory, but it's not because you've got a leak or anything.
B: I guess this is the case where it looks like your memory is increasing, but you want to see, like, it's interesting to see that the GC is actually working harder and harder, versus you're just filling up the memory, right? So I would generally turn it on and see that: yes, it shows you the amount of memory you're using, so you can sort of easily see the direction upwards, and you also see that you're not just going up and down, that you're just sort of going in that one direction.
B: What I'll probably do is, once that lands, it actually can help you with crashes as well, so I can add another section on using it for things like double frees, or using memory you've already freed, that kind of stuff, because that would be related to the sort of random crashes that I think we covered in the other section. Okay.
E: And then it's like this treasure hunt through all these objects to try to figure out where they are. I think the tricky thing with the sampling heap profiler is that it tells you where the memory was allocated, but often not where it's leaked. So it at least helps you to narrow in on the area, but it takes this kind of intuitive jump to actually figure out what the leak was.
E: I'm not sure there's a perfect answer. A common one in Express is: if you forget to call next, you won't finish the middleware chain, so you'll end up with all the request state leaked. So you'll see all the addresses of where the request was created, but not where you actually missed calling next, right?
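The Express bug described above can be sketched in a few lines (the middleware and queue names here are illustrative, not real Express internals): a middleware returns without calling `next()` or ending the response, so every affected request stays referenced and the heap fills with request state.

```javascript
// Hypothetical Express-style leak: forgetting next() keeps requests alive.
const inFlight = []; // stands in for whatever retains unfinished requests

function brokenAuthMiddleware(req, res, next) {
  inFlight.push(req); // retained until the request completes
  if (!req.headers.authorization) {
    return; // BUG: no next() and no res.end(), so the chain never finishes
  }
  next();
}

// Simulated traffic: none of these requests carry an Authorization header,
// so none of them are ever released.
for (let i = 0; i < 1000; i++) {
  brokenAuthMiddleware({ headers: {}, body: 'x'.repeat(1024) }, {}, () => {});
}
console.log(inFlight.length); // prints 1000: every request leaked
```

A heap snapshot of a process in this state shows the allocation sites of the requests (the framework's request constructor), not the middleware that failed to release them, which is exactly the gap being discussed.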
E: In the ones I've hit, you'll see like a bunch of header strings, you'll see the request bodies, but then what you're hoping is that maybe you can tell what kind of request leaked, and you can trace through that chain of middleware. But it's not a silver bullet. I'm not sure any of these tools will really be a silver bullet that will, like, analyze your code and tell you where. No.
E: That's what I was saying: a lot of times the trick I do is, I take a snapshot, then I pump a bunch of traffic through it artificially, and then I let it drain for a few minutes and take another one. And you see, you know, there are all these requests on the heap, but I can see they were allocated in this middleware, so it's probably this path, and I can kind of trace it through that way, right? But yeah, I found that much more useful.
B: Yeah, and thinking about it, I don't know if this is actually that feasible in production, but with the sampling profiler you're going to have to take the hit in production, because you need to be running and sampling while you're taking the heap snapshot. I guess in theory you could drive traffic to a node, then stop driving traffic to that node, and take a snapshot after the fact. Yep, it would take some mechanics, but it would be possible, I think.
C: Maybe there were some optimizations there, or maybe people were using it wrong; like, maybe the C++ API is way more expensive than the inspector protocol API. So I think it's worth revisiting our position that it's not safe for production, and we should probably do that in sync with the V8 team, because they have information on how it works.