From YouTube: Diagnostics WG meeting June 30th 2021
C
Yeah. Initially this issue was created because of the Node CPU profiling roadmap, and from past discoveries we figured out that when we have a lot of functions on the heap, profile start and profile stop take a long time, mainly because of CPU utilization when we start profiling. It's CPU profiling, and this is harmful to production environments. However, in the last three days I have actually tried some benchmarks. I have read the entire conversation, the entire issue, and I tried to measure how long this takes, and actually it is not taking a long time anymore in Node versions 14.13 and above.
D
Performance improved at Node 14, and there's another bit of a jump at 16. There's still more that can be done with it, but yeah.
C
Yeah, so it looks like it was fixed in V8, and I'm not sure if there are more things to do in this issue. So I just flagged it to talk here and discuss whether we should keep an eye on it, or we can close it, or even check whether the benchmark test was done correctly in the first place. Sorry.
C
Yeah, exactly. I posted the results here: for instance, on v10 the start takes around 600 milliseconds, and on v14 it takes only 58 milliseconds, so it's a huge improvement. But we had some cases where, with around fifty thousand functions on the heap, profile start was taking around five seconds to start. It didn't happen for me, though.
C
Yeah, in the last benchmark from Gerhard, when we have around one million functions, the start was taking more than one second, but in my test it is not anymore. So.
C
Yeah, the enable call is a separate point entirely. However, I believe the Cloud Profiler Node.js agent from Google has a similar issue, but I'm not sure if it is around profile start; that issue is from last year as well. When all the discussions come up, I will bring it to the discussion. I think that I should.
D
Also, I'm looking into the possibility of making a different profiler interface that just gives you pprof data, which hopefully we can actually expose more directly, so you don't have to go through the inspector at all, because there's actually a whole bunch of extra cost to going through the inspector.
D
But it's protobuf-serialized data. Basically, the idea was just to have an API that just gives you a buffer. Currently, the inspector profiler serializes all of the C++ objects into JavaScript objects, which is a whole tree of stuff.
D
There's a lot of expense in doing that, and it does this in the inspector thread and then sends it over WebSockets back to the main thread, which requires serializing it to JSON, which is more expensive, and then deserializing it again when you receive it: all these extra unnecessary steps. So I want to just go straight to making it keep the pprof data and spit that out, and you can do what you want with it from there.
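To make that cost concrete, here is a rough, self-contained illustration (not the actual inspector internals): build a synthetic profile-shaped tree and time one JSON stringify/parse round trip, which is roughly what the WebSocket transport implies today.

```javascript
// Synthetic profile-shaped object, not a real CPU profile; the point is
// only to show the JSON round trip has a nontrivial cost at these sizes.
function makeProfile(n) {
  const nodes = [];
  for (let i = 0; i < n; i++) {
    nodes.push({
      id: i,
      callFrame: { functionName: `fn${i}`, url: 'app.js', lineNumber: i },
      hitCount: i % 7,
      children: [],
    });
  }
  return { nodes, startTime: 0, endTime: 1e6, samples: [], timeDeltas: [] };
}

const profile = makeProfile(50_000); // on the order of the heaps discussed

let t = process.hrtime.bigint();
const json = JSON.stringify(profile);          // inspector thread side
const serializeMs = Number(process.hrtime.bigint() - t) / 1e6;

t = process.hrtime.bigint();
const roundTripped = JSON.parse(json);         // main thread side
const parseMs = Number(process.hrtime.bigint() - t) / 1e6;

console.log({ serializeMs, parseMs, bytes: json.length });
```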
C
I thought that could be related to this one that I will send in the chat, but it was abandoned. It could be done in another one, though.
D
Yeah, so put together a blog post from that chat. Yep, yeah, there's some content there to put in a blog post. You might want to do some editing or something first.
D
Like, tracking the life cycle of requests and handles: the request part is maybe a little ambiguous with AsyncLocalStorage, but for tracking handles and sockets you kind of need access to those, which you get from using async_hooks directly, right.
E
Okay, it might be useful to convert that to a bulleted list, something like: other known use cases which are not served by AsyncLocalStorage include long stack traces, measuring the time... That would have flowed more naturally, for me at least, if those were three bullet points under that one thought.
D
Like, not necessarily something you're standing up, but just seeing that the numbers are showing that this could maybe be better.
D
It's like you might have a high-traffic server that's getting tens of thousands of requests per second, and it handles it okay, but you wish you could get a bit better request latency or something like that, and looking at the blocking time tells you: oh yeah, I'm spending a fair bit of time in this blocking code; maybe I can put that in a worker thread or something. Okay.
D
It's more about what is not in this list.
E
Right, okay. So maybe, instead of saying there are other known use cases, maybe we should say something like: we already have it on our list to look for other ways to allow you to do this, this, and this without async_hooks. And then it's kind of like: are there any other things that you use them for? That's really the fundamental question: are you using async_hooks for anything else?
E
So, you know: don't get rid of them without a replacement, right. Yeah, so it might be worth just ending on: if you have use cases which are not covered by AsyncLocalStorage or these other ones we have in the list, please let us know, because we want to make sure that we come up with alternatives first, yeah.
E
I don't know how to write that off the top of my head, but basically we kind of want to say: let us know if you're doing something, so that if we're going to get rid of async_hooks, we've got an alternative for it, without necessarily wanting to indicate that we are definitely going to get rid of it, or whatever. But hint at it, right.
A
Yeah, I guess it serves two purposes. One is to show the list of known use cases, or the use cases that we have envisioned; some of the users are not aware of them, and they just get to know: okay, these are the cases where we can use it, so it becomes useful for them. And on the other hand, we could get to know the other ways in which users are using it.
E
Yeah, maybe that list is, like, our plan. We could say something like: our plan is to try to develop other APIs which will cover the async_hooks use cases and which have a better chance of becoming stable, which follows from your previous paragraph. These are the ones that we have on our list. And then what we're looking for is your feedback on things which we've missed, in terms of that.
E
Yeah, that's good. And I guess, you know, I don't think we need to wait till the next meeting. I think you've got something pretty close; it sounds like with a few tweaks it'd be ready to go. I don't know if you want to just edit it and then send it on to get published, or whether you want people to take another look.
D
And related to this, what I'm wondering about is the idea of, following from this, doc-deprecating async_hooks, with the intention that it may just sit that way for a long time, maybe forever, but just using it to send a message to users that we don't like this thing: at the least, come complain to us loudly about it.
E
I think we'd want to check with, like, Matteo, for example. I think we'd want to connect with him and see what he thinks of that, because I know he's been a strong "hey, we use this, we can't get rid of them" voice, yeah. So we want to make sure that a few people like him and James are on side in terms of: hey, here's the plan; the doc deprecation isn't a, you know... The plan is we'd like to create other APIs and eventually deprecate it. Are people comfortable with that step?
E
And I guess, does the doc deprecation actually trigger... I'm forgetting whether you can run a command-line option that will tell you about doc deprecations or anything. What I'm exploring is: is there any benefit to actually doc-deprecating it, versus basically saying in the docs that this is experimental and we don't necessarily expect it to ever exit experimental?
E
You know, that might be the way to go: outline the plan. So if we wrote that down, we could pass it by, again, I think, still getting the input from Matteo first, to say: would you be okay with adding this to the docs? And if we sort of test it out with a few people who've shown the most concern over async_hooks, then we could go to the TSC and say: hey, this is it.
D
I don't know how helpful anyone will be on this, but I just pasted it in the chat. I have a PR that's still a little bit in progress. So, I did the context promise hook stuff, and that changed it so that when you're using async_hooks without a destroy hook, it'll use the faster context hook, but if you have a destroy hook, it still falls back to the old way.
D
So this is removing that, and the destroy hook would then just use the same registered destroy-hook function that AsyncResource uses, so all of the promise handling would be on the pure JavaScript side. There are two major concerns right now. One of them is a test for behavior inside of the vm.
D
It was deleting the promise domain at one point. It seems like that code actually disappeared at some point, but somehow that continued to not be there in the vm. I think it was just a matter of it not being set properly rather than it being removed.
D
I haven't quite figured out how it is not in master yet reappearing here, but I'm working on trying to figure out how that makes sense. And the other point was that trace events don't currently trigger here, so I'm wondering about exposing a JavaScript API for that, and what that should look like.
D
Yeah, so in C++, in the faster promise hook, the full promise-hook function has a trace-event scope there, and it triggers the before and after events; and yes, the init gets triggered from inside the AsyncWrap somewhere.
D
With this change, promises will no longer be wrapped with an AsyncWrap.
D
We can reproduce basically that behavior if we expose the logic for triggering those into JavaScript somehow. It's a bit unclear how we want to do that and how much that matters, yeah. I believe when I made the original context promise hook, there were some slight concerns from James about that not being exposed anymore, but he seemed fine with it at the time, because with the destroy hook it was still there, so you could just add the destroy hook to keep it.
D
Yeah, so this code's performance is about equivalent to doing it as a native hook, maybe slightly faster. It's more just a big simplification of the code.
D
It just has one path to go through for everything, whereas currently it has two different sets of promise-hook logic. I actually discovered that our code coverage was broken in certain spots because it went one path and not the other, so just having one set of logic would be a lot better, I think.
D
Yep, yeah. I believe Node Clinic uses the trace events from promises for something, right? I don't really know the details, though.
D
Yeah, yeah. I definitely see the value in having that, and I think we probably should try to keep that behavior. It's just unclear how to do that right now, like where the efforts to have a JavaScript API got, if anywhere.
D
Yeah, my thinking is: currently for promise hooks I've split it up into two separate functions for init, before, after, and resolve, and then I've also separated...
D
I've also separated it so that if you have a destroy hook, there's a different init hook that registers the destroy hook. So I'd just be adding a bit more complexity to that: there'd just be another function, like, if you have trace events turned on, then use this extra thing that has some extra cost of crossing the C++ boundary.
E
Like, basically have native code which will emit those trace events. This sounds like it'd be horribly inefficient, though, as you'd actually call a method to say, you know, the before and after. Is it two trace events, or more than two? More than two.
D
Yeah, but there is a fast-function API for V8 that...
D
We can probably use that, because I don't know that we have to pass much through necessarily, or at least we don't necessarily have to pass it directly.
D
Yeah, async_hooks were already doing a bunch of weird passing of stuff around through shared arrays and things like that, so we could probably do the same for whatever we'd pass into a function to trigger the trace-event stuff.
E
Certainly, yeah, it sounds like getting some feedback from James would be good, because maybe he says, oh no, it really doesn't matter; but otherwise it does sound like, as you were saying, something we probably want to preserve. And if you have time to look at whether the fast functions might make it fast enough, that sounds good, yeah. And if that turns into a generic solution, that's even better, right.
D
Yeah, that was the other point I was unsure about: whether we should make something that is generically useful, or whether we should just have some internal thing to trigger the AsyncWrap stuff specifically for this.
D
I think that's probably it. And I have a backport of the context promise hook to v14, which hopefully we can get out there at some point, whenever there's another v14 release.