From YouTube: 2022-03-11-Node.js Node-API Team meeting
B: I forgot this, right. I guess that's the last thing, yeah.
B: After the meeting I should check this, but yeah, I totally forgot, sorry. Okay, does that make sense? You'll take a look?
A: Next, yeah.
B: Yeah, and I can do this, okay.
D: Yeah, sorry, I didn't really get to this for the last couple of weeks, but there have been quite a few changes so far, and I'm not sure if they would approve a big change like this to some of the files.
A: Yeah, sorry, it almost feels like, for 14, it's just not that likely to be accepted. So maybe we just leave it as is, yep.
D: Yeah, I don't think there's any update on that one. Okay.
C: I did some research around this one, the CRT$XCU one.
A: Okay, yeah, so we can talk about that. I'm just going to delete this, but I think that'll still be okay. So, yeah, do you want to share an update on that one?
C: So apparently, what I found is that this is nothing more than a fancy way to call a dynamic initializer for static variables. In order for us to register a module, we need to execute some piece of code, and that's how we do it. Normally you would expect to have some static variable and an initializer for that variable, and it sounds like, somewhere back then, for modules in Node.js, people chose to simulate what the MSVC compiler normally does.
C: So I have a link to the documentation here, which pretty much says this: the section .CRT$XCU is generated by the MSVC compiler. The problem is that we have it in our code and the Microsoft compiler does the same, so they can conflict with each other, with all these different, unexpected side effects: the dynamic static-variable initialization created by MSVC is actually suppressed by our override. Okay.
C: So ideally we should replace it with something that does it properly, which is exactly what the documentation says: while some developers do it this way, it's not the recommended way to do it, because it may break future compatibility; it's not the standard approach. The standard approach would be something similar to what another developer proposed before, and we can just try to build on it. Imagine you have a simple struct, and we just initialize the static variable explicitly and achieve exactly the same result.
E: The problem is, I think, that the header has to compile with C, right? The generated code has to be plain C, because we support building add-ons with plain C. And if somebody writes it in plain C, then the C language does not have the ability to execute code during library loading; that is a C++-only thing, right.
E: So then the only alternative is to explicitly add a library constructor, which is supported and works properly on Linux, right? Because library constructors get added to a special section, just as they do on Windows, but the linker will sort of collate them from the various modules and put them one after the other in the section, and the C library will execute them one after the other. The order of execution is not guaranteed, but it is guaranteed that they will all execute.
E: So then I don't know what other alternative we can have on Windows that is C-only and will cause these constructors to be executed, other than this approach. But if this conflicts with C++, then I don't see an alternative to it. And I also learned about this problem with using this particular form of library constructors, but we have to do something.
C: Yeah, it's another alternative, but the documentation says XCU is something being used by the compiler; we can use anything between XCA and XCZ, okay.
C: We just chose the one which is conflicting. The documentation says we can use XCT or XCD, so it would be a minimal change, just renaming it, and that's it.
E: No, it wouldn't, that's right. Because obviously they built the add-on and they tested the add-on, right? It didn't have any surprising interaction, and so, for that particular build of that particular add-on, our existing solution works, so you wouldn't need to rebuild it.
B: If I have a binary, I don't need to recompile?
C: No, no. Thankfully; my initial thought was that this section was being used by Node.js, but it's not. It's actually used by the C++ runtime, or the C runtime, if you like: CRT stands for C runtime. When loaded in memory, the C runtime sees all the different sections, sorts them alphabetically, and executes all the different pointers inside these sections. That's what actually happens, and what we can do.
E: I'm surprised that there's a conflict, then. If, for each section, it executes all the different pointers, then why is there a problem at all? Because if there are three pointers in .CRT$XCU, one of them added explicitly by Node.js and the other two added by C++, then why is there a problem? I mean, I'm okay with changing it, don't get me wrong; I'm just curious.
C: We're effectively overriding it. Imagine they have a section which the MSVC compiler generates, and we declare exactly the same section. Yeah.
E: Isn't it supposed to... if two different compilation units declare that a function is to be placed in a certain section, doesn't that mean the linker takes those functions and places them one after the other in that section, and that's it? It shouldn't override; it should be like: oh, you want to be in that section too? Okay, fine, let's create an array of two functions and place them both in that section.
C: But the issue is that when we declare this section, we're not just adding to an existing section; we're effectively saying we're creating the section.
E: Aha, okay, all right, yeah. Then we can just add it to a different one. But then, okay, if this is how the compiler behaves, then what if somebody statically links another library that also uses, I don't know, .CRT$XCT? Then the compiler will once again have this collision, will choose one at random, and their library will behave surprisingly, right? So, I mean, okay.
C: Yeah, I agree. So I propose, well, this will not be the full solution, just adding to whatever. Could you scroll up a little bit? This previous user proposed this; this is the full code. The full code actually says: if we're inside a C++ context, then we don't use these sections at all; we just do normal dynamic static-variable initialization.
E: Oh yeah, okay, yes, okay, yeah! So then we reduce the risk, yeah. Okay, I see what you mean. So if you build with C++, then you use an initializer; otherwise you use this, yeah. So I support this solution, plus moving to a different section, one which also counts as a library initializer on Windows.
E: ...been a problem all these years? Well, no, because all these years there's never been a conflict: they used dollar XC-something-else, and we use dollar XCU, right? So there hasn't been a conflict, but if we move, then we will be stepping on their toes after all these years. That's...
A: ...of it, like, in the C++ case, that we won't get the initialization we expect, or...
E: There is one more situation that I can think of: the code base of the add-on may be a mixture of C and C++, right? So then, although the thing that links against Node is written in C, there may be C++ compilation units in that same project, which are then called from the C entry point.
E: So, you know, let's say people don't want to use node-addon-api because they are super sensitive to performance, right, but otherwise their native library is written in C++. So they will write their add-on in C, yeah, but only the portion of the add-on that speaks to Node.js.
E: Everything else is in C++, and then they have an API that they call, so basically their bindings are a bridge between their C++ library and Node.js, right? And then, if their C++ library has global constructors, we're again in this boat, even though the compilation unit that links to Node.js is itself written in C. So it's still possible with a mixed code base. But there is...
E: There is another thing that speaks toward this solution, which is that for a Node.js add-on we have said that we strongly discourage global static data. Well, let's say global static writable data, because global static constant data doesn't matter, right; you can always read it. But global static mutable data is strongly discouraged for Node.js add-ons because of the presence of workers, etc. So: if you must have global data, then please don't declare it static, please don't initialize it with a library constructor; please keep it allocated and attach it to the instance.
E: So the fact that there is a global initializer that is implemented as a library constructor in C++ is bad for a Node.js add-on for different reasons. I still think that's...
E: Oh yeah, no, I mean, this will protect us to some extent, right, and okay. So there is another thing: if you do have global static constant data, you can still have, and correct me if I'm wrong, a non-trivial initializer for this global static constant data, right? And so that initializer has to run as a library constructor. Am I right there?
E: Well, I mean, if people recompile a C++ library, right, then this macro will be reevaluated and they're going to get the proper, bona fide C++ constructor that plays nice on Linux, Windows, and any other platform, by virtue of the C++ specification for how to implement global static initializers. And if they recompile a C add-on, then there is no change.
E: Then there is no change for them. If they've had a conflict so far, they will continue to have a conflict, right, because it still uses the same section. So, for compiling that particular file with the C compiler that includes this header and calls this macro: if you compile it with a C compiler, it's going to produce bit-for-bit identical code to what it does today. So, I think, no more and no less of a conflict.
E: That, yeah, that's harder to evaluate, because that may break some folks, yes. Just in case they had this exact conversation three years ago and chose to change the section so as to deconflict themselves, because they were running into issues where the C++ code wasn't getting initialized.
E: You know, they changed the section, right, and now they're having this problem. So yeah, those folks would get broken if we happen to choose the same section as them, but I think that's a super-duper corner case.
E: You know, if anybody ever says that, we will say: well, okay, fine, now let's consider changing the section, and then we'll take that on when it comes.
A: I think that makes sense in this context as well; we probably shouldn't be inconsistent with the rest of Node, yeah, right? If other add-ons use .CRT$XCU, we should probably change those together. So doing this as the first step to address this makes sense, and then maybe even open a general Node issue that says this is a potential conflict.
C: And this section name is only used in two places; we effectively duplicated it from the Node code in one place. And there is also an interesting note a little bit above, saying that we shouldn't use the pragma like this, but make the pragma a part of the macro. If you scroll up a little bit, a little bit, to the original bug report... I see, like, yeah, over here, sorry. Could you scroll down a little bit? This one, uh-huh.
C: Okay, you see, it's kind of another proposal for how to address this issue, and you see it's showing the pragma being put in place.
E: I guess you said this was being duplicated for regular Node add-ons, right? Well, I think we probably copied that, yeah. But what I'm saying is: regular Node add-ons are guaranteed to be C++, right? So for regular Node add-ons we should definitely change it so that they play nice with the C++ compiler.
E: So, you know, we will improve regular C++ Node add-ons by turning the node add-on macro into a global static initializer.
C: What I have: I implemented unit tests, and I actually went a little bit further; I created a new public function. I still need to add the documentation for that function. Thankfully, what I discovered, and Gabriel is the one who implemented this change some time ago, is that effectively we had a second pass: we used to run all these different finalizers, but now we don't do it immediately; we actually call setImmediate. So all our finalizers right now run as a part of setImmediate, yeah. So the proposal here is actually to create this finalizer queue...
C: ...where we put all our finalizers on the first pass of GC. Then we can drain this finalizer queue from three different places. First of all, I'm adding a new public function to Node-API for draining the finalizers, so people can call this function explicitly whenever they want, right?
C: Second, we're still running it as a part of a setImmediate, and there is code which ensures only one setImmediate is concerned with draining this queue, so that pretty much matches existing behavior. And the third approach, which is actually also part of the solution, is that we run it at the end of each public function which can touch the garbage collector.
C: I propose that maybe this third approach be made more configurable, so I want to add a new API, like I discussed somewhere else, like a node_api_enable_feature or something like this, yeah. So it should...
C: So for now I don't have this feature-based one, but I want to add that. So the overall approach is almost there; if you like, have a look and comment. The only things I'm missing right now are the documentation and this feature-based stuff, yeah, to turn this behavior on and off.
E: Yeah, so let's see: which APIs are you proposing to append this draining to? I'm thinking things like napi_wrap are probably good ones, but which other ones? Like, napi_add_finalizer is a good one, right?
E: Oh, I see, okay, okay. So you added it to NAPI_PREAMBLE. Yes, that's okay-ish, but it may be too strong, because NAPI_PREAMBLE is for APIs that are able to execute JavaScript, and those are more numerous than what I think we need, because they don't touch the garbage collector, right?
E: It should work for now, and we can always get some performance measurements for how much it impacts performance if you choose to opt into this feature, once we have the runtime feature flag. If you choose to opt into it, then it may slow down your add-on substantially, or it may not.
E: We don't know; we will find out. And then we need to decide... we may need to separate this calling of the drainer from NAPI_PREAMBLE; we may need a separate preamble that is attached to fewer methods, because if you're setting a property on an object, then why are you draining the finalizer queue? One could ask that question, especially if the performance impact is non-trivial. But again, we need data, I think.
E: Yes, these are also heuristics we're going to have to sort of feel our way through, because what about the opposite end? What if the finalizer queue has thousands of entries? Do you want to spend 100 milliseconds draining the finalizer queue? You know what I mean.
E: Okay, but think about the baseline. What do we have now? We have zero drainage, right? The finalizer queue keeps building up until you exit your busy loop that creates a bunch of native objects and then garbage-collects them. So currently the finalizer queue will be enormous whenever you exit that loop. From there we can go to being very diligent about draining it, or we can go to being diligent enough but still lax: you still have a backlog.
E: When you exit the loop, you know, we can choose any of a range of behaviors, and we need to... yeah, I think we need to pick somewhere along that scale that works for most people. Or, because we need to have...
E: We need to have a configurable parameter saying: okay, drain at most 50, or drain at most 10. That's also a runtime feature flag that we might turn on, and that'll give individual add-on authors this capability based on their performance statistics, or characteristics, and usage characteristics.
E: But here we are having to figure out what those defaults are, right? So, you know, starting with the ability to configure those, and then giving that ability to a select few who actually know what those things do, will help us, hopefully, query a wider audience for what the default should be, right? There's no way to defer that.
E: Well, that's just it, though, right? If I understand correctly, the problem is that the GC is busy and it's in a state where you can't even call into JavaScript, and so, basically, the JavaScript engine is in a bad state right now. You know, if you do anything synchronously and you ask anything of the JavaScript engine, it would be in a bad state to do anything, right?
E: Well, presumably... okay, yeah, okay. I think what we need to do is to know from whence we were called, right? Like, for example, let's say a finalizer runs; well, actually, yeah, let's say inside the finalizer you create a new object. Since the finalizer is in a setImmediate, creating a new object and setting properties and all that is perfectly safe, right? Yeah.
E: But let's say we add this draining functionality to napi_wrap. Then napi_wrap is, I think, guaranteed to run in a state where we can drain, specifically because of the setImmediate. But if we call the draining thing, and the draining thing calls a callback which calls napi_wrap, then we can't call the draining thing again.
E: No, because the GC... the JavaScript object is already gone, right? Because the moment we attached the setImmediate is the time the GC informed us that this particular JavaScript object is gone. And when we returned from the body of code that creates the setImmediate, we told the GC: okay, we're done recycling this native memory, thank you for informing us, goodbye. But we haven't really done anything, because all we did was attach a setImmediate.
E: So we haven't really finalized that memory, and so now we need to finalize that memory, and the GC is totally unaware of our need, because on the JavaScript side that object is gone. So we cannot rely on the GC to destroy an object that it has already destroyed; we are on our own. So this is, this is...
E: Yes, in a way, in a way this is similar to the worker-is-shutting-down scenario, where the worker is shutting down and there are still, say, 15 JavaScript objects left to be GC'd.
E: They all have native data attached, and we know they will never be GC'd, because, well, the worker is shutting down. And so we GC them ourselves during the destruction of the environment, right, and that has caused myriad problems, and we've kind of solved them, because there's more value in having this cleanup than there is in not having it, and so on, right. This is the same.
E: This is kind of the same situation, but during normal operations rather than during environment shutdown, right? And now we need to figure out what to do during normal operations, and the solution, of course, is to eventually call those finalizers from a safe state, which we are in 99 percent of the time. But now, how do we distribute the burden? You know?
C: And the reason I put it inside of a try/catch is just because, like I said, we're calling a JavaScript function, and with high probability this is a function which may cause GC, right? Yeah. Previously we...
C: ...used to collect it as a part of the second pass of GC. But I think with this pull request I pointed to, which you implemented a few years ago...
E: Okay, I see, I see, I understand: why you put it in a try/catch is because there's a high probability that the JavaScript engine did some cleanup, and therefore some things appeared in this queue, right? Yes. And so, okay, they appeared; we don't know what appeared in the queue, because it's up to the GC to decide what to put in the queue, right, but chances are there are some things, and so let's take this opportunity to clean that up.
E: Yeah, and I guess, if we're diligent enough about it, then, you know, there will not be a buildup of thousands of things. If there is ever a buildup of thousands of things, then we missed a spot, basically. And okay, let's add another call to the drainer in that spot, and now there will never be a backup of thousands of things.
E: ...run those today: we would run them as soon as JavaScript is finished executing and we return control to the event loop, and the event loop gets to the point where it calls the setImmediate callbacks. Then, during the setImmediate callbacks, we would call those finalizers synchronously, from a safe-to-execute-JavaScript context.
E: We said... no, no, we do, we do, okay. Because that's the point; that's the portion that is up to us. The portion that is up to the GC is already done, right? Because the GC said, hey, clean this up, and we said, okay, yes sir, we'll clean this up, but all we really did was add it to the list of setImmediates. We haven't really cleaned it up. So as far as the GC is concerned, its work is done, right?
E: Well, we are currently keeping track of those by virtue of hooking them to the event loop, right? But we don't want that, because the event loop may or may not run; we've just established that, right? If we have a tight JavaScript loop, the event loop never gets a chance to run.
E: Oh, actually, now that I think about it: maybe, okay, we need to check. Is there a libuv API for executing the setImmediate callbacks synchronously? Because then we don't need to maintain our own queue; we can just tell libuv, hey, run the... oh no, no, never mind! No, no, no, I take back my words, because setImmediates could be any of a number of things, and we don't want to mess up, yeah, Node's contracts.
C: It would typically cause some kind of unhandled exception to be raised, because it's inside of a setImmediate. But here, what we're doing, we're pretty much saying that we can drain this queue as a part of any method which touches the garbage collector. So even if the function itself succeeds, right, now a Node-API function succeeds, it may actually fail, because I'm propagating this finalizer exception out of it, right? Like, so, I think...
E: That seems like quite a change in behavior, yeah. Unless, unless we can store the exception and pass it to the same stack that setImmediate would pass it to.
E: Well, I mean, what happens in setImmediate... so basically, if we can reproduce the behavior, if we can reproduce the mechanism that setImmediate uses to handle exceptions, and make it indistinguishable from the exception having happened during setImmediate, then we have a solution that is indistinguishable from what we have now, only the cleanup performance is better. But that needs some investigation, like: how does it happen now? How do you get the exception into the same pipeline that it would find itself in?
C: Yeah, yeah, my understanding is that the way it works today, we have this kind of exception propagating up the chain, and if we have a try/catch somewhere, it will catch it. If nobody catches it, it becomes an unhandled exception, right? setImmediate is also handled in some sense. Oh...
E: Absolutely, yeah, exactly, it's completely unrelated, yeah. So I think, if we can distinguish the exception that happened because of the napi call from the exception that happened because of one of the finalizers executing, which I think we can, then we can feed that exception into the same pipeline that setImmediate feeds it into, which would ultimately cause an unhandled exception on the Node.js side. Which is good, because that's the behavior we have today; that's the behavior.
C: Okay, so, thank you; we should bypass any kind of try/catch while we drain.
E: Well, maybe you use a separate try/catch, right? So basically: try the drain, catch the exception. That exception is guaranteed to come from one of the finalizers, and that exception is guaranteed to be unhandled in the existing behavior, right? And, I mean, I think you can nest try/catches, right? And so you can put a try/catch inside the try, and then that will catch only those exceptions which are supposed to be setImmediate exceptions.
C: Yeah, there's only one part I don't understand: how to bypass all the different... imagine you have one...
C: ...which internally maybe calls another one, and another one. So, thank you; we need somehow to ignore all these different chains of try/catches and go outside of them. Yes...
E: Yes, directly to the unhandled exception, yeah. My feeling, my gut feeling, is that there is a Node API for that; like a C++ one, not a V8 API, a Node one...
E: ...for throwing an unhandled exception, and then we're just going to have to call that API. That is my gut feeling: there has to be a way to say, you know, throw this unhandled exception, go directly to the unhandled-exception handler if any, otherwise kill the process. There has to be a way in Node.js to do that; I mean, setImmediate does it, right? So if you look at the implementation of setImmediate, maybe you can use that.
E: I don't know, but maybe, right? So that might be one place to start: just look at how setImmediate is implemented, and see if you can use any of the API that produces the unhandled exception. But this is really cool, by the way; this is great, because this has been a pain in our backside for a long time, this buildup of, you know, finalizers. It's been there forever.
A: Thank you, yeah. I like the idea of opting in, too; it would be a good transition, at least to start.
E: Yeah, I was just going to say: this idea of runtime opt-in or opt-out of things is an important generic mechanism to have, so getting this API right will have implications for future flexibility in our APIs, right? So this is also a great bit of value to have.
A: Okay. Otherwise, thanks for everybody's time, and thanks to anybody who's watching, and we will see everybody next week... actually, no...
A: ...because I can't make it, but hopefully the rest of you will; I'll see you two weeks after. Thanks, okay.