From YouTube: OpenJS Foundation AMA: Node.js TSC
Description
The OpenJS Foundation is a member-supported non-profit organization that provides a neutral home for some of the most important projects in the JavaScript ecosystem.
Learn more and join us at https://openjsf.org
A
Okay, it looks like we are live. Thank you so much, everybody, for joining us today. You are watching the OpenJS Foundation Ask Me Anything. This month we are with the Node.js Technical Steering Committee, and we will be joined by reps from the technical steering committee and Michael Dawson, who will be moderating the panel. So I am going to hand it over to Michael in just a second, but before I do, you're probably wondering how you might ask some questions.
A
B
Thank you, Rachel. We're just going to start with a few introductions: I'm Michael Dawson, IBM's community lead for Node.js, and that means I get to spend quite a bit of time working with all the great people in the community, including the Technical Steering Committee, the Community Committee and a bunch of the working groups and so forth. One of the things I get to do as part of that is that I'm currently the TSC chair. I think at this point, Colin, do you want to give yourself a quick introduction?
E
B
B
F
G
B
H
Hey, I'm Gabriel. I work at Intel and I've been working with Node since, oh boy, 2015-ish or so, '14-'15. I mostly work on native add-ons; I helped bring N-API to light, which I'm really happy to see is working really well, and most recently I've also been working on optimization of Node.js with the runtime large-pages flag, so people can just add a flag and hopefully their stuff will run faster.
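For reference, enabling that is a single command-line switch; this is a minimal sketch assuming a Node.js build that ships the large-pages support, and server.js is just a placeholder for your own entry point:

    node --use-largepages=on server.js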
B
Okay, thanks. Just a reminder for the viewers: if you have any questions, you can tag us on Twitter through the OpenJS Foundation handle, or you can use the comment function on YouTube as well. So we'll start out with a few questions that we had in advance, and the first one is that, you know, Node.js has been really proactive about adding new people to its ranks of maintainers and TSC members.
B
H
So what I've noticed in Node.js is that, you know, as we add new folks, and as we notice their contributions and offer them a collaborator role, they bring aboard not just the project as it is, but they bring aboard new perspectives, and they inform us of the direction it should be going, ultimately. So first it starts out small, and eventually people become more and more involved, and there's this immediate feedback that we get, because we begin to understand the direction in which they are going and how that jibes with where Node.js is going. And so we have an opportunity here to not only benefit from their contributions but to also stay abreast of where the community is going. That's very important. So, in terms of advice for potential collaborators...
H
B
D
It was something we started rolling out at the start of the year, after the last collaborator summit in December, because building a release and auditing the commits is kind of an art that you need to learn by kind of shadowing and mentoring. What we've done is we've set up an hour-long session every two weeks where anyone who's interested in getting involved in the release working group can join in and kind of just watch us go through the process of preparing a release, and also follow along as we audit all of the commits ready for the release. We've found quite a few participants regularly attending that one, and that's in addition to the release working group calls, which are open for everyone. So that's the approach we're trying out for releases, and we'll see how it goes and keep it going.
B
Thanks. Yes, my thoughts on, you know, some advice to other maintainers: I think the Node project is very much optimized for what I might call the good case. You know, we expect people to act in a responsible manner and, given that trust, they generally do, as opposed to optimizing to protect against the absolute worst case. We've set things up so that we try to be very receptive; we try to, you know, give people access and make them collaborators earlier rather than later, and I think we've found that that has actually worked pretty well. It's sort of a good approach versus, you know, being in a defensive mode: we think these new people are generally going to want to help, and by doing that, it gives them, you know, a better opportunity to do that.
F
It's always, you know, good to find a sweet spot where you are productive and it matches the requirements of the project as well, and then try to engage in a consistent as well as visible manner with the other collaborators and contributors, to improve the overall efficiency and, you know, meet the needs of the project. So that's a proven path towards becoming collaborators and eventually TSC members.
G
So I just wanted to encourage folks to kind of think broadly in general. You know, the best way to get involved is to fix things that you're aware are broken, and usually kind of getting started with, like, "hey, this is spelled wrong on the website" is even a really great place to get involved and help out. Every little bit helps.
E
Some of joining Node core can be daunting; there are certain subsystems, and there are especially some subsystems, that require a lot of background knowledge, a lot of understanding of the internals, to be able to be a productive contributor. My only suggestion is to keep being consistent and, you know, ask questions, go improve the docs, or even just, you know, fix small bugs and try to land those small fixes. It might take a month to land a change that might sound tiny, but in fact this is way harder than it looks.

I would note that there are several areas that need a little bit more help, some of which are the HTTP team, the HTTP/2 team, and streams. These are critical parts of Node.js that do have some maintainers, but, you know, they always need new people to help, also because some of the issues are actually hard: there's no good fix for some of them, and you need to drill in and weigh the trade-offs between a lot of various options. So...
B
E
B
I'll just close that out with, you know... I think picking areas is a really good suggestion, and to call out a few others: there's the package maintenance team, there's the N-API team, the modules team. So there are some good areas, and there's a list of strategic initiatives that are also good candidates on the TSC readme.
B
C
So I think the short answer is: if you're a company, you should always be running the latest LTS version, and, you know, usually the best time to upgrade is before whatever version you're on reaches end of life. Different companies will have different constraints on how they can do that, but generally you should be running a long-term support release, and you want to upgrade to the latest one, because there's often an overlap where there will be two LTS releases at the same time.

If it's very important to you that you have the latest and greatest features, and it's okay if you encounter the occasional regression from time to time, then you can run the current release branch; just be aware that things do break there. If you just look back through all the release blog posts, you'll see a release and then a quick follow-up release another day or two later to fix a regression.
E
And so, from my point of view, one of the things that's very important: if you are authoring a module and publishing it on npm, I would highly recommend that you treat dropping support for a Node.js release line as a semver-major change. We do so in Node.js for our dependencies, so if there is a breaking change in one of our libraries, for example an ABI change or something like that.
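As a concrete illustration of that convention, here is a minimal sketch of a package.json for a hypothetical module (the name, version and range are placeholders) that advertises the Node.js release lines it supports through the engines field; under the advice above, narrowing that range when an old release line is dropped would ship as a semver-major bump of the module itself:

    {
      "name": "my-module",
      "version": "2.0.0",
      "engines": {
        "node": ">=10"
      }
    }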
F
B
Yeah, I think... we definitely want people to try out current releases and give us feedback, so that when, for example, the even-numbered currents become LTS, we've already got some testing. But, you know, running on the LTS lines is what makes sense. I guess the other thing, to Colin's point: if you want to test the latest, you can use the current release, just be aware that, you know, those only have a six-month lifecycle.
B
G
I think one thing that I would add there as well is a pattern that I have seen people do that, you know, is not great, which is launching a product on an LTS release and then never even updating that LTS release. So even within, like, the same release line, if you're on v10 or v12, I think it's really important to be kind of, you know, monitoring the releases of Node and, you know, following that and making sure that you are rolling out new updates to your services as we roll out releases. For LTS we have about a monthly cadence, so you could plan for about, like, a release about once a month and update your stack. In general, you know, we try to make sure that what we land is stable; if you wanted to play it extra safe, you could always delay by one release.

That's how I update my macOS operating system personally, so no hard feelings if you want to be off by one; that's a great way to make sure that, if there are any regressions found, you're not going to be hit by them. Generally, if there are bugs found in LTS, we tend to, if they are, like, experience-breaking bugs, get those fixed within a day or two. We're really, really quick about getting a patch release out when we break things on release lines.

Keeping up to date will kind of ensure that you're slowly migrating, so that if we ever do have to do a larger security release, which could end up introducing several major changes in a semver-minor release (if we have to break APIs in order to make them more secure, which we have done before), it would be the least amount of work for your team to be able to focus solely on that and kind of slowly integrate things.

I won't get too much into, like, testing and deployment infrastructure, but there are really great patterns you can use, like blue-green deploys; you can do all sorts of staged rollouts, and tools like Kubernetes or the different clouds have all sorts of awesome tools so that you can kind of slowly roll things out and test them. The way that we find bugs is by people rolling things out in production.
G
B
Yeah, it's a good point, because, like, in our security releases we work to make it such that the security release only includes the security fixes, so that if you're at the latest point version before that, you know the only thing you're getting is the security fixes, and that minimizes your overall risk in quickly moving to it. But that doesn't really help if you're, like, you know, a couple of semver-minor or patch releases behind, because you're going to have to move up across all of those. So that's a very good point there.
G
Yeah, and I'll add to that: we don't patch all the previous semver-minors; our LTS releases move forward. We don't maintain multiple minors, so if there is a security release and you're not kind of sitting at the tip of that release tree, you're going to be forced to update, for lack of a better way to put it. One small thing that I will add is that, in general, our semver-patches tend to be much more kind of, like, stable and reliable, in the sense that the things that are landing are usually, like, fixes.
C
So it's more of a question to everyone else on the panel: does anyone have a take on FIPS and how that's impacting users? I know I've seen customers who have been stuck on Node 8, which has been end-of-life for months now, but they can't upgrade because they're required to support FIPS and there's no FIPS story on the newer release lines. One approach I've seen customers take is to set up support contracts with the OpenSSL project. I was just curious for anyone else.
B
So, you know, I can answer on that: we're certainly watching that as a project. Sam Roberts, who works with us, is tracking what's going on with OpenSSL 3. I think, as you probably know, the end of this year is kind of, like, an optimistic best case of when something might come from the OpenSSL project, so it's not a place we'd want to be in terms of the community release.

Once we can move up to OpenSSL 3, we'll be looking to pull that in. There are only a few alternatives; I think some of the distributions offer their own FIPS-certified version. I know Red Hat, one of our associated companies, ships their own crypto module, and they tell us that the crypto that ships on Red Hat gives you FIPS. So there's a way to get it, but you're going to have to be looking at the particular distros in some cases to be able to do that today.
B
F
Basically, when you look at CPU cores: throughput and performance being a function of CPU cores is the basic premise in most programming languages, but when it comes to Node.js, being JavaScript, which follows an asynchronous and event-driven architecture, it does not necessarily align with that premise; instead it actually leverages the high level of concurrency inherent in the language and the platform to drive the throughput or the performance characteristics. So the simple answer is, in most of the workloads and production systems which I have looked at:

If you see around 100 percent CPU consumption from any other language, it might be scary, whereas if it is a Node.js deployment, it may be business as usual; it could be as simple as the entire CPU allotted to the process being exploited because of the way the asynchronous, event-driven architecture is leveraged by the program. However, when the workload increases beyond proportion, it's possible that the single-CPU utilization model is still not enough, and you would look at scalability in both the vertical and horizontal dimensions.
F
So, a few options which are built into core: one is child_process, which is a raw mechanism where you replicate your code into different child processes and deal with the communication, synchronization and data sharing between sibling or child processes yourself. I would say child_process is the primitive mechanism for exploiting multiple cores, and then comes cluster, which is a sophisticated abstraction on top of child_process.
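To make that concrete, here is a minimal cluster sketch (not from the talk itself; the port and handler are placeholders): the primary process forks one worker per CPU core, and the workers share the listening socket while cluster distributes the connections.

    const cluster = require('cluster');
    const http = require('http');
    const os = require('os');

    if (cluster.isMaster) {
      // Fork one worker per CPU core; cluster distributes incoming connections.
      for (let i = 0; i < os.cpus().length; i++) {
        cluster.fork();
      }
    } else {
      // Each worker runs its own event loop but shares the same server port.
      http.createServer((req, res) => {
        res.end(`handled by worker ${process.pid}\n`);
      }).listen(3000);
    }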
F
Whereas with cluster, the communication, synchronization and load balancing between the cluster members are taken care of by the API itself; but how the transaction has to be laid out in terms of the workflow is entirely in the hands of the application. For example, how you manage the session is not in the purview of cluster. And then you have worker threads, which is a new addition to this family, which looks at running a JavaScript workload in the same process but in a different thread.
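A minimal worker_threads sketch of that model (the file names and the Fibonacci computation are just illustrative): the CPU-heavy work runs on another thread inside the same process, so the main thread's event loop stays responsive.

    // main.js
    const { Worker } = require('worker_threads');

    const worker = new Worker('./fib-worker.js', { workerData: 40 });
    worker.on('message', (result) => console.log('fib(40) =', result));
    worker.on('error', (err) => console.error(err));

    // fib-worker.js
    const { parentPort, workerData } = require('worker_threads');

    // Deliberately slow, CPU-bound work that would otherwise block the event loop.
    function fib(n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }
    parentPort.postMessage(fib(workerData));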
F
It does that by sharing the process address space but not sharing the isolate, the unit of execution in the JavaScript engine. Now, none of these things are really competitors to each other; you basically look at different use cases and load scenarios to see which one suits which combination. From that perspective, coroutines and the other semantics of Golang, such as channels, are, I would say, another abstraction on top of the basic threading models, so it's possible that somebody may implement that as well.

But eventually the question is how you deal with the synchronization, the sharing of data and the communication between different threads: what is the sophistication you are able to bring to the table, and how are the users able to cope with that? The answer lies in that.
E
I don't think that any new API we add will change that. In fact, it reduces cost and footprint on the cloud to have multiple small servers versus massive servers with 32 cores or something, so from a usability and a scalability perspective I think it's way better anyway.

Related to what was just said, there is something you can use right now: there are a few modules that use worker threads as a queue system. So if you need to run some long-running, CPU-heavy work, you can actually offload it, so you can keep your main thread lightweight and avoid blocking the event loop. And it works very well; it's very stable and very performant. So I think, with a combination of all of those, it's actually very, very good.
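A small sketch of that offload pattern, reusing the worker file from the earlier example (the helper below is hypothetical, not a published module): wrapping the worker in a promise lets callers await the result while the event loop stays free.

    const { Worker } = require('worker_threads');

    // Hypothetical helper: run a worker script once and resolve with its first message.
    function runInWorker(file, data) {
      return new Promise((resolve, reject) => {
        const worker = new Worker(file, { workerData: data });
        worker.once('message', resolve);
        worker.once('error', reject);
      });
    }

    // Usage: fib(40) runs on another thread; the main thread keeps serving events.
    runInWorker('./fib-worker.js', 40).then((n) => console.log(n));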
E
Personally, I do not recommend using the cluster module, for example; I think I prefer to rely on cloud providers or Kubernetes and so on to scale my Node processes up. So I think this is an infrastructure concern, not an application concern, at least from what I've seen.
G
Yeah, I can even add to that as well, Matteo. I think there are a lot of interesting questions about, like, at what layer some of these problems should be solved, and I think there is a lot of strong evidence (and it doesn't mean that this is the case for every deployment or implementation) that scale and load balancing is not an application-layer concern, and there's even some really cool stuff...
F
G
...a lot of logic and also potential for error. So that's one of the things Istio is really great at: managing policy across many, many services. And so, if you have, you know, like, one monolithic application, then yeah, maybe it makes sense for all of these things to live in the application. But if you're thinking a lot more around the microservice approach to things, you really can start teasing a lot of things out of your Node application, which I think allows for better auditability, consistency and maintenance for your whole suite of applications, not just a single application.
C
F
C
...the bugs of whatever, you know, is in the cluster module. I've also worked at companies that tried to use the cluster module and it didn't work very well, and I also used to be one of the maintainers of the cluster module, so I can say you should not use it. So that was speaking to the first part of the question.

The second part of the question is: do I ever see it going in, like, the channels direction? I think if it was going to be syntax and part of the language, that would be a question for TC39 and not this group, but I would imagine you could implement something similar to goroutines now using the worker_threads module. So if that was something you were interested in, I would encourage you to experiment.
B
Yeah, I guess just adding to the discussion of, you know, whether to scale using the infrastructure: we had at least one fairly large internal Node deployment where they'd started out, you know, doing their own load balancing, moved to Kubernetes, and were wondering if they should, you know, continue to have that within the containers, because, you know, for other platforms, other languages and runtimes...
B
H
So, if you can solve it with a single process and offload the image processing to a worker thread, and that gets you the highest throughput, then so be it. If it turns out that the memory consumption is such that it's mostly the Node process itself, then maybe you want to switch to using a single process, because then the code can be shared. So it depends.

It depends a lot on your use case, and I think image processing is one of those use cases, so I cannot suggest the best architecture off the top of my head for image processing. But basically we now have an additional tool, worker threads, beyond just cluster and child processes, and you can have the orchestration perform the scaling. So if it works, then use it; and of course it also adds a little bit to the experimentation, so we need to be agile about that.

It's definitely a sort of emerging possibility, because JavaScript has been single-threaded for a very long time, but that's not true for many programming languages, right? A lot of them support threading right off the bat, and JavaScript hasn't. So, if you're coming in from a different programming environment, then you may now be breathing a sigh of relief, though of course you always have to check, and the performance of your solution always has to keep you grounded. But at least now we're sort of joining the ranks of those languages which support threading, and we'll see what the implications are as these things develop and as more and more applications that were previously threaded are now being attempted in JavaScript. So that's it on my end.
E
Well, now, very recently, you can also use Node for doing things like taking screenshots of web pages. We have done this with, I think it was Puppeteer or something like that, even running headless Chrome on Lambda, stuff that was completely unfeasible before, and it worked pretty well. From my point of view, I was just reporting that we have used this on so many occasions and I never got any issue whatsoever.
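For anyone curious, the screenshot case described here looks roughly like the standard Puppeteer example below; the URL and output path are placeholders, and this assumes the puppeteer package is installed.

    const puppeteer = require('puppeteer');

    (async () => {
      // Launches the headless Chromium that ships with Puppeteer.
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      await page.goto('https://example.com');
      await page.screenshot({ path: 'example.png' });
      await browser.close();
    })();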
B
H
Yes, okay. Just to elaborate a little bit on that: the thread-safe function is basically used when you have a module that is multi-threaded already and you wish for it to interact with Node.js. So you can basically get to the point where you receive a function from Node.js which you can call, and since you cannot normally call such a function from any given thread, but only from the Node.js main thread, this particular construct allows you to make an asynchronous call from any thread. So you can use your existing threading implementation on the native side and just pass in, basically, what is a JavaScript function that you can call. So, you know, if you have a multi-threaded image-processing application already written in a native language, then you can now integrate it far more easily with Node.js.
C
Okay, sounds good. Colin, I think you wanted to add something? Just something very brief: another alternative, since we are using JavaScript, instead of using a native add-on, would be something like WebAssembly. You could compile to WebAssembly ahead of time, and image- and signal-processing applications tend to do a lot of, like, vector- and array-based operations. So that's something that...
B
D
So that's when you should start thinking about maybe migrating your production applications up to 14. For the first six months it will be in the "current" state, so it will pick up all the new features that land on master, and some of the new features going into 14 at the moment are the V8 upgrades; there are a lot of streams changes going into 14.x, and we're moving some of the long-term doc-deprecated APIs to end-of-life. In terms of the release itself, there are some release candidates out there.
D
B
D
So previously a release would be in active LTS for 18 months and then switch to maintenance for 12 months, but recently we made the decision to swap that, which means now a release (so both 12 and 14) will only be in active LTS for 12 months, and it will be in maintenance for 18 months. So the whole lifetime of the release is still the same amount of time; we've just flipped the active LTS and the maintenance timeframes.
G
Love it. I think the more runtimes the better. It's definitely, like, a complicated question, and I think that there are individuals who like to try to frame things as, like, mutually exclusive, or one versus the other. I'm a huge fan of the concept of abundance thinking, you know, the more the merrier. I think about this especially in regards to some of the work that I do at TC39.

Node has been one of the sole non-browser-based vendors discussing changes to the language, and with the introduction of Deno and a number of other JavaScript runtimes that are not browsers (Cloudflare Workers also falls into this space), there are more voices for those other use cases at TC39.

It also helps us, as Node, in my personal opinion, identify things that are not specific to our runtime but are a bit more specific to the server-side or non-browser use case, which helps us identify what should or shouldn't be standardized across multiple implementations. A really great example of one of the things being discussed right now is import.meta across all the different environments, and an API such as import.meta.resolve.
G
C
So, kind of to echo what Myles said: I would love it if people would stop kind of trying to pit Node against Deno; it's not really a fair comparison. So, for example, I've seen numbers posted about how much faster Deno can start up than Node. You know, over the course of ten years a lot of people have a lot of feature requests, and those things start to pile up, so eventually I would imagine Deno will also start to slow down. But on the positive side, there are actually a lot of things from Deno that I would love to see make it to Node: I would love to have a built-in test runner, I would love to have, you know, a built-in bundler, things that, when Node was originally started, didn't fit the kind of minimal-core approach.
C
E
I just wanted to weigh in on one of the key differences between Deno and Node, which is the difference in the module system, which is very significant. It seems out of scope for Deno to support the npm ecosystem; they just rely on HTTP URLs, and, from my personal point of view, the npm ecosystem and node_modules has been one of the keys to the success of Node.js.
F
I just want to add a couple of things. One is asynchronous, event-driven programming: we in Node.js are the pioneers there. That also comes with some kinds of problems; for example, for some of the features, some of the capabilities, we don't have a specification, and the way the feature works itself becomes the standard, for example streams, the HTTP protocols, or promises. These are the kinds of things which matured in the community over time, based on the use cases and the workloads. I don't know much about Deno, other than the fact that it's a TypeScript-based back-end platform, but having a similar platform with similar objectives and capabilities is definitely going to improve the overall, you know, feature set, as well as help with standardizing and building specifications around features and capabilities which are derived from the event-driven architecture, which is good for the overall ecosystem, in my opinion.
B
G
I can add just, like, a small bit here, which is that, at least from the Node.js modules team's perspective, we are not looking, like, outside of browsers and Node itself as far as making decisions about our resolution algorithm. The resolution algorithm we have for ES modules is heavily based on capabilities that can be matched to, specifically, the import maps proposal that's being done over in the WICG, which is a W3C incubator.

The plan and hope is to allow, like, what is written in Node ESM modules to be more or less compatible with browser modules, without requiring a build step, simply through metadata. We are working on making sure that our loaders implementation will allow various individuals within the ecosystem to build custom things, so Yarn Plug'n'Play, tink, webpack, Babel: these were all use cases that we looked at, explored and thought about in designing, and in the continued design of, the loaders.

But as far as the Node modules team's perspective is concerned, the baseline resolution algorithm that we have is going to be absolutely minimal. We've actually been removing things from the algorithm, not adding things, but we are trying to make first-class support for extensions, so that people can work, you know, more hand-in-hand with Node itself, as opposed to some of the hackier things that have been done in the past in the ecosystem.
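For context, a build-step-free ES module in Node.js looks roughly like the sketch below (file names are placeholders); the explicit file extension in the import specifier reflects the minimal resolution algorithm being described, which does not guess extensions the way CommonJS require() does.

    // math.mjs
    export function add(a, b) {
      return a + b;
    }

    // main.mjs
    // The relative specifier must include the extension under Node's ESM resolver.
    import { add } from './math.mjs';
    console.log(add(2, 3));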
B
Thanks. Okay, it sounds like, you know, that's what we have for that question. We're down to, you know, the last five minutes, and I guess, you know, I was thinking maybe we could use this to let each of the panelists give, you know, a short... if there's some one more topic they'd sort of like to let people know about, or spend...
B
D
Sure, I'll start with how I got involved with Node. So the way I got involved with Node is that the team I'm on at IBM maintains their own port of Node, and then, by running our nightly builds, we'd rerun the tests, and while triaging the test results I found out that some of the tests were a bit flaky, and from there we started contributing flaky-test fixes, etc.

A bit later on, I think Myles offered to help mentor me, along with Gibson, into the release working group. That was kind of one-to-one mentoring sessions every two weeks, similar to what we're running now, and that got me more up to scratch with, like, how to build a release, and that's how I've continued to contribute there. So again, if anyone else wants to get involved in releases and is interested, joining those bi-weekly meetings is a good way of getting involved, I think.
B
H
So, first I started working with jQuery Mobile, and I was a contributor there, and it turns out their build system runs on Node.js, and so sooner or later I went from writing for the browser to writing JavaScript for the build system for jQuery Mobile, and then we got involved with IoT, and so I kind of, you know...

Somebody walked up to me and said: "hey, you know Node.js, right? We're doing this thing and it's using Node.js", and, yes, so either way, that's how I got to working with Node.js full-time, writing bindings for a C library that was used in an IoT stack, and then basically to doing work on the native add-on subsystem itself. And just at the same time we had N-API sort of becoming a thing, both for supporting multiple engines, that is, JavaScript engines, and for providing stability for native add-ons, so that, you know, you don't have to re-release your native add-on every time a new Node version comes out. And so that was kind of a segue for me: I ended up contributing a lot to N-API, and that's what, basically, the other collaborators noticed, and eventually they took me on as a collaborator, and then later...