C
I think Matteo needs to be promoted; feel free to make me co-host and I can keep track of that.
B
Here we are just talking about the clarification around real-world risks and use cases of the dvn module. I believe you added that.
D
Sorry, I missed the exact sentence. I was just joining with the audio, so can you repeat?
D
Well, yes, it's an old one. To some extent we never came to a conclusion. We talked a little bit at one of the previous meetings, and there was some conclusion posted by you, I think. There is a PR open, so I think we can remove it from the agenda. There was a PR open with a bunch of LGTMs, so I think it's going to land soon, so it can be removed from the agenda.
C
I don't think so. It's still the more troublesome repositories; we just need to figure out a plan for moving those.
A
Yeah, okay, so let's keep it for the time being as well. Probably there should be some discussion around it to see some concrete action plans coming out of this discussion, and spinning it out from the TSC agenda, so that some activity can be done rather than doing a status check on our weekly cadence.
A
Next one is on the TSC repo, 1133: TSC members and Google Calendar event for meetings.
C
Yeah, I think the last step on that one (I think we've all got the invites) would be to add the Zoom link, just so it's easier for us to access it, and then it can probably be removed from the agenda.
D
I mean, it's... Let me open it up. I don't remember adding it back here, but...
D
Yeah, I agree; that's my full sentence on it. With the prototype, we know it works. I've tested the prototype in a few client settings, in actual production settings. It delivers phenomenal results in terms of memory consumption. So, if your Node app is scaling...
D
But that's it, okay? So you need somebody to take on the work if you want to make it official. Essentially, there's a lot of work to make it official, if you want.
D
I think there is an ask, but there are not enough people volunteering to do the work.
B
I think it's also not a very well-known feature. It's very specific to V8 and similar runtimes, and it's not very commonly used by other runtimes at this point. I'm not aware of any runtimes other than Chrome implementing it or using it.
D
An action here would be, you know, to put it in some more prominent way to some of our corporate sponsors, if they want to push this. I don't know; it seems like a feature that, if you're a large Node deployment, you might want. But if you are a company just using Node for stuff, it's probably not the first place where you would invest your money.
D
We can do this. This is somewhat less...
D
I'm way less concerned now than I was in the past, in the sense that we can say that most of our users use N-API, and I think that's safe. So that was my thinking; and if they need that, well, fine, they will rebuild the modules. I don't necessarily...
D
You will need the other build, because there are a few users out there that are currently using Node.js to execute applications that require 8 to 12 gigabytes of memory. So that is a real thing. Okay, now we could decide to ship it in a major and just maintain an unconstrained version for Linux, for example; yeah, that can be another possibility.
D
...not willing to fork out to invest in it, so essentially it probably should not be a priority.
D
I think it's per heap, but I think if you move from one isolate to another, you get different ones. But I will not put my hand in the fire on it. Okay, thanks.
D
It works... it just cuts it. It tells you: 32-bit pointers, that's how it works. Okay, it's not magic; it just cuts pointers down to 32 bits, moving everything to 32-bit pointers when referencing from one object to another. It's actually very simple to understand.
D
Yeah, yes, and it's significant; it's extremely significant. We ran it in the case of a lot of very small objects, like, for example, React server-side rendering an application, or things like that. It was, I don't know, 35 to 40 percent of memory saved, something like that. It's kind of impressive looking at it, but it comes with a limit, right?
D
Or it was shielded by some other bottlenecks that we have, which is also possible. But the last time I checked was a long time ago. If I recall correctly, it was actually running a little bit faster, because there is less memory being used, so more of it can go into the CPU caches. So I think it's actually slightly faster, but...
D
A long time has passed, but if I recall correctly it was slightly faster; you know, not 50 percent faster, and I don't think even 10 percent faster, okay. But it also depends: I didn't test big loads, just small loads.
A
There is an additional activity involved in dereferencing every pointer, because you need to recompose the 64-bit form of the pointer before you actually get the value it points to. So there is an additional penalty associated with every operation, but probably that gets nullified by the CPU cache optimization which Matteo is talking about; that may be huge, and that is masking this small penalty.
D
As I said, from my tests there was almost no difference, if not a little bit of a benefit; but to be honest, it was hardly... it was not enormously significant. The memory, though, as I recall, is radically different: it was a huge memory saving for almost no penalty in practice. We did some testing of it, and there was literally nothing interesting on the penalty side.
D
I didn't see much interest in shipping this feature. Now, we might want to do some promotion to attract some collaboration and things, but I don't know. It's a good question, though; it's a good thing to consider.
A
Moving on to the next one, 108, mini summit: I see this issue is closed, so there's nothing to discuss on this, as per the last comment by Michael.
A
So that's the list of items in the regular agenda. I know we also used to discuss the CPC update and the strategic initiatives. Is there anybody in this meeting who can update on those? I don't know which order or which item we used to discuss, so if anybody has any idea, please go ahead.

David
Just a question: should fetch be a strategic initiative?
A
The strategic initiatives were crafted in the past basically to add some traction to a specific initiative which is strategically important but identified as requiring some specific, focused effort which it was not otherwise getting. So if fetch qualifies for that, absolutely yes.
A
So probably, I mean, if you're interested or if you want to champion that, I would suggest you open an issue in the TSC repo and add a label so that we can have a round of discussion. I don't know whether we should have a vote for that, but if there is a general consensus that fetch should be, or can be, a strategic initiative, then we can go for that.