From YouTube: Node.js Foundation Benchmarking WG meeting 25 September
A: So far we have in attendance myself (Michael Dawson), [unclear] and Anil Kumar, and we'll see if anybody else joins along the way. In terms of stand-up, I've been working to help get the Node-DC-EIS benchmarks running in the standard jobs and publishing the results into the charts, which we do have, although there are a few issues with that: it seems to be hanging fairly regularly, so we need to figure that out. There have also been a couple of rsync issues where, for some reason, I log into the data machine and there are, like, a million rsync processes running, so I'm trying to figure out what's going on there, or at least clean it up, so that things continue to run a little more smoothly. So that's most of the benchmarking stuff I've been working on. How about you?
C: Yeah, hi. So, after we talked about the Node-DC-EIS runs not finishing properly, I looked at it and tried to modularize it, and I just issued a pull request on my repo with all the modifications against the most recent version. Then I also did three runs, with the Node v8, v6 and v4 versions, on our benchmarking setup, and it seems to be working fine: all three runs pass and they finish in time. So we can give it a try now, okay? So basically the changes are going to be like...
A: Good, that covers the stand-up, once we've figured out the hang. You have a PR for the changes, so yeah, okay, sounds good. We didn't have any actions from the last meeting. In terms of the issues, we were going to talk about the survey questions for the end users; I think, since we don't have very many people, we should defer that to the next meeting, right?
F: I was just thinking, since I keep thinking about what we'll talk about at Node Interactive, about one of the things we were looking at. We have the tests being done on the Node benchmarking machine, we have some testing done at our Language Performance Lab site, there might be some tests being done by Myles, like the Canary in the Goldmine (CITGM) project, and there are some tests being done as part of the builds or something. Is there a way to synchronize those, or do we even need to?
A
Pretty
sure
I'm
pretty
sure
the
sitting
stuff
I
know
what
that
is
and
that's
different.
That's
basically
just
running
the
tests
that
are
built
into
the
modules,
so
there's
no
performance
testing
there
and
in
the
bill
there
really
isn't
any
performance
testing
either
so
I,
don't
think,
there's
any
overlap
in
any
of
those
kinds
of
jobs.
So
far.
F: What I was saying is: do we want to leave all these things totally independent, or is it good to create some kind of synchronization among them? Before an important release, is there a human checklist that asks: how are the Node benchmarking numbers looking? How is the Canary in the Goldmine looking? How are the test suite numbers looking? Are we using that information to determine, hey, do we have any showstopper in the release build or something, right?
A: So I guess your suggestion is, like: definitely the CITGM is used, and the build results are definitely used. The benchmarking results, they're not; we don't end up getting a number that's directly on the release until it goes out. Having said that, we do use it. You know, I do keep an eye on it, and, for example, when we weren't seeing the change for the startup time, I did ping Myles based on that to say, hey, why hasn't master gone up to what we expect? So you could...
F: Yes, that's exactly what I'm indicating, because people who want to know the number, unfortunately, today have to go and run the benchmarks themselves, right? So, to say it again: post-release is too late. I feel like it's something we should be doing as part of the release, the same way regression testing is; another thing that will stop the release if it fails critically or something, yeah. So my proposal is to include the benchmarking as part of the release.
A: You know, the main benefit should be that you can use it across different versions of Node, and it does help with VM neutrality; in terms of, you know, it also allows you to run your modules with other VMs as well. But the real selling point, in my mind, is that you can use it across versions: it isolates you from V8 changes and recompilation across versions.
A: I mean, basically we defined our own API, and it's a complete API. So, like, NAN isn't complete in the sense that it doesn't cover types and things like that, and so you're making calls, well, you're making calls with, you know, direct V8 calls, and you're using V8's types, so if any of those change, you'll be out of luck. N-API wraps all of that, so that we have our own types, our own set of APIs. And, you know, we haven't kept it up, but early on we had, like, before v6, and I think actually we went all the way back to v0.10, and showed that we can have a module where, you know, behind the scenes, we took care of the differences in V8 and kept the API separate. I'm sorry, say that again?
C: A native module? No, not really; we don't use any native modules as such. We were thinking of using some of the GC flags with V8, to parse the statistics while it was running, but we are not using them.
C: As part of that, we created a C++ addon. The one that already exists is the simplest addon, but there's a performance [cost] if we put it into that, because of a lot of changes, so we created a different addon; it's not submitted to the npm registry yet, but there are some changes. Is something like that what you were thinking of, right?
F: The other part is: I know the security space in Node is a pretty wide topic. Are there some authentication and other modules in Node which are not being exercised by any of the benchmarks, and should we try to do something to get them into Node? Yeah, or would that be separate work or not? Do we see a need for any small, lightweight Node authentication or security benchmark?
A
Yeah
certainly
I
mean
anything
covering
other.
A
common
youth
case
is
I,
don't
know
if,
like
security
is
often
like
a
one-time
authentication
and
that's
you
know,
therefore
not
going
to
really
factor
into
your
benchmarks
or
if
it's
something
that
in
some
use
cases
every
single
request
has
to
do
that
authentication.
Then
you
know,
then
that
would
be
more
interesting
right.
C: Right, like JSON Web Token authentication. Let's say you create a JSON Web Token, with bcrypt, that gets validated; you can have some expiration timestamp in it, which we can check. So it's this whole string: you have to decode that string, then you have to do the compare, the bcrypt compare, so there's quite a bit of CPU-intensive stuff going on there. We can try to replicate that functionality, yeah. That would be definitely interesting.
A: I'd say this is a use case where the benchmarks make sure we're not doing anything that negatively affects the performance of the authentication. Although, would that end up testing the module versus the runtime itself? That's not necessarily... you know, that's still useful if, like, everybody uses bcrypt in a particular pattern.
C: Then we start the client, and then, before anything starts, we collect some memory footprint data; then we collect some CPU statistics during the run; and then, after that, we collect another memory footprint sample. I'm trying to kind of list out all the steps we're doing and what we need to do. So I just wanted to see if we can capture that somewhere, so that, if someone does want to add a new workload, these are approximately the steps that need to be done.
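The measurement steps just listed (memory footprint before the run, CPU statistics during it, memory footprint after) could be sketched as below. The workload() function and sample labels are placeholders, not the actual Node-DC-EIS scripts.

```javascript
// Rough sketch of the measurement steps described above: memory footprint
// before the run, CPU statistics during the run, memory footprint after.
// workload() stands in for the real benchmark client.
function snapshotMemory(label) {
  const { rss, heapUsed } = process.memoryUsage();
  return { label, rssMB: rss / 1048576, heapUsedMB: heapUsed / 1048576 };
}

function runInstrumented(workload) {
  const samples = [snapshotMemory('before')];
  const cpuStart = process.cpuUsage();    // user/system CPU time so far
  workload();                             // the actual benchmark run
  const cpu = process.cpuUsage(cpuStart); // CPU consumed during the run
  samples.push(snapshotMemory('after'));
  return {
    samples,
    cpuUserMs: cpu.user / 1000,
    cpuSystemMs: cpu.system / 1000,
  };
}

// Placeholder workload: burn a little CPU and allocate some memory.
const result = runInstrumented(() => {
  const buf = [];
  for (let i = 0; i < 1e5; i++) buf.push({ i, s: 'x'.repeat(16) });
});
console.log(JSON.stringify(result, null, 2));
```

Structuring each step as its own function, as the speaker describes next, also makes it easy to slot a new workload in between the before/after samples.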
C: Yeah, and that's what I tried to do: I created each step as one function, which does that particular thing, right? Okay, so hopefully it should be stable now on a different machine. Definitely, we can try it out for a couple of days and then we'll know. That's fine, I'm okay with that, I guess.
A: What I was thinking I might do is just take one of them, like, say, master, and instead of cloning from, you know, the main repo, clone from your PR, you know, the branch for your PR. Yeah, if that seems to run okay, then, right, you know, I can land it and then we could switch over the rest.
C: So, what I can think of, yeah: we have, like, a MongoDB starting script, we have a workload with its own script, and then we create the footprint in different log files. All these different pieces are all moving pieces, and they all create their own log files, so there's no one place. Maybe we should kind of have one whole log. Yeah, in a new PR I tried to create the whole log in the main log file, but the MongoDB output still comes out separately.
C: The cache output comes out separately too, and I'm thinking: is there a way we can make it into one log, so there's only one log for the whole run, so we can build other tools based on that log file? Right, the reason I'm saying that is that MongoDB, when it starts, places its own log, mongodb.out, yeah, and that's how we tell whether it has started: whether the version number field has been outputted. If it's not, then it just goes into a sleep mode, right? Okay, it's a different log file altogether, and so we still don't know.
C: And even if you see the version number, it doesn't really mean MongoDB is ready to accept new connections. What I'm trying to see is whether MongoDB is ready to accept connections, for example via the line which says "connections open", and then we know it's ready to accept connections. Those are changes we could make, because we have these separate scripts, right? So, if it's okay, I'm going to take an action item to see whether we can consolidate all the logs into one log, so all these tools can be based on that one log file.
A: No, absolutely; I think any work you want to put into sort of making more of a framework, for people to be able to more easily run all of this stuff, would, yeah, okay, oh yeah. You know, if there was more, if you'd said, okay, I want to do DC-EIS, and you had more of a thing where you could have just plugged it in, and you can think about how to make that available, then yeah, that would be great, yeah.
A: That's good. Our next meeting is scheduled for the 8th, which is a week after; so next week we're at Node Interactive, where hopefully we'll all get together out there, and then we can get together the week after that. Yeah, okay, and yeah, so I'll see if I can get that in. I guess it's at least good that you can point to the benchmarking results for Node-DC-EIS on the site now, right? Yeah.
A: To start with, you know, what would be useful is, like, somewhere in the benchmarking repo we could have something that has links, you know, maybe with some kind of summaries, like: here's a link to the stuff we're running. If we then want to have links to things that are running elsewhere as well, that would make sense, yeah, along with a description of what they are and how they work and stuff like that. That would be quite good, I think.
A: Sure, okay, so, just before we go, I'll ask if there are any questions from people who are watching online.