From YouTube: Node.js Benchmarking WG meeting
So mostly what I've done since last time is, I'm just trying to think; I think I was actually pretty busy at Node Interactive and then recovering. So not too much. I have looked a little bit at some of the issues, in terms of finishing off the addition of no DCIS [unclear], but that's about it, Benedict. How about you?
Well, after coming back from the conference, I sent you a reply to that comment about my last pull requests, which you said were not landing properly because of just too many commits. So I have just one question: should I combine them into one, all the 22 commits into one pull request? Yeah.
Yeah, yes, okay, so I'll take a look. And the second thing I've been working on: during the conference with Anil and Suresh, we talked about adding the Ghost workload to Node.js, which we are working to set up, since you have the expertise. Basically, we are trying to run it here on an Intel platform, so we opened an issue to track that progress. Right, okay.
And Peter, if you plan to chalk out the work here, these are all the things where it might be good if we could let the different community guys chime in. So if you want to create a top-level list in either of the issues, people can chime in and even suggest which of these are useless, or other things that might open up. Yeah.
One thing I thought about was, I mean, it's not going to be possible for two people or so to go through all of them; there are many. So we do a bit, and then maybe write a little bit of a guide, or prioritize a bit, and then other people can pitch in and pick up a little bit of work. That would be really good.
Excellent, okay. So, our next agenda item. Actions: we didn't have any from last week, so that's good. The next thing is to go through the tagged open issues. The first one was the one that you've actually already mentioned, 159, which was the "add Ghost workload" one. So I don't know, do you have anything more to say on that front?
Sounds good. The next one is survey questions, but what I'm going to suggest is that we leave that to the end, spend what time we have left to go through the rest, and maybe close on that. So the next one is the status of Node core benchmarks, which is 127. So let's just open that, right? Okay. This is the one related to running the core benchmarks; I'm just looking to see if there's anything recent. There's nothing.
On the first step: anybody who wants to run the micro benchmarks for changes they're making can run them, and to enable that, what we just need to do now, I think, unless there's any more feedback, is tell the collaborators that it's there and point them at the little bit of guidance that's documented in terms of how to use it; then people will be using it going forward. I think, separately from that, part of what Peter is looking at is: if we come up with a subset that makes sense to run regularly, we might use a similar script. But first we've got to figure out what you can run at a reasonable interval and what numbers you can get, a reasonable number of numbers you can grab and chart and stuff like that.
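For the guidance mentioned here, Node core's documented micro-benchmark comparison workflow looks roughly like the sketch below; the binary paths and the `buffers` category are placeholders, and the exact flags should be checked against `benchmark/writing-and-running-benchmarks.md` in the node repository.

```shell
# Build two node binaries to compare, e.g. master and your PR branch
# (run ./configure && make -j4 in each checkout first).

# Run one benchmark category against both binaries; emits a CSV of timings.
node benchmark/compare.js --old ./node-master --new ./node-pr buffers > compare.csv

# Summarize the CSV with the bundled R script (requires R installed).
cat compare.csv | Rscript benchmark/compare.R
```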
I think one thing lacking is: if someone does want to run these micro benchmarks on a machine, what do they need to know? Would it be helpful if they know they're running on particular machine hardware, a particular CPU? If there is something, say, a buffer API test case, is there some feature, processor feature detection, something like that, or a glibc version, or is there?
Looking at it, there's some extra stuff installed on the benchmarking data machine, maybe even on the main machine. So anyway, as we do Ghost, that's probably worth thinking about: should we actually, you know, add that extra stuff to those new machines and have them run there instead? That could make sense.
HTTP/2 is being developed, I think, or they're working on it. Should we approach them? Hey, do you have any simple, reasonable use case which we can add to the performance benchmarking, so that any time in the future somebody makes changes, etc., it gets tested? Right, so I guess that's the way we approach them: hey guys, find or create a case so we can add it to the benchmark. Yeah.
Yeah, I think that's a good idea. So, you know, he's the right person to ask, to say: hey, do you have, like, a whole-system one? And prioritizing that would make some sense, given that that's just coming in. So okay, I can dig into that. Okay, so let me just take that as a note here.
I think, you know, since we are on the same path: James is also starting to update the stream APIs, right? That is one of the charter items going forward, so I think we should ask them proactively. Yes, there are thousands of benchmarks, but identify now some meaningful single-digit number of cases which can test those changes, and we add them into the Node benchmarking, so that in the future, whenever these guys are making significant changes, we identify a few critical benchmarks which can be added for all new big features.
Okay, so the repository is now hosted on GitHub, under the web tooling benchmark. It follows the V8 process: copyright, CLA, whatever. So whenever you want to contribute to this, you basically need to go through the same process as you do for V8. And the benchmark was rewritten to fully use npm and webpack, so all of the dependencies come via npm.
If you open it in a browser, then it has a very entertaining UI, but maybe I will find someone who makes it nice at some point. Though, from the test side, it's a fairly simple setup.
I think I cover most of the tools. But since we don't want to measure I/O, or rather it doesn't really make sense to measure I/O since we cannot do anything about it, it's only about the core workloads. So, for example, in the case of webpack, there are two things that consume most of the CPU time: one is the parsing and one is the bundling and the actual tree shaking. The parsing is done via the Acorn parser; that's why Acorn is included. And for the tree shaking, I'm still waiting on the webpack folks to come up with a separate benchmark for that; there's some trouble because of [unclear].
So from the organizational point of view, all of this is in source. Let's take the Acorn benchmark: this is just loading Acorn and then reading various source files. If you bundle this with webpack, all of this ends up in a virtual file system; otherwise, if you run it in Node directly, then it loads it from the file system. And then this thing just runs the Acorn tokenizer on it, and the parser, and in the end a full AST walk. So that's mostly straightforward.
What's important is that we measure only the actual workload; we don't measure the I/O. Also, all of this reading of files happens before.
Some of these benchmarks are more involved ones; some of them are even simpler. The TypeScript one is the simplest: it's just two files, or even one file, compiled to ES3 or to ESNext with TypeScript. So far we don't validate the result, which I think we still need to do, although it's probably very hard for an engine to cheat on this. Plus, if, you know, an engine manages to compile away the TypeScript [unclear], then maybe it's fine, whatever.
Okay, so I'm also tracking findings on these benchmarks; we have this investigation document for V8. It already found a couple of things: I already addressed five tricky bugs that were found by the benchmark, and we just basically started looking. So I already see that we have a lot of interesting things that we can fix, that didn't show up so far, or that only show up now because we run certain things for a longer time.
C
While
so
far
we
were
mostly
looking
at
startup
in
the
browser
and
there
you
don't
see
a
lot
yeah.
So
I
said
this
is
currently
blocking
on
internal
process,
so
we
are
not
making
public
noise
about
it
yet,
and
probably
since
this
is
mostly
about
improving
a
performance
of
tools,
we
might
want
to
not
sure
also
definitely
don't
want
to
turn
this
into
the
new
octane
benchmark.
This
will
be
just
one
driver
for
performance,
but
not
the
main
driver,
definitely
yeah,
and
that's
it.
For
me,.
Right, for continuously running it: I'm waiting for [unclear] from John David Thornton to provide this functionality, and I hope they will do it soon. I'm also waiting for Facebook; they want to send me another tool in case one is missing, [unclear], that would be nice to include. So maybe two weeks, maybe three or something.
The main question for users is: given the choice of both, and both have similar performance, what would you prefer? Right, so would people prefer callback APIs? Which I doubt, but that's just my gut feeling. And then it doesn't really make sense to go all-in on a promise-based API, right? So it would basically be, if...
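The callback-versus-promise question raised here is often answered in practice with a dual API, where a function returns a promise unless a callback is supplied. A minimal sketch; the function name and the fake async work are our own illustration, not any particular library's API:

```javascript
'use strict';

// Hypothetical dual API: promise by default, callback if one is provided.
function fetchValue(key, callback) {
  const result = Promise.resolve(`value-for-${key}`); // stands in for real async work
  if (typeof callback === 'function') {
    result.then((v) => callback(null, v), (err) => callback(err));
    return undefined; // callback style returns nothing
  }
  return result;
}

// Promise style:
fetchValue('a').then((v) => console.log(v));
// Callback style:
fetchValue('b', (err, v) => console.log(err || v));
```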
Oh yeah, right, to get the proper context and put everything into perspective with what comes out of it; but specifically the language side, like the optimization pauses, will be interesting. Yes, I've now looked at so many things on npm and I still can't grasp an underlying concept: some people bundle their code and minify it and whatnot and then upload it to npm, and some people just publish the naked ES6 code that they write.
Right, so even if you have a... yes, I agree, but even if you only point to a JavaScript language feature, let's say people need faster promises, faster async/await, then that also depends on Node, because the microtask queue is tied together with Node. All right, I think you don't need to draw a clear boundary here. So maybe just do what you suggested.
Whatever we want. How about workers? Yeah, so, for example, people might just not know a feature before it's even available, and what it is good for or where it can be used, [unclear] or workers or multi-threading, whatever it happens to be. So we can have a question listing some new features which are in the works right now, or the core ones people are thinking about, which we can add into the survey. Would that be helpful? Yep.
What I think might be interesting to also understand is whether people have benchmarks for their own code. And the reasoning here is: when we tell people to do testing of new versions of Node, do they actually catch performance regressions ahead of time? Or are they going to be like, yeah, looks good, and then later on they're like, oh, I see a slowdown, because they don't have any benchmarking on their end?
I think the sentiment behind the survey seems to be around throughput, and I've heard a lot of customers complain about memory. So I think it'll be interesting to just see what the perception out there is, and maybe ask the question of: if you think you have a performance problem, is it an I/O, a CPU, or a memory problem, and see what people say about it.
To put it another way, actually: I agree that it's hard to answer with any kind of depth, but I think it's more about trying to measure the perception of the community here. And the way I would frame it is: if you could ask the Node core team to invest in one area, would it be CPU performance improvements or memory improvements?
About Node.js usage: for example, what are the command-line options people are familiar with? I mean, there are just tons of command-line options with Node and V8: debugger options, tracing options, GC tracking options. How familiar are they with all these options they can use to collect various performance data?
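A few examples of the kind of flags being referred to here; these are long-standing Node/V8 options, and `app.js` is just a placeholder script name:

```shell
node --v8-options | less           # list the V8 flags Node will accept
node --prof app.js                 # write a V8 CPU profiling log (isolate-*.log)
node --prof-process isolate-*.log  # turn that log into a readable profile
node --trace-gc app.js             # print a line per garbage-collection event
node --trace-deprecation app.js    # stack traces for deprecated API use
```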
I think we need to start with the use case, so that everything else is in context, yeah. I would really love to see some answers in the direct and the deployed cases. For the rest, I don't really know where people will have more to say; maybe put the top five node modules last. The second-to-last question, make that the last one, because that's easy to answer even if you already went through nine questions, right? Sure.
That it's already second to last? That's fine, okay, yeah, sure, because if that comes too early, then people will just come up with all kinds of crazy ideas, right? And there was this other question: if you had to choose what to focus on. This is also one of these never-ending questions, so even the CPU-or-memory one, I think, should also be moved down.
It seems pretty good to me, I think, yeah.
I had talked to Tracy, the community manager, earlier about this. She seemed interested in that, and generally they'd like to actually, you know, send out more surveys, get community feedback, and sort of formalize a little bit how to make that easy to happen. When I talked to her, she sounded very receptive to helping, maybe using this as the first test case, so we'll see how that goes.
Yep, okay. So we can just keep track of that issue, just to see what goes on. Actually, I didn't mean to close that one; I'm going to reopen it and we'll see there. Okay, so we're just at time. Was there anything else people want to discuss this week? I guess the last thing we should note is that the meeting conflicts with [unclear], so I wouldn't be able to make it.