From YouTube: Node.js benchmarking WG meeting, Nov 18 2016
Description: Node.js benchmarking WG meeting - https://github.com/nodejs/benchmarking/issues/69
B
Yep. I've not done a massive amount, to be honest, the last couple of months; I've had quite a lot on, so I haven't had a chance to look into Node too much. However, things are a bit more settled now, so hopefully going forward I'll spend a little bit more time doing stuff. Give it a few months and we'll start ramping up.
D
Hi. I'm not sure if people know, but the V8 team has been working on a new interpreter, the Ignition interpreter, and they've been doing quite a bit of work on it and on the optimization pipeline. So I've been looking at the performance, and working with the team on the performance of the new interpreter and how it works with Node, and also trying to figure out, looking at Acme Air, how it behaves with some of the things that are coming down.
A
Hopefully we'll drum up some more support. Other than that, I think those are the two main things I've been doing since we last got together, so we'll leave it at that.
A
The other thing I'll add on that front is that Wayne Andrews, who works with the team here at IBM, is also going to join and start to get involved in helping to fill out some of those, and just generally participating in this work. He couldn't make it today; it's pretty late where he is in the UK. We will see him next time. Okay, so that is the actions from last time.
A
I mean, the other approach we can take, too, is: just because we get access to the benchmarking machine doesn't mean we have to immediately give access to everybody, right? It's more that we can give it to people who are part of the benchmarking working group, as opposed to anything else. Right.
D
I guess leaving that as it is also has the optics effect that we're not up to date with the status. So in one way I can see that as an argument that the list should be current, because we are actively looking at things, and that's one thing that's not current. So that could be a motivation, but I don't have tremendously strong feelings. If we do have benchmarking machines that are actually available, though, it would make more sense to use the group, right?
B
There's quite a few people who haven't responded. I'm looking at the list; I know Yosuke, I actually spoke to him at Node Interactive, and he was saying that he was hoping to come along to more meetings, but quite often the times didn't match up. Right, yeah. I'd say it's a shame, and I'd add that there could well be other people on that list who have the same sort of issue, where the times don't match up.
B
I don't think so, no; it wasn't scheduled. It was just at one of the sessions at Node Interactive; he introduced himself as a member of the benchmarking working group. But I can have a quick check and see if he's on the list.
A
As for a few of the other people: some of them are IBMers, so I can definitely reach out to them and ask them to answer. There's no rush here, I don't think, so I'll try and do that, maybe reach out a little more directly, and we'll see where we get on that front. Okay.
A
Right, so this issue is: Michael actually put together a number of micro-benchmarks for let, const and var, sort of showing the performance comparisons between those three, I think along the lines of having the data so that we know which ones to recommend. Part of what he suggested was that we add them to the charts that are generated nightly. My question was that they seemed a bit different from the ones we are charting nightly, in terms of...
A
They're not necessarily, you know, the kind where, when it goes down, you say: okay, that's a performance regression, we need to change it. It was more of a comparison between them, in terms of the motivation. So I was wondering if that was the right way to keep tracking them. I could see maybe generating jobs, but publishing the data some other way. I don't know what you guys think on that front.
B
Well, I guess if it's a fair comparison, we'd want to make sure that the different versions of Node maintain the same gap, or perhaps that newer versions get faster, right? So maintaining a comparison across the different releases could potentially be beneficial. I do know, though, that I think one of the reasons for the let/const/var difference was a bug in the V8 optimizer, so perhaps with the new one that Ali was talking about, that may change things quite a bit anyway.
D
Yeah, I would agree with that argument. I mean, I'm sure you could take anything on jsPerf and it could be a benchmark; I don't think everything necessarily should be displayed on a chart. I think it's important to have it somewhere that it's tracked, or where you can run it quickly to check the performance, but charting might be a bit too much, right?
A
So I guess then it's a question of what we should do for these other kinds of things. We could set up a benchmark job, you know, a nightly benchmark job that runs something like this. Now, there's already a whole bunch of micro-benchmarks in the Node.js core runtime; you could almost make the argument for any of those.
A
The challenge there is that they take a long time to run, and there's so much data, again, that unless somebody's actually looking at them, I'm not sure exactly how it's useful, right? Yeah. And so it's a question for these ones: it's interesting to have the comparison as a point in time, but is there some ongoing need to run them every night, compare them and make that data available somewhere? That's the kind of discussion.
B
I guess perhaps we could set up some jobs to be run on a less regular basis, maybe once a week or once every couple of weeks, and perhaps have a separate page of charts. They're not exactly headline benchmarks, right, but they'd be available for someone to go and check out every so often, just to make sure something funny hasn't happened that we've missed. Yes.
D
I can see something like that, certainly. So I think then, the signal you want is when something is 50 times slower, right? Right. You don't want to track small movements on these metrics, because then you're going to be inundated; but if something is 100 times slower, yeah, that's something we probably should look at at some point.
A
Right. What I'd really like, and I haven't figured out how we get there yet, is some way that you add up all the numbers for the small ones and track that aggregate number, and if that changes significantly, then you go and look at which thing contributed to it. Now, I know that it may not be easy to get something that's useful, but that would be something, you know.
D
The challenge with these micro-benchmarks is: you pick up the new V8 5.5 and, let's say, holey arrays have become faster but maps have become slower. What the geometric mean is going to be, I don't know; but you do care if, say, arrays became ten times slower. So I don't think it's the aggregate; I think the aggregate will lose some information, but large, substantial swings in the individual numbers may be interesting to look at independently.
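The tension being discussed, a single aggregate number versus per-benchmark swings, can be sketched in a few lines of Node.js. This is purely illustrative: the benchmark names, the ops/sec figures, and the 3x threshold are invented, not the working group's actual data or tooling.

```javascript
// Hypothetical sketch: report one geometric-mean headline number across
// micro-benchmarks, but also flag any individual benchmark with a large
// swing, since the aggregate can hide a big regression behind a big win.

function geometricMean(values) {
  // Sum logs instead of multiplying, to avoid overflow/underflow.
  const sumOfLogs = values.reduce((sum, v) => sum + Math.log(v), 0);
  return Math.exp(sumOfLogs / values.length);
}

// ops/sec before and after a runtime upgrade (invented numbers)
const baseline = { holeyArrays: 1000, maps: 2000, stringConcat: 1500 };
const candidate = { holeyArrays: 4000, maps: 200, stringConcat: 1450 };

const SWING_THRESHOLD = 3; // flag anything 3x faster or slower

const ratios = Object.keys(baseline).map((name) => {
  const ratio = candidate[name] / baseline[name];
  if (ratio >= SWING_THRESHOLD || ratio <= 1 / SWING_THRESHOLD) {
    console.log(`large swing in ${name}: ${ratio.toFixed(2)}x`);
  }
  return ratio;
});

// The 4x win and the 10x loss partially cancel in the aggregate.
console.log(`geometric mean ratio: ${geometricMean(ratios).toFixed(2)}`);
```

Here the 4x array win and the 10x map loss mostly cancel in the mean, which is exactly the information-loss concern raised above; the per-benchmark check still surfaces both.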
C
Hey, okay.
A
So there was a recent case where we had floated a patch that actually made things much slower in a particular case, and we believe that the Octane benchmarks would have caught that. So that kind of indicates that we should be running Octane. I don't know if it reports a single number, but, you know, we could try and report that number as one of our charted numbers.
D
It does report a single number, I believe. Okay.
A
So I think, just based on that, there's a concrete case where we had a regression, we didn't catch it for a long time, and Octane would potentially have helped; yes, I think in this case Octane would have helped us catch that. So we should say: okay, let's just add that one. I don't think it should be too hard to run Octane.
A
I was just saying: if you can ping Yang, she was the one who was actively involved last time. Will do. Okay, next one: benchmark charts' y-axis offset value. So I noticed, and I don't know if this is recent, but I noticed when I was investigating a particular issue...
A
There was quite a long, sort of, quite a long write-up of this investigation in a particular area, around array cloning and array sums. The net of it at the end was this question: would the results be consistent across platforms, and does V8 have some tests similar to this? So I wasn't sure if Acme Air covered that directly, but I wanted your thoughts, and actually a quick ping to you, Ali, just to see if you were aware of any other benchmarks used by the V8 team.
A
We already talked about 70, which is the call for people who are interested. 69 is our next meeting. There's been some discussion on 67, on whether we should enable TCP no delay by default in Node.js. So I agreed to add it in: I did a clone of our Acme Air job where... well, no, I didn't point it at that; as part of that job I applied a patch that set TCP no delay on, did some runs, and provided data to show the effect.
D
So yeah, the V8 team is working on a new optimization pipeline that includes the new Ignition interpreter and uses more of the TurboFan optimizer, and this is the Ignition staging configuration at the moment. The exact way it works will change going forward, but this is intended to be the next shipping configuration sometime next year, so it would be good to get formal feedback from Node.js, not just from the benchmarking working group but from the broader community as well, on how it performs.
A
So I'm just going to, basically, open an issue to start the discussion. I'm not quite sure how I would add the command-line options, but generally it's cloning the job that we have, and being able to run it in another mode should be relatively straightforward. That makes good sense to me. So once you open the issue, we'll start pushing that forward.
D
So one thing I wanted to ask about on the doc is something I found, and I don't know if you see this as well: from a latency point of view, the work that the workload does is really fast, and I don't know if JMeter has reporting with finer granularity than milliseconds.
D
Milliseconds, I mean, sorry: when I run it, it runs in three milliseconds and I'm done. Every run we've had comes out at three milliseconds, even after changing to different versions of Node. So if it reported, let's say, microseconds, or if there were slightly different metrics... right now I'm actually looking at the average latency.
B
Well, probably nothing. I think the only number that we get out is the average latency for a certain period of time, and then we do some maths on that to get the average for the other data points. But yeah, I can have a look. JMeter is pretty extensible, so I'm sure there must be some way of getting a perhaps more accurate number out of it. I'll have a look into it anyway, and then I can report back.
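The "maths on the averages" step is worth pinning down: per-interval average latencies have to be weighted by the number of requests in each interval to get a correct overall average; a plain mean of the interval averages would treat a quiet interval the same as a busy one. A small sketch, with invented interval data:

```javascript
// Combine per-interval average latencies into one overall average,
// weighted by how many requests each interval actually served.
function overallAverageLatency(intervals) {
  const totalRequests = intervals.reduce((n, i) => n + i.requests, 0);
  const totalLatency = intervals.reduce((t, i) => t + i.avgMs * i.requests, 0);
  return totalLatency / totalRequests;
}

const intervals = [
  { avgMs: 3, requests: 1000 }, // busy interval
  { avgMs: 5, requests: 200 },  // quiet, slower interval
];

console.log(overallAverageLatency(intervals).toFixed(2)); // 3.33, not 4.00
```

The unweighted mean of 3 ms and 5 ms would be 4.00 ms; weighting by request count pulls the result toward the busy interval.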
A
Yes, okay. So I guess maybe we should call it a day for the meeting this week, given the time, and I'll set up another one in, say, three to four weeks, or whenever we can agree on through Doodle. Yep, okay. Okay, great, thanks to everybody in attendance, and we'll see you next time. Okay, bye.