From YouTube: Node.js Community benchmarking WG meeting 11 Sep 2017
A: It'll be an active listener for this particular combination. We can start with our standard agenda, which starts with stand-ups. Myself: I did some work to help get DCIS working. It is running nightly now, generating some stats just locally in the jobs themselves. The next step is to get that landed and then look at getting the graph generation working. He's been doing most of that work, and I helped out a little bit; I also spent some time with him on the Ansible side of things so that he can start to work on Ansible templates to configure our benchmarking machines. It turns out we needed a few more things installed for DCIS, so we've really hit the point where we have to start using Ansible for those machines; otherwise, if we ever lost one of them, it would be hard to get back to a working state.
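The Ansible templates discussed above might look something like the following playbook fragment. This is a hypothetical sketch only, not the WG's actual templates: the host group and package names are assumptions, based only on the transcript's mention that DCIS needed a few extra things installed.

```yaml
# Hypothetical sketch: host group and package lists are assumptions,
# not the WG's real Ansible templates.
- hosts: benchmark-machines
  become: true
  tasks:
    - name: Install OS packages the benchmarking jobs depend on
      apt:
        name: [git, python, python-pip]
        state: present

    - name: Install Python modules the nightly DCIS jobs import
      pip:
        name: [requests]
        state: present
```

The point of capturing this in a playbook, as noted in the meeting, is that a lost machine can be rebuilt to a known-good state instead of being reconstructed by hand.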
C: Yep, not too much from me. I wrote up some instructions, some clear instructions, on how to use the Jenkins job for running the benchmarks a week or two ago. We can still go and turn that into a document in the repo, so people can have a go and offer some comments on it.
E: Yeah, so I've been trying to integrate DCIS into our benchmarking, and thanks to you, Mike, for getting it working and helping me with all those tools. There were just a couple of Python packages missing on that machine, including the requests module, but after installing them it seems to be running okay; we have not seen an issue, no hang-ups, nothing like that. So the last part right now is trying to develop things so that we can generate the data. One is the pull request — and once that's landed...
A: Basically, once we land the configuration files, we have to do two things. First, land those files with the new IDs — I forget whether they're XMLs or whatever, but the configuration files that generate the charts. Then, on the nodejs.org side, there's one file under the web directory that we need to update to include the new charts as well, and then they'll show up there. The new charts will show up there. Yeah.
E: Okay, so that was one. The second one is the build infrastructure: I'm trying to look at it and replicate it on a couple of my VMs. So thank you, Mike — you gave me a lot of valuable information — and hopefully I can plug it in, make all the changes for DCIS, and issue a new PR.
H: Which doesn't include TypeScript yet, which is the main thing that we were going to be building, but there are some new pieces. I uploaded the version that contains Chai as a benchmark — some plain asserts, and some using the BDD API — also exercising the expect interface, so that we get some coverage for that in general.
A: Actions from last meeting: I still have to suggest some updated wording for our participation guidelines, and I still haven't done that — sorry, I will eventually do it. I don't think we have any other rollover actions, so the next part is going through issues that were tagged. The first one there is issue number 136, in which Benedikt put together a set of questions to ask end users. I thought we could just go through that, get feedback, and then hopefully get to the point where, you know, either...
A: So, I don't know if everybody has this open, so maybe I'll read through the questions and then we can discuss a bit. The first one was: what's your primary use case for Node — web development, tooling, standalone servers, cloud services, or other? What kind of language dialect do you use — ES5, ES6, ESNext, whatever supports the most recent version, TypeScript, other? Do you run optimization passes on your JavaScript before you deploy — none, minification, bundling, obfuscation, other? What is your preferred way to write asynchronous code — callbacks, promises, async/await, other? What are the top five Node modules that you use most often? If you could choose which JavaScript language feature would be optimized next, what would it be? Are you using the latest Node releases? Those are a good set of questions. The first one I'm wondering about a bit — the primary use case. There was a survey that was completed recently, and I'm just wondering if we should look at that.
A: Yeah, I also thought that. We can find it — there's another, more detailed one. So, in terms of the ES5/ES6/ESNext question, I guess that's interesting for optimization at the V8 level, right? Yes, okay. And then the optimization-passes question is related to "what tools do you use", which is good.
H: It's a pretty long thread already, but it's about performance optimizations for promises and async/await in general. In the last weeks we have been looking into that quite a bit, together with Kate and Kata, and we're maybe even considering some changes to the specification that would let us handle certain cases where the code is written in an asynchronous way but the promise is already resolved immediately, so that you can run it at almost synchronous performance. Odd, but that's how it is. I am not sure how interesting this is for the benchmarking work, because this is mostly at the language and engine level.
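The resolved-promise case being discussed can be illustrated with a small Node.js sketch. This is my own illustration, not the V8 team's actual benchmark: awaiting a promise that is already settled still has to suspend and resume via the microtask queue, and that gap versus a plain synchronous call is the overhead a spec change could shave off.

```javascript
// Illustration only (not the V8 team's benchmark): compare a plain
// synchronous call with awaiting an already-resolved promise.
function syncAdd(a, b) {
  return a + b;
}

async function asyncAdd(a, b) {
  // Promise.resolve(a) is already settled, but `await` must still
  // suspend and resume through the microtask queue.
  return (await Promise.resolve(a)) + b;
}

// Time a synchronous function, in nanoseconds per call.
function timeSync(fn, iterations) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) fn(1, 2);
  return Number(process.hrtime.bigint() - start) / iterations;
}

// Time an async function the same way, awaiting each call.
async function timeAsync(fn, iterations) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) await fn(1, 2);
  return Number(process.hrtime.bigint() - start) / iterations;
}

(async () => {
  const n = 1e6;
  console.log('sync :', timeSync(syncAdd, n).toFixed(1), 'ns/op');
  console.log('async:', (await timeAsync(asyncAdd, n)).toFixed(1), 'ns/op');
})();
```

On typical Node.js builds of that era the async path is noticeably slower per call even though no real asynchrony is involved, which is the behavior the spec discussion targets.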
A: Well, I mean, it's interesting in the sense that if it affects the main use cases, and something Node could do would change the performance there, that would certainly matter. And then the "if you could choose which feature would be optimized next, what would it be" question — yes, good.
A: Yes, what I'd suggest is: maybe people can think about it, and we should probably go back and look at the other survey just to see what was and wasn't there — because I'm sure, if we ask for a survey with certain questions, they'll want to know how much the other one already covered. And then maybe when we meet next time we can agree on, "hey..."
A: No problem — welcome, I guess, just for the other participants: he's shown interest in getting involved in the benchmarking workgroup, and we've seen him quite active across the repo, so it's a great thing. So, just as a welcome, and good to see it going forward. On to the next issue that we have, which is the status of running the Courtland — oh, Corbett — track.
C: Perhaps, therefore, people want to take a look and see whether there is stuff missing, or stuff that isn't working quite as you'd expect. And then there's a PR as well for documentation — an easy-to-use guide that reminds other people involved in the project how to go and run some of these cloud benchmarks, perhaps against a pull request.
A: I was just exploring whether a drop-down or something would make sense, but yeah, okay. I guess the other thing is: does it make sense — so you're going to do the pull request to get the doc landed, which is good, or comment on it if it landed in the meantime — does it make sense to reach out to... I'm trying to remember who it was who asked for it in the first place.
C: I think Aleena was one of the people who was originally quite interested in it. When we put together the first runner, he had a go, and I think that was around the time when the machine — or the Jenkins, I think — ran into some issues, and then I don't think anything really happened after that, so it kind of got dropped. Have a look.
H: ...a simple runner, which is essentially a copy of the Octane runner, but I removed a lot of things that we don't need and put some other stuff in there. So the runner contains some polyfills, and then BenchmarkResult, Benchmark, and BenchmarkSuite classes, which are mostly copy-and-paste from Octane with some things cleaned up.
H: This is also not the interesting part to read. You then just create a benchmark by instantiating a BenchmarkSuite and passing in an array of benchmarks. For Chai I only have one benchmark in here, which is also mostly copy-and-paste from their own test suite. So a benchmark needs a run method, or a run function, defined on it.
H: So what does run do in this case? For example, the setup collects a list of functions using the BDD interface, which you can just execute later; they are stored in an array, and run just goes through it and invokes them. The runner does that for a certain amount of time and then measures the ops per second.
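The runner shape described here can be sketched roughly as follows. The class and method names are my own illustration, not the actual benchmark code (which the speaker describes as adapted from the Octane harness): setup builds an array of thunks, run() invokes them, and the harness loops run() for a fixed time budget and reports ops/sec.

```javascript
// Rough sketch of the runner idea (names are illustrative, not the
// real code): each Benchmark exposes a run() function, and the suite
// loops it for a wall-clock budget and reports ops per second.
class Benchmark {
  constructor(name, run) {
    this.name = name;
    this.run = run;
  }
}

class BenchmarkSuite {
  constructor(name, benchmarks) {
    this.name = name;
    this.benchmarks = benchmarks;
  }

  // Invoke each benchmark's run() repeatedly for `budgetMs`
  // milliseconds and record completed iterations per second.
  runAll(budgetMs = 1000) {
    const results = {};
    for (const bench of this.benchmarks) {
      let ops = 0;
      const start = Date.now();
      while (Date.now() - start < budgetMs) {
        bench.run();
        ops++;
      }
      results[bench.name] = (ops * 1000) / (Date.now() - start);
    }
    return results;
  }
}

// "Setup" collects an array of thunks (standing in for the collected
// expect-style assertions); run() just walks the array and invokes them.
const thunks = [];
for (let i = 0; i < 100; i++) {
  thunks.push(() => {
    if (i !== i) throw new Error('assertion failed');
  });
}

const suite = new BenchmarkSuite('demo', [
  new Benchmark('thunks', () => thunks.forEach((fn) => fn())),
]);
console.log(suite.runAll(200));
```

Separating setup (building the thunk array) from run (invoking it) keeps one-time work out of the timed loop, which is the same split the speaker describes.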
H: What was surprising to me is that this performs quite a lot of work, which is because of the way the expect interface works — and there are a couple of other things in there that we definitely haven't optimized in V8, because we weren't aware this was even an issue. So this alone was already worth it. I just imported the full browserify-generated module.
H: Yeah, just do that. For now it's a one-off process: you fetch it, build it, run the install, and then finally process the file. But we can probably also automate that in a way that makes it easy to experiment, to make it a bit easier. The data file just contains data — I took the source code of the latest jQuery.
A: So, I guess — any other issues people want to bring up or discuss this week?
F: Yeah, so I had two issues. The first is right there — the DCIS integration into the Node benchmarking — and I wanted to see if anyone has feedback on how, in future, we can make it easier, or whether it was already pretty easy. We wanted to see if it can be made much easier for a benchmark to be added in future.
F: And the other one: Node Interactive is coming, and I will be there, and we wanted to see if there is interest in meeting for one or two hours on the Friday, because there is a community day that Friday. We do feel that Node benchmarking is one of the important areas for furthering credibility, pickup, and other things.
F: I think what you find is that any time — like with Java or .NET — there is at least a set of customer cases which have been built up over the years for compatibility. In a similar way, we could look at where the Node testing infrastructure currently is, what the critical customer use cases are, and what different things we need to add to make it at par with Java and .NET with respect to enterprise testing of such runtimes, right?
A: You know, one way I could interpret what you're saying is: maybe we should be looking to add more things to the use cases. Or maybe it's a different thing — maybe you're suggesting we should look at the Java world to see what kind of benchmarking they have, to see if it gives us inspiration for other things we should be doing.
F: I just want to say it's about how to make it easier — because even something like DCIS has been a pretty complex piece, even though it's a reasonably simple case. An individual contributor probably wants to do something in a month or two that they can add and contribute, so I think what we really need is for people to be able to start adding things to it.
E: Like I remember, a couple of meetings ago we did talk about that: okay, we are using these workloads — the core benchmarks designed here, DCIS, and maybe a couple more use cases — which is good for us to do benchmarking and know Node.js core performance day-to-day. But I think we also briefly talked about how someone else, a user, can use this for their own benefit, for evaluating their machines and environment.
A: So those are the two things — did anyone want to cover that? That was one; the other one was talking about getting something at Node Summit. So yeah, I was just going to put in a comment that says: I don't see a discussion on the agenda for the summit yet, but Anil from Intel was suggesting — and I agree — it would be good to have one or two hours to discuss benchmarking.
I: You're asking me, MJ? I'm not really sure yet — I don't think so right now, but I'll still try to make it there. Yeah.
A: Good. And Gareth? Nope, unfortunately not — no, okay. I know it is a long way to go; Vancouver is probably even further for you guys than the East Coast. Oh yes — okay. One thing, just as an aside, that I did notice: it's actually a Canadian long weekend that the conference runs into. Kind of interesting. Anyway...