From YouTube: Node.js Community Benchmarking WG meeting
A: So far we have in attendance myself, Michael Dawson; Gareth Ellis; [inaudible]; Anil Kumar; Nitesh [inaudible], which I'm sure I just got wrong; Jimmy Thompson; and I think that's it. In terms of our agenda, we have our standard agenda. I don't know if anybody has any items to add to the agenda before we get started.
A: Okay, if there are no other announcements, then let's move on. So the first issue is #159, which is "add Ghost.js workload to benchmarking". I'm guessing... was it you asking about that one, or who was asking about that? Okay, so I see that you've been doing a bunch of work on that front. Do you want to bring us up to speed and frame the discussion?
E: But am I right that it just works out of the box? Then all we need is some sort of wrapper to be able to drive some sort of workload and get some numbers back out of it. Yeah, so, I mean: is the Ghost team open to pull requests? Is the latest version of Ghost working on the latest version of Node?
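A wrapper of the sort described here, driving a workload and pulling a number back out, could be sketched as below. The Ghost URL, request counts, and the use of ApacheBench (`ab`) are illustrative assumptions, not details from the meeting.

```shell
#!/bin/bash
# Sketch of a benchmark driver: drive a workload against a running Ghost
# instance and pull one summary number back out.

GHOST_URL="${GHOST_URL:-http://127.0.0.1:2368/}"   # Ghost's default port
REQUESTS="${REQUESTS:-1000}"
CONCURRENCY="${CONCURRENCY:-8}"

# Pull the requests-per-second figure out of ApacheBench's report.
reqs_per_sec() {
    awk '/Requests per second/ { print $4 }'
}

# Drive the workload and print a single summary number.
run_benchmark() {
    ab -n "$REQUESTS" -c "$CONCURRENCY" "$GHOST_URL" | reqs_per_sec
}

# Parsing example against a canned line of ab output:
echo "Requests per second:    842.31 [#/sec] (mean)" | reqs_per_sec   # prints 842.31
```

Anything along these lines could emit a number per nightly run for trend tracking; the parsing would need adjusting for whichever load generator is actually chosen.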
E: But we could potentially find ourselves a little way down the road with Ghost stopping working because of some other change, and then we'd be stuck patching an old version of Ghost to keep it working on the latest version of Node. Whereas if it's already working upstream, we, or the other volunteers, could make sure, as changes are checked into Ghost, that it keeps working on the latest Node versions.
A: Right.
F: Is the idea then that we would be benchmarking the latest version of Ghost as of whenever you run the benchmark? Or that we would pick a particular version that is currently the latest, check out that hash as part of the benchmark, use that, and then later on update again? If that makes sense.
A: I think we're going to have the challenge of upgrading on an ongoing basis; at some point, for any one of these benchmarks, it's going to make sense to update. The approach that comes to my mind is that maybe we run two different versions of it concurrently for a while, until we've built up some history, and then drop off the old one once we're comfortable we've got enough history with the new one.
A: The key thing here, I think, is: is it worth getting the version of Ghost that that service has working now in, and then maybe six months from now updating? I like the idea of, if you can, going back and working with them to get the newer version working on the latest Node.js; that would make sense. But I guess the question at this point is: do we wait, or do we get an older version going and start capturing some data, so we have some coverage there?
B: Does that make... no, it totally makes sense. But I have created the blog pages using their user interface, and I have some idea of how the content of the blog is stored: some of it is in the database, and some of it is in the content directory.
A: Okay, yeah, because I think, at least in my mind, that would be the ideal: you have the zip, you know what that is, and then the things that you added are separate. A separate patch, or separate files, even if they're files that you need to copy in or something like that; that would make it easy, maybe even easier, to move up to new versions and stuff.
B: Yeah, the other thing is, when they moved from 0.11 to 1.0, a lot of things changed, so they are not doing database migration at all. Right, okay. So they basically wrote a kind of package, an app in Node.js that runs inside Ghost, to move the blogs from one to the other. So there are all these issues, right.
A: ...which had the zip that you can download, plus whatever other files you need to add, and then a script file that takes those, sort of builds the tree that you need to run, and then starts the run. That would be what we could then easily add in, to run nightly and capture the results.
A: Yes, but it's more about not having to depend on pre-configuration of the machine. Ideally, you should be able to just check out the repo, run the script, and you could do it anywhere. So, for example, if there's a problem and somebody wants to investigate it, they can more easily just check it out, run it, and get the same results. Well, not the same results, because it'd be a different machine, but yeah.
A: So at least, if there's an equivalent to that one on Windows, where you can grab a zip, unzip it, and then start it up that way, that would be the other thing. As long as the license is compatible, we could store the zip in that same directory, and then the master shell script could do all the work to sort of extract it.
F: Is it worth considering something like Docker for this? So maybe we don't include the MySQL binaries in the repo, but you include a Dockerfile that refers to them, and that just goes and fetches a particular image, which is a particular version. Maybe it's one that's been authored specifically for this, that has the environment set up however is appropriate.
A: Yeah, I think that would probably be a good way to go. It sounds like there's already a standard MySQL Docker image, so we wouldn't really even need to do much other than just run that container. I guess we'd also have to figure out how you pin that container itself to a subset of the CPUs, but that would be a good pattern to figure out, and then we could reuse it in other ones as well, and other people could easily recreate it.
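Pinning a container to a subset of CPUs, as wondered about here, is something `docker run` supports directly via `--cpuset-cpus`; a sketch, where the CPU numbers, image tag, and names are arbitrary placeholders:

```shell
#!/bin/bash
# Sketch: confine the database container to CPUs 0-1 and the Node.js process
# under test to CPUs 2-3, so the two don't interfere with each other.

start_db() {
    docker run -d --name bench-mysql \
        --cpuset-cpus="0,1" \
        -e MYSQL_ROOT_PASSWORD=benchmark \
        mysql:5.7
}

start_app() {
    # taskset pins an ordinary process the same way --cpuset-cpus pins a container.
    taskset -c 2,3 node index.js
}
```

Keeping the load generator on yet another set of CPUs would follow the same pattern.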
A: This is the issue that we opened about user feedback. I do know that that has been discussed over in the community committee; there is a group forming to get user feedback. I don't think there's been any concrete action on getting it out there yet, but I hope that will happen in the next little while. I don't think there's too much else to say about that. Was there anything else people wanted to talk about this week?
E: So I forget what it's... yeah, anyway. I added an extra property to the launch script, so you can also specify how many runs you want to do. That was something I missed out to start with, because we thought we were going to use just the standard recommended one. So you just have to make sure that people don't go sort of crazy with it, right...
E: ...going in and queueing up, like, three days of one particular benchmark or something. I'll put a note in the documentation to point that out as well. But otherwise, it seems like there have been a number of runs done using the scripts, and I've not heard any real complaints. Okay, hopefully that means it's working and not just people giving up. Oh yeah.
A: So you could just write a blog post, or, if you could fit it into a tweet, you could tweet it out, and then let us all know and we'll retweet it. And I guess we ought to maybe have an issue in the node repo that says: hey, here's this new thing you can use. Okay.
A: ...agenda, but at least it's visible there. Okay, that's good. So, remember the...
E: ...the job as well that we were just working on, to be able to do some compares: you could just include the latest v6 tag and master, and then run that through the core benchmark system in Jenkins. Obviously you shouldn't do the entire directory, but if, for example, you say "oh, I'm going to do the HTTP ones", or something like that, you can go and drop those in to get some numbers at least.
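A sketch of dropping a subset into the core benchmark tooling, assuming two locally built binaries; the paths are placeholders, and `compare.js`/`compare.R` are the scripts that ship in Node core's `benchmark/` directory:

```shell
#!/bin/bash
# Sketch: compare two Node builds on just the http benchmarks rather than
# the whole directory, using Node core's compare scripts.

compare_http() {
    node benchmark/compare.js \
        --old ./node-v6/node \
        --new ./node-master/node \
        http > compare-http.csv
    # compare.R summarizes the CSV, including confidence intervals.
    Rscript benchmark/compare.R < compare-http.csv
}
```

The positional `http` argument is what limits the run to one benchmark category instead of the entire directory.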
A: Perf with Node.js. This one is interesting in that it's maybe not directly related to benchmarking per se, but Benedikt had mentioned that the flame graph tooling, the perf tool that's used for that, because of the switch to the interpreter, now gives you interpreter information as opposed to what you really want.
A: Right. So perf, if you know perf: they created a plugin for perf that knew how to understand what was going on in the compiled code. And if you look at these two graphs, the main difference is that the bottom one actually tells you "I'm running this particular JavaScript thing", like "I'm running load module", for example. In the new version, because of the interpreter, not everything is JITted: you start out running in an interpreter, and then later on it JITs the things that are hot.
A: In the interpreter, it just tells you... you can see those entries that say things like "bytecode handler" or "call property", so it's actually telling you what's going on in the bytecode handler, not what's going on in your JavaScript. And that's because perf is looking at compiled code, but the compiled code you're running is the interpreter, not the compiled code for a JavaScript function, and therefore it doesn't know what to tell you. So that's the issue there.
A: And I expect, I mean, there was a perf plugin for Java as well, and I suspect something along the same lines was going on. So the question he was having is: do people really care about this? And I guess, if yes, then we'll have to figure out whether it's a fix to perf, or whether there's some other, alternate way to get the data that matters. I think there was some discussion, sort of bleeding into the [inaudible]; there were some discussions with, like, Netflix.
D: Go ahead? Okay. So, maybe you remember, we talked last time that I ported [inaudible] to the new HTTP/2 API, using the experimental branch and Express, and our latest Node. And I see some major performance difference between HTTP/1 versus HTTP/2, and I'm talking to James and Matteo about the differences.
D: Yeah, the difference is that it won't run as it is running now; the reason being the Python client doesn't have HTTP/2 support, so I have to use Apache Bench just to understand all those numbers. If that is good enough from a tracking perspective, I can go ahead and create the PR, yeah.
A: Well, if there were specific questions about performance... but I don't think there were any specific ones; like, we didn't have a bunch of people get together and talk about benchmarking or the performance side of things. There was a fair amount of discussion on the diagnostics side, so I think there's interest in pushing an effort on that front.
A: Yeah, performance-wise, I'm just trying to think of anything that pops up, and nothing comes to mind. In terms of specific things discussed there, or even in the presentations: there was one presentation on footprint, like memory footprint for modules, but that's sort of not directly... I guess it is related to some of the benchmarks, like memory use, that we do.