From YouTube: Node.js Benchmarking WG Meeting - Aug 28 2018
A: Looking at the standard agenda, things tagged with nodejs-benchmarking: investigating the time taken by a number of the benchmarks in core. Gareth, who's working on that, isn't here today, so we'll skip an update on that. The same actually goes for the next two, which are the require cache perf being lower. Although on that front I can give a bit of an update: I know Adam's been trying to recreate that and hasn't been able to recreate it on a local machine, and I've been trying to get him access to the actual benchmarking machine itself, that is, SSH keys and so forth, but for some reason I don't think he was able to log in yet, so I still need to catch up with him to see if we can get him access there. And then the next one on his list is to move on to the Ghost benchmarking workload.
B: The work that we want to do: basically, we've got a software stack, Onload, which is essentially a kernel-bypass stack that is complementary to the actual TCP stack that exists in the Linux kernel. We are able to show massive improvements for web-style, request/response type workloads, and have done so in the past for HFT-style applications.

We are now interested in applying the same sorts of techniques to Node.js, and as part of that effort we thought we'd reach out to the Node.js group about the benchmarking that you guys are doing and see if there's any way we can contribute towards it. So we are interested in helping to do some of this work; what we were interested in getting from you guys is guidance on that.
C: I'm Oliver, a summer intern at Solarflare; I'm a second-year university student at Cambridge. First of all, I've been getting set up to actually work on Solarflare's network stack, which involves hacking on and rewriting bits of it, but that will happen more in the near future. So far I've done plenty of simple Apache Bench benchmarks of a very simple Node.js program. That's about it.
B: Definitely, from our perspective, having this integrated into libuv would be ideal. I think we don't have a lot of context about why libuv is actually not calling out to the C library; it basically makes the syscall itself, straight out of libuv. I expect the reasons probably have to do with wanting to be sure about the code path all the way to the kernel.
B: We are quite happy to start talking to the libuv guys about whether our patches are something they'd be willing to accept. If not, then we can move this functionality into Onload. So there are two pieces to the problem: one is what we do in libuv; the other is that we can do a patch to Onload that we would run as part of the application profile against Node.js.
A: Yeah, I think at least open an issue saying: hey, we noticed that you've chosen this style versus this other style, and the choice that was made means we aren't able to do X. Was that a conscious decision? If we want to put in the work to change it, would it make sense? I don't know the answer, but it's always worth asking, absolutely.
B: What we meant by that, our point, was basically that we saw the kinds of benchmarks that you guys have, and they're very representative of the kinds of things we'd want to do. From reading around, Node.js is used in several different styles of operation; could we help fill in some of the gaps in the types of benchmarks, right?
A: Right, so basically start with this one, and then figure out which ones you think would be a good candidate to start with, in terms of writing a new benchmark or finding a benchmark: finding one, tweaking one, or writing a new one from scratch. Yeah.
B: We could do with a little bit of help from you guys there. The ones that are interesting to us among those that are missing are obviously service-oriented architectures and the single-page-application ones. Do you have any feel for the majority use case, or, if you were to break Node.js usage down by percentage of use case, which is the biggest one that's missing right now?
B: We'll take a look, because if there's something like that that's a kind of common way of doing things, we would be happy to use it. The other thing that maybe interests us, apart from the single-page one: some of these that are missing I think we'll probably be able to contribute to while Oliver's here, and I think maybe single page. What about microservice-based applications? Is Node.js very popular there?
A: That'd be one actually to look at, to see if that newer version is something we want to get running, or if there are some other good benchmarks on the microservices front. Yeah, microservices and single page, those are among the very common use cases; one of the very common use cases is microservices.
A: There are some good people in the community who might step up to do that. Basically, what I'd suggest is that you open an issue in the benchmarking repo that says: hey, here's the background we've talked about; we'd like to contribute both the service-oriented-architecture and the generating-dynamic-content...
A: ...the single-page-application use cases. We can do some research, but it would be good if we could just talk to a few people first. There are a few people, from NearForm and NodeSource for example, we could ask: would you guys have any time to give a bit of a brain dump? And at the same time, you guys can go out there and do some research.
A: And I think, like Eric said, as you start to develop this we'll try to get as much feedback and input along the way as possible. So even if you just do some research and say, hey, this looks like a good candidate, you should post that in the issue and bring people from the community in to comment, and I can help facilitate getting people to come and say "oh yeah, that looks like something that resonates with us as a benchmark", or not, and so forth.

B: That sounds great.
A: The first thing that they built in that was kind of a web one; it falls into the... and actually we should probably update that, because we didn't include it in this table, but as far as I understand it still kind of fits into the "generating/serving dynamic web page content" category.

B: Got it.

A: But it's, as they always said, a framework to which additional use cases can be added.
B: That sounds great. I think that kind of covers the starting points we had. Like I said, we're mostly just looking for guidance and some interaction with the community, so that we know we're doing the right thing. We would like to be able to pass our code back to you guys as something you can integrate into this.

A: Yeah.
A: In here, if you can see the screen I'm sharing, there's a directory for each one, right? So, for example, node-dc-eis: it has the pieces that you need to run that script. The scripts may do things like grab an npm package, or grab a binary from somewhere; ideally we can check everything into the repo here. This is on our benchmarking machine: when the nightly runs happen, they basically clone this repo, and then the jobs run a set of scripts one after another.
A: So hopefully, for each of the benchmarks you want to add, what you end up with is a PR that adds one or more files under this experimental-benchmarks directory. Then, to go along with that, under the benchmarking directory there's basically a JSON config file which says: OK, I want to generate a chart. This one basically says: OK, for benchmark 8, and that's the key into the database, I want to generate a chart. It uses...
A: ...the template, which has some common stuff, but you can set the name, the units, and then the streams. Each stream: we run the benchmarks for versions 6, 8, and 10 of Node right now, plus master and canary. So this is which streams you want to show up on the graph; say you're only running it for the latest ones, then only those show up. That will automatically generate a chart that we can show on benchmarking.nodejs.org.
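As a rough illustration only, such a per-benchmark chart configuration might look something like the JSON below; the field names and values are guesses reconstructed from the description above (a database key, a shared template, name, units, and one stream per Node.js version), not the repo's actual schema.

```json
{
  "benchmark": 8,
  "template": "default-chart-template",
  "name": "experimental benchmark throughput",
  "units": "requests/sec",
  "streams": ["v6.x", "v8.x", "v10.x", "master", "canary"]
}
```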
A: Basically, you come up with a PR that says: here are the scripts you need to run. You add those, and you include the configuration that says: and here's how I want the chart for it configured. Then we can update the main job that runs every night to include another step to run that benchmark. Generally we don't want them to be super long, so five, ten, twenty minutes at most.
A: Yes, we actually pin the different pieces to different cores. By default Node is single-threaded for the most part anyway, depending on what you're doing. So we'll pin Node to a couple of cores; if you're using a back-end database, for example, that goes to one of the cores, and then the load generator goes to another one of the cores. But that's...
A: ...part of the same thing. If I go back here and look at the benchmarks, you can look at these and see what the examples look like. The run script for Acme Air here, for example, is saying: here's the affinity we're going to use for each of those different components.

B: OK, so it pins the CPUs and stuff.
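On Linux, that per-component pinning is typically done with the `taskset` utility from util-linux; below is a minimal sketch in which the component commands and core numbers are placeholders, not the repo's actual run scripts.

```shell
# Each component of a benchmark run can be pinned to its own cores, e.g.:
#
#   taskset -c 0,1 node app.js &        # Node.js app on cores 0 and 1
#   taskset -c 2   <database> &         # back-end database on core 2
#   taskset -c 3   <load generator> &   # load generator on core 3
#
# taskset simply restricts the CPU affinity of the command it launches:
taskset -c 0 echo pinned
```

Restricting each component to disjoint core sets keeps the load generator and the database from stealing cycles from the (mostly single-threaded) Node process being measured.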
B
Obviously,
as
a
function
of
the
company
that
we
work
for,
we
would
be
interested
in
taking
this
across
machines
and
also
you
know,
kind
of
doing
experiments.
I,
don't
know
if
you'll
get
to
it
in
the
time
that
all
of
you
are
but
definitely
want
to
get
to
points
where
you're
trying
to
saturate
the
link
or
you're.
Looking
at
things
like
TL
and
Layton
sees
across
a
network,
yep
are
those
sorts
of
patches
anything
that
you'd
be
interested
in
in
you
know,
incorporating.
A: Yeah, good. And like I said, open the issue, put in all the details, and really keep it updated, because that's a good way to remind people to comment. I'll do my best to jump in, and if you have specific questions and stuff that people aren't jumping in to answer, just ping me and I'll try to think of people I can reach out to and ask to take a look at that kind of stuff.
B: So the point we were going to make was that I think it's very similar in spirit; you could almost think of it like a microservice or a REST API endpoint. What we'd probably do is try to convert some of that over, integrate all of it, and build on top of the bigger microbenchmark stuff. So we'll throw all of that into the issue as well.

A: Yes, you can reuse some of that code.

B: Yeah.
A: Basically, if you can outline your ideas and what might be a good starting point, then we can try to get as broad an input from people as possible, to say: hey, does this seem reasonable? The other thing that's interesting, too, is if you have any data showing that Node.js is sensitive in particular areas...

B: Yes.
A: ...and those show up in the benchmarks, that's interesting as well. Like, if you find that, I don't know, we tweaked the HTTP implementation this way and it now halves the speed, or it increases the speed, and that's shown in the benchmark, that's a very good way to show that it's a decent benchmark for covering that particular case.

B: Right, yeah.
A: OK, so unless you have any other questions, I think we may call that the meeting for this week, and we'll look forward to interacting through GitHub. And of course we'll see you again; I think it's three weeks from now, and the next one is scheduled on the calendar.

B: OK, that sounds great. Thank you.

A: Did I point you at the calendar?

B: Yes, you did.

A: OK, great, so you can find when the next one is. Yeah.