From YouTube: Node.js Benchmarking WG Meeting

B: Okay, so welcome to the Node.js benchmarking team meeting for September 10th, 2019. We'll sort of follow our agenda. We don't have too much on it, but we were just having a conversation on compression and some of the work that has been done, so I think we'll add that to the agenda. But first, are there any announcements that people want to share before we get started?

B: Nope. Okay, so let's get back to that. I'll just give a little context for people who are watching the recording: Lunamon mentioned that his team had been doing some work on some additional npm modules to do compression, and I mentioned that we've done some experimentation at IBM to see if we could modify core Node.js so you could specify a shared library which would provide an alternate zlib to do compression.

C: One thing: in the FaaS use case, I agree that having an indirection on every call into deflate is a penalty, but even in FaaS there should be a break-even point, because after all, you called the function in order to compress stuff. So if the actual payload code that does the actual compression is much faster, then at some point it offsets the overhead.

C: So there is a point at which you could have just used the regular, built-in zlib and gotten the same performance as you get by paying the overhead and then doing the faster zlib; past that point, it's always better to use the faster zlib. However, you quickly approach the point where the function is taking too long, so there may be a nonzero window between the break-even point and the end of the useful lifetime of the function.
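
(A sketch of the break-even arithmetic described above; the symbols o for per-call overhead, n for payload size, and r_slow, r_fast for the two compression rates are notation introduced here, not from the meeting:)

    n^* / r_{slow} = o + n^* / r_{fast}
    \quad\Rightarrow\quad
    n^* = o \cdot \frac{r_{slow} \, r_{fast}}{r_{fast} - r_{slow}}

Payloads larger than n^* come out ahead with the faster library despite the indirection; the "nonzero window" exists when n^* is smaller than the largest payload a function can process in its lifetime.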

B: I'm not personally so worried about whether it makes sense in all cases when you turn it on, because in some cases it won't, and then you just don't turn it on. My bigger worry was about integrating it into the codebase. What his team has now is a separate library that you can use or not use, but you have to change your code to use it, right?
B
What
I
wanted
to
see
if
we
could
do
was
like
integrate
it
directly
into
node,
so
on
the
command
line,
you
would
say,
use
this
shared
library
instead,
but
that's
gonna.
That
will
add
some
overhead
and
what's
important.
Is
that
we'd
be
able
to
show
that,
hopefully
that
that
actually,
oh,
that
the
overhead?
That's
there
is
small
enough
that
it
doesn't
matter
yeah.

B: What we experimented with was a command line that says: use this library. If you don't specify it, the default is the built-in one, and internal to Node itself there was basically a check on each of those zlib calls that says: is this enabled? If not, just call exactly what it would have called before.

C: And especially the decision overhead: if it's as simple as a single decision, that's manageable, because at that point it's pretty cut-and-dried. You pay only for the decision, and the fact that there's an indirection after the decision has already been made is part of the performance of the library. You can just bundle the price of the indirection in with the performance of the actual compression.

B: It's basically runtime: it's something like --zlib-library=<path>, and it's a shared library. If you specify that, it will dynamically load it and look up all the pointers you need to do the indirections, to do the forwarding, basically.
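
(A minimal sketch of the loading step being described, in C, using the POSIX dlopen/dlsym APIs; the flag name above, the g_deflate hook, and load_alternate_zlib are illustrative, not Node's actual internals:)

    #include <dlfcn.h>
    #include <stdio.h>

    /* Pointer Node core would consult before calling the built-in deflate;
     * stays NULL unless an alternate library was requested. */
    static int (*g_deflate)(void *strm, int flush) = NULL;

    /* Called once at startup when the hypothetical --zlib-library=<path>
     * flag is present. Returns 0 on success, -1 on failure. */
    static int load_alternate_zlib(const char *path) {
        void *handle = dlopen(path, RTLD_NOW | RTLD_LOCAL);
        if (handle == NULL) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return -1;
        }
        /* Look up each entry point we intend to forward to. */
        g_deflate = (int (*)(void *, int))dlsym(handle, "deflate");
        if (g_deflate == NULL) {
            fprintf(stderr, "dlsym failed: %s\n", dlerror());
            return -1;
        }
        return 0;
    }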

A: Another thing to consider: performance is definitely one thing, but another thing to consider is security, because this dynamically links with some library. How do you make sure that it is doing the right thing and not something else? Because now you can link with any library, as long as the API is there.

B: Then people could basically add the library without the person who's running the code knowing, and it would run and be used without them knowing. I think specifying the library path on the command line addresses that, in that if I can change your command line, I can run pretty much any code I want anyway.

A: Who provides it to you? Think about a FaaS environment: when you upload the function, you normally specify, for example, the JavaScript code; you say this is the version of Node I want to use, and this is my function to run. Now you have an additional component to upload: where my library is going to be. Who provides that library?

C: Certain providers allow you to customize the environment in which your function runs ahead of time. You can have an execution environment, and you can specify: I want this, and this, and this. But I'm not sure how much granularity you have over that. You know, another thing I just thought about, since we're talking about a dynamic library: we could potentially...

C: We could potentially render this as an add-on, because add-ons are dynamic libraries too. It's just: here's an add-on, it's got some symbols, here's the symbol table, work with it. And then, instead of having a command line, we would be using the require mechanism to load the library, but otherwise it would be the same: you load the library and then you pass off the function pointers to Node core, and you're basically telling Node core [to use them].
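
(To make the add-on variant concrete: a minimal C sketch of what such an add-on might look like. node_set_zlib_impl is a hypothetical registration hook invented for illustration; Node core exposes no such API today, and my_deflate stands in for the alternate implementation:)

    #include <node_api.h>

    /* The replacement implementation this add-on provides. */
    static int my_deflate(void *strm, int flush) {
        /* ... call into the faster / hardware-accelerated library ... */
        return 0;
    }

    /* Hypothetical hook standing in for "pass off the function pointers
     * to Node core"; not a real Node API. */
    extern void node_set_zlib_impl(int (*deflate_fn)(void *, int));

    static napi_value Init(napi_env env, napi_value exports) {
        node_set_zlib_impl(my_deflate);
        return exports;
    }

    NAPI_MODULE(NODE_GYP_MODULE_NAME, Init)

An application would then opt in with an ordinary require of the add-on before using zlib, rather than with a command-line flag.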

B: That's true. I think when we looked at the other one... it'd be interesting to think this all through; I think you're right there. In our case we had an existing zlib library, and I think there are existing zlib libraries, so in some cases you just don't want to have to even write a wrapper. You just want to be able to...

C: The reason why I brought this up is that in the FaaS case, you may not have control over the Node.js command line, but the FaaS infrastructure may support loading native add-ons, because that's a fairly popular sort of thing to do with Node.js. In which case, without control of the command line, you could still tell Node.js to use this particular binary at runtime, but you would use a JavaScript interface to pass it, right?

A: Think about all the micro-benchmarks, all the inputs we have for compression: they're so small. None of them are realistic, right? They're, like, what, 10 bytes, 15 bytes, nothing over 1K bytes. They're just run n number of times, but the input itself is very, very small.

C: If you add 20 milliseconds to the startup time, but you double the throughput, from 10 megabytes per second to 20 megabytes per second, then after you've paid that price it may still be worth it, because the function can run for multiple seconds, and it can do a lot of compression in that time.
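
(Plugging these numbers into the break-even sketch above, with o = 20 ms, r_slow = 10 MB/s, r_fast = 20 MB/s:)

    n^* = 0.020 \cdot \frac{10 \cdot 20}{20 - 10} = 0.4\ \text{MB}

So on those assumed figures, any invocation that compresses more than roughly 400 KB already comes out ahead despite the extra startup cost.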

A: So my point was not that there are no use cases. My point is that there'd be an increase in the startup time of the Node runtime. When we think about the load performance, we are not thinking about the compression algorithm itself, right? We are talking about the Node.js runtime startup time, and the question would be: can we make it small enough that it is not impacting the startup time? I think we already, with the current or latest version...

C: But since we're talking about compression: if you look at the picture from the end user's perspective, they can compress, let's say, 200K in 20 milliseconds with the infrastructure as it is now, startup time and everything. You call the function, it's a cold start, you send it 200K, it takes 20 milliseconds to compress, and then you get it back, right? Now...

C: You know, it's still, from the end user's perspective: okay, I've sent 300K, it's taken just as long, and I got it back. Okay, it's one and a half times faster. I don't care that the startup time is now a greater proportion of the total time spent running the function; it's still, you know, faster from their perspective. Yes.

C: If you do this as an add-on, then you're not slowing down the case where you're not using the compression, because it's an add-on: only those functions which require it will actually be slowed down by it. Similarly, with Michael's interface, only those functions which have the command-line parameter will be slowed down by it.

A: So what I was saying is: who's going to call it? In a typical application, the user application calls require on Node core. If you take away the zlib and put in an external library, as we were talking about, then someone needs to do a require, right? Maybe it is dependent on some command-line options, but there is that call which is going to check for the command-line options.

C: Yeah, almost certainly. I mean, think about Node's zlib. Let's say, for simplicity's sake, you have a global static pointer to deflate. Normally it's set to null. So you basically say: if global_static_deflate == NULL, then deflate(), right? And that's the flag-check price you're paying no matter what.

C: But if it's not null, then you call (*global_static_deflate)(), so then you're paying the price of loading the library and indirecting, because it's not statically linked and all that. But that's what it looks like from the deflate perspective: you're only paying that price if the pointer is not null; otherwise you're paying the really tiny price of comparing something to zero. It's like one more instruction, right? Compare that global...

C: Now, how that global static value becomes non-null, we have several design choices. Either it becomes non-null by virtue of the fact that something was specified on the command line, and the price of loading the library was paid before the first call to deflate was ever made; or a require was present in the definition of the function, which caused us to set that value to something non-null via some as-yet-undefined Node.js API. But either way, the result is the same.

C: So we can even wrap that check in an if-likely, and then it's up to those who are heavy users of compression to decide whether paying the price of the additional startup time and of the additional indirection during each call is worth it for them. And, you know, we can run the experiment to decide whether, after we add all this, there are any use cases left for whom it makes sense.
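
(The shape of the check being described, as a C sketch; the names global_static_deflate, node_deflate, and the simplified deflate signature are illustrative, not Node's actual code:)

    /* Stand-in for the built-in, statically linked zlib entry point. */
    extern int deflate(void *strm, int flush);

    /* NULL by default; set at startup or by the add-on if an alternate
     * zlib was requested. */
    static int (*global_static_deflate)(void *strm, int flush) = NULL;

    /* GCC/Clang branch hint: the default (built-in) path is the likely one. */
    #define LIKELY(x) __builtin_expect(!!(x), 1)

    int node_deflate(void *strm, int flush) {
        if (LIKELY(global_static_deflate == NULL))
            return deflate(strm, flush);            /* built-in zlib */
        return global_static_deflate(strm, flush);  /* loaded alternate */
    }

Everyone pays the one-instruction compare; only configurations that actually load an alternate library pay the indirection, which can then be bundled into that library's measured performance.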

C: Okay, instead of the 30-millisecond startup time within those 300 milliseconds, it's going to be a 50-millisecond startup time, plus 250 milliseconds of useful work. But that's still 250 milliseconds of hardware-accelerated work, whereas before you had 270 milliseconds of software work, so you can still achieve at least a couple of x there. I can't imagine not, but anyway, we'd have to run the experiment. Yeah.

B: So basically, yeah, I'll dig up what we had. I don't know if it'll still apply; it probably won't apply to Node as it is now, but I can dig that up. And then the real work, I think, is doing the experiments to show that it's not a big overhead, and getting it landed and stuff, because I think as long as we can show the case where we truly don't believe the overhead is going to be a problem...

C: Yeah, for sure; HTTP is a huge candidate for zlib. And I think the bigger picture here is also that more and more of these common infrastructure components, like compression and hashing and these kinds of things, are becoming increasingly hardware-accelerated, and some of the paradigms that we've been using so far, which are pretty much to run these things synchronously and to use a thread...

C: The concept whereby you have streams that run in parallel has to go much deeper into the stack than it has so far. So far, streaming has been an abstraction imposed upon what is otherwise a synchronous algorithm. But now the algorithm itself becomes asynchronous and able to accept multiple streams implicitly, without any sort of abstraction on top.

C: So this has to be woven deeper and deeper into the stack. And add to this the fact that there are different kinds of hardware: you can do compression with an FPGA, you can do compression with [other accelerators], and all of these have their own libraries, some of which you may only get the benefit from if you expose the streaming interface rather than the synchronous interface. So sooner or later we'll need to accommodate this increasingly heterogeneous computing landscape.

A: And one good thing about explicitly mentioning it that way is: even if you take the same code into the FaaS world, you can then specify, when you're selecting either a machine or infrastructure, the type of hardware to run this function on, because I'm using this specific npm-installed package which takes advantage of a certain type of hardware.

B: That sounds good. I think we've got... I'll dig up the code, and then it sounds like, Gabriel, you're interested. Because where we stalled out, which is how this all relates back to this group, is that we sort of stalled out on doing the benchmarking that would help us show that it was okay to land it.

B: I think as long as you show that case, and then you say there is hardware that can be beneficial to our customers at particular times, I think people aren't going to say, "oh well, don't do that." You know, "convince me you really can have a benefit": that's acceptable. And internally, it's very good to be able to say: here's a tangible use case that's accelerated by four times. So if we couldn't...

C: Because otherwise... this is an exhaustive list as of master, so people should see their benchmarks here, if they ever use them. There's even one that isn't yet on master: it's the one for the weak refs, where is it... napi, because, you know, I'm working on weak references as cleanup hooks, so I even have a benchmark here that's not really a benchmark yet. But anyway, I can remove that line. Okay, so then I can turn this into a SurveyMonkey survey and go from there.

C: The other question I have is about the voting system we should use. I mean, I know this is super splitting hairs, but should I just make a set of checkboxes, and then people check the ones they like, or should I do a score, like from one to ten or whatever? Because I think you can do that, or you can do a rating: disagree, somewhat agree, all that stuff.

A: Also, on top of that, I was thinking: looking at that list, can we narrow down which ones are just a JavaScript function? Where you do a require, but then mostly it is just running in V8; you are not really using any of the Node.js functionality, and the performance of the JavaScript is easy to gauge from how V8 performs in the newer versions. But there are certain APIs, for example zlib or crypto, which depend on certain libraries which V8 doesn't really have.

B: Specific ES ones may have been added because at some point, when we float patches, we don't necessarily have an exact copy of V8, so I think those may have gone in because we actually did break things at some point on some key things.

B: Okay, then, separately, we could have a second question, which is: which of these do you think are most important to test? Although I guess the problem is that when you change code... maybe that's not a good question, because obviously, if you change code related to URLs, you should run the URL ones, right?

C: Yeah, because I guess they're probably very important if they've been used; they have some importance, so that sort of speaks to the second question. And I mean, the first question is a special case of the third question, right? Do you use them, and for what purpose: writing or testing a PR. So, yeah, I guess the third question seems to be the one that captures the most.