From YouTube: Node.js Benchmarking WG Meeting - April 23 2018
A: If not... I guess the one announcement I have is that I did just flip over the charts, the benchmarking charts, from Node 4 to Node 10, in advance of the release planned for tomorrow. And I dropped Node 4 at the same time, since it's going end of life. Okay.
So the first issue on the agenda is the footprint increase in AcmeAir after load. So I think, like we've discussed in the past, we had figured out roughly which commit it was, and there's a fix coming; it just hasn't quite made it in yet, but it shouldn't be too much longer. I guess the next one is 202, you're looking at require-cache perf being much lower, yeah.
A: I think the discussion was around, like, does it actually have to be installed, or can it just be something that we pull in in the benchmarking repo?

B: The Docker container, you mean? Okay, so yeah, we were talking about Docker as opposed to installing it directly, yeah.
B: Of the two ways we were thinking to solve that, I believe Sattvic, who, as I said, is no longer on this team, had tried it, I think, with a Docker container for MySQL, and I'll verify whether that is true. I think that's the route we will take: as part of a run, it will download the MySQL Docker container, then it will start that, and then start the Ghost application, and then...
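A minimal sketch of the run flow just described, assuming a stock MySQL image; the image tag, credentials, and ports are illustrative placeholders, not the group's actual configuration:

```shell
# Hypothetical benchmark run flow: pull and start a MySQL container,
# then launch the Ghost application against it. Tag, credentials, and
# ports are illustrative assumptions.
docker pull mysql:5.7
docker run -d --name bench-mysql \
  -e MYSQL_ROOT_PASSWORD=benchpass \
  -e MYSQL_DATABASE=ghost \
  -p 3306:3306 \
  mysql:5.7
# ...wait for MySQL to accept connections, then start Ghost and drive
# the load client against it.
```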
A: And then we'll probably have to, like, at some point, install Docker and just make sure we, you know, watch the charts following that to see if there are any issues. Okay, that was what was tagged on the agenda. I'll also just mention that Uttam caught that we were not running; we were getting zero results for AcmeAir for a few days, right.
A: It turned out, I don't know why or how, but it had something to do with domain name resolution. It was like... you know, I tracked it down to the client not being able to, you know, hit the right host, and I added an entry in the local hosts file, like /etc/hosts, and that seems to have fixed it, but I have no idea what changed to make it, you know, stop working from before.
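The workaround described is a one-line hosts-file pin; the hostname and address below are made-up placeholders, since the actual host wasn't named in the meeting:

```shell
# Hypothetical /etc/hosts pin for a benchmark client whose DNS lookups
# started failing. Hostname and IP are illustrative placeholders.
echo "10.0.0.12  acmeair-backend.example" | sudo tee -a /etc/hosts
getent hosts acmeair-backend.example   # verify the entry now resolves
```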
B: Okay, so there was a... I saw yesterday, or maybe last week, that an issue from before has been kind of duplicated on the master branch, yeah. A lot of internal micro benchmarks are failing, at least the ones we are running internally here on Node 10, and as a benchmarking group we are not really doing anything on that front, right? We don't have any results there.
B: So I was thinking about that. Didn't you tell me we run subsets of those, like some buffer APIs? The buffer API is an important one; it clearly impacts performance for anything run on Node. So we do track those, and I believe the Myles Borins stream, which does the canary build... I guess they probably track some of those as part of those internal runs. So should we kind of... so...
A: As far as Canary in the Goldmine goes, that will not do anything performance-related. Okay, so, Canary in the Goldmine: what it does is it loads npm modules and runs their tests, so it's meant to basically say, you know, has the recent change broken, you know, the ecosystem in some way. But it's not performance-related; it's functionality-related. Okay.
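For reference, Canary in the Goldmine is the citgm tool from the nodejs GitHub org; a typical invocation looks like the sketch below (the module name is just an example, and a real run needs network access):

```shell
# CITGM runs an npm module's own test suite against the locally
# installed node, which is what the canary does across a curated list.
npm install -g citgm
citgm express   # example module; this reports pass/fail, not performance
```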
B: So is there a way we can at least... maybe, let's say, it's just a suggestion, yeah... can we have a link pointing to the results there, which are open, I think, on the 01.org site? I just have that in mind because they do nightly builds, nightly testing. Would that be useful or not?
A: We've just never sort of come to a set that said: this smaller subset makes sense, because, you know, it's more important and it's directly related. So if you guys have a recommendation that says, yeah, this set (maybe it's just buffer, or maybe it's buffer plus some others) is the most important, then I think we could start looking at running those. Okay.
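If the buffer set were chosen, Node core's own benchmark runner can already target just that directory; the commands below assume a checkout of the nodejs/node repository, and the binary paths are illustrative:

```shell
# Run only the buffer microbenchmarks from a nodejs/node checkout.
node benchmark/run.js buffers

# Or compare two builds on just that subset with the bundled compare
# tool; ./node-old and ./node-new are illustrative paths to binaries.
node benchmark/compare.js --old ./node-old --new ./node-new buffers > compare.csv
```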
B: And on the global, the bigger issue, what I'm thinking is: how do we... because there are just so many benchmarks, and all of these are important for performance. How do we, as the whole Node.js ecosystem... is there a way we can do something about it? Though it's a really large problem, for sure, yeah.
B: So one thing is, when we run internally, we do a lot of multiple iterations so that we get statistical significance, yeah, a confidence number. Is that really important for the nightly testing, as long as it gives us a trend about a particular workload, or the whole of an API, or anything of that sort? Would that be good enough, really?
A: Yeah, it depends on how variable it is, and then it still comes down to how many different numbers you want to show, right. Okay, like... the thought I'd always had in my head, but never had time to actually do anything about, is, you know: can we add them together some way so that you can get something smaller? Like, if you added all the buffer ones together and got some, you know, one number for that, would that be useful at all to track?
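One simple way to fold a group of benchmark results into a single trackable number, as floated here, is a geometric mean; the input format below (name, ops/sec) and the scores are illustrative assumptions, not the runner's real output:

```shell
# Collapse several per-benchmark scores into one geometric-mean number.
# Scores are made-up illustrative values.
cat > buffer-results.txt <<'EOF'
buffers/buffer-creation 120000
buffers/buffer-compare 80000
buffers/buffer-concat 50000
EOF

# Geometric mean = exp(mean of logs); it tolerates benchmarks with very
# different absolute scales better than an arithmetic mean.
awk '{ s += log($2); n++ } END { printf "%.0f\n", exp(s / n) }' buffer-results.txt
```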
E: Right, well, I mean, yeah. I think there are two problems with this. One is that even if you only run a small number of iterations for each of the benchmarks, it still is... I mean, I was just trying to find it; I know it was Andreas Madsen who had put it together. He'd gone and collected how long each of the benchmarks takes to run, yeah, and it was still...
E: It was still a long time, yeah. And then the other problem is that if we were to do that, we'd need to either keep a static copy of the benchmarks, because I know that there are changes going into Node all the time, even just straight into the benchmarks in core. So we could risk, yeah... if somebody makes a slight change to one of the benchmarks, we could risk then having sort of false positives. Obviously it shouldn't be too difficult to spot, because...
A: The answer is: if you can come up with a good recommendation of how to do something on that front, we're all very interested. Okay? We just haven't had a good enough idea, or enough time, to come up with something, 'cause, yeah, I mean, there's a whole bunch that are quite interesting there. It's just: how do we leverage them? That's the question.
B: So, as I said, last time, I think at one of the conferences, I talked about having DPDK installed... I mean, working with Node.js, and that was just a POC. But this time I've been working on it for the last six, more than six, months, and I've kind of got everything working. So I integrated DPDK, which improves the networking stack quite a bit, and...
B: It's the Data Plane Development Kit, really. So Intel developed this some time ago and open-sourced the project, okay, and in a way it kind of exposes the network card and the driver, the hardware, to the user space, and kind of bypasses the kernel in certain instances. Okay. And it scales very well; when I did my POC I found, you know, performance up to 4x in small microservices-type applications, and I see improvement even in a big Node.js application which hits the database and stuff.
B: So right now it looks like a good time to add it to the whole Node.js picture, and I added core support for DPDK. I can build it every time, and I'm ready to issue a pull request. I'm just doing internal core sanity testing, making sure there is nothing duplicated, whatever: licensing and other things. If there isn't anything there, then within a couple of weeks I should be able to issue the pull request. But then I thought...
A: You have to turn it on, or... so it's a build-time option? You have to build it with this?

B: No, not really a runtime environment-variable option. It's like having built-in support, or having a --with flag, like --with-dpdk-lib, and then it will...

A: So your expectation is that you have the typical library installed on the system and it just links with it?

B: Okay.
B: It's a two-step process. It's like having huge-page support, right? For example, you can enable huge pages in the application, but if the Linux kernel is not enabled to support that, then it will not use huge pages, right. So it's similar in this case too: you have to enable the machine, the system, for the DPDK driver. So you have to go install the driver and all that, and then link with it.
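The two steps might look something like this sketch; the ./configure flag is a hypothetical illustration (the pull request had not landed), while the huge-page and driver lines follow standard DPDK system preparation:

```shell
# Step 1 (system prep, standard DPDK style): reserve huge pages and load
# a userspace-I/O module so DPDK can take over the NIC.
echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
sudo modprobe uio_pci_generic

# Step 2 (build-time link): point the Node build at the installed DPDK
# library. This flag name and path are hypothetical illustrations.
./configure --with-dpdk-lib=/usr/local/lib/dpdk
make -j4
```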
B: Without thinking too deeply, the one thing that comes to mind is: today, the way Node.js uses the TCP/IP stack, like the socket API, you have JavaScript code which goes into libuv, which uses the standard BSD socket API, yeah. In order to use DPDK, there is a user-space stack we have to use, so we want all those socket calls to go not through the kernel TCP/IP but through something else, right, and that has to happen at the dynamic-linking layer, I think.
B: I'll think about it; yes, I should definitely think about whether this can be improved, and that's one of the reasons I want to issue a pull request: so people can look at it and suggest things, and let's see if this is really useful, we think. Because I've seen very great numbers, performance numbers, and if there's something of broader interest we can always modify the code as required, anyway. Yeah, do the best thing possible for Node. Yep, sounds good. So that's what that talk is all about.
B: It also scales. The way it runs, it also scales without cluster, so you don't even have to use cluster in order to have multiple processes; you can just run a regular application without cluster and run on as many cores as possible. So there are a lot of good benefits, because I've seen a lot of people kind of have reservations about cluster, that it doesn't scale well after four cores, but this kind of works around that, and it still scales with the CPUs. This avoids cluster.
B: So the DPDK layer kind of manages the cores, and you just say at the top level how many processes you want to run, and you just start them. So, like, think about if Node is behind nginx and nginx is kind of starting new processes: you can just keep on doing that without having a special Node.js application. Okay.
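The nginx pattern mentioned (plain Node processes, no cluster module, load-balanced by the proxy) is just a standard upstream block; the ports below are illustrative:

```shell
# Write an illustrative nginx config that balances across three plain
# Node processes on separate ports (no cluster module involved).
cat > node-upstream.conf <<'EOF'
upstream node_app {
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}
server {
    listen 80;
    location / {
        proxy_pass http://node_app;
    }
}
EOF
```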
B: Cool. And in fact we have also... I tried to port Redis, because the Redis server is also a very good one; it's a component a lot of people use between the database and Node applications. Now that has also been ported, as an experiment, and we see around 2.7 to 3x latency reduction. So the whole thing looks like good potential to use the technology in general, right.