From YouTube: Node.js Benchmarking WG Meeting - Oct 30 2018
Description
No description was provided for this meeting.
A
You're making terrible sounds. I think, yeah, just stop moving and we'll be fine.

B
So, there are various stakeholders here who are interested in benchmark development, and/or in having benchmarks to validate different analyses, whether those benchmarks are our performance benchmarks or just... no, look, it's a complicated codebase that our analysis works on, for, you know, whatever it is that the academic folks are trying to do, some kind of information flow analysis or something. So there's potentially something of a coalition forming out there that I wanted to tell you guys about.
B
Maybe a useful point of intersection would be standardizing hooks and data formats and configuration file formats, and so on. That's something that this group could potentially take the lead on if we wanted to, or that they would at least like to know what it is we're doing.
B
There's definitely potential interest from folks out there in, (a), using the benchmarks that the Node community manages, yep, and (b), potentially having an easy way to feed those benchmarks back in. And you know, we've talked about issue #155 below, which is about an agreed-upon "these are the scripts you need to write, and this is how they're going to be run," and so on, which would be helpful for them to give things back, if we tell them sort of the spec, right.
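(For context, one possible shape of the script contract #155 is about, modeled loosely on the convention node core's own benchmarks already follow. The helper below is a toy stand-in written for illustration, not the real benchmark/common.js API.)

```javascript
// Toy stand-in for a harness helper in the spirit of node core's
// common.createBenchmark (illustrative only): the harness owns
// configuration and timing; each benchmark script supplies only a
// main() that brackets its loop with start()/end().
function createBenchmark(main, config) {
  const results = [];
  for (const n of config.n) {            // sweep configured iteration counts
    let t0;
    const bench = {
      start() { t0 = process.hrtime.bigint(); },
      end(ops) {
        const sec = Number(process.hrtime.bigint() - t0) / 1e9;
        results.push({ n, rate: ops / sec }); // ops/sec, higher is better
      },
    };
    main(bench, { n });
  }
  return results;
}

// A benchmark script would then contain only something like this:
const results = createBenchmark((bench, { n }) => {
  bench.start();
  let acc = 0;
  for (let i = 0; i < n; i++) acc += i;  // operation under test
  bench.end(n);
}, { n: [1e5, 1e6] });
```

With a contract like this fixed, outside groups could contribute benchmark scripts without caring how the harness runs them.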
A
That sounds good. I was just trying to check: it's not like a separate group that's gonna get together and do something in a different direction, or on their own. It's more of, you're saying, it might not be as direct an involvement, but they are interested in feeding stuff in, yeah.
C
I did narrow it down to that one change, and the change is fairly simple, but of course I'm not familiar with most of the code, so of course I need to look at what exactly the impact is. But then I just thought about it: require, and this could be very high level, so from my perspective, require is something like, you do a require, it finds a file, loads it, kind of finds the exported function, and gives it to the application. The second time you require the same function, with the cached-module functionality it kind of saves it into an internal array, and then it doesn't have to load or find the exported function again, so there's no re-entry cost. So in that sense, probably time is the correct measurement, or ops per second, and I wasn't really sure exactly.
C
So what are we doing? So again, I have not looked into benchmark.js or the runner function. So what are we doing, how does benchmark.js work? Does it run that function, say, 100 times, or for a fixed amount of time, just doing a require of the new module and a require of the cached one, and seeing how many times you can access it in that fixed amount of time? That's the way you could see that higher ops would be absolutely better, right.
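(A minimal sketch of the more-ops-is-better measurement being asked about; this is not the actual runner's code, and measureRate is an illustrative name.)

```javascript
// Run the operation a fixed number of times, time the whole loop,
// and report the rate in operations per second: a higher number
// means the operation (e.g. a cached require) is faster.
function measureRate(fn, n) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < n; i++) fn();
  const elapsedSec = Number(process.hrtime.bigint() - start) / 1e9;
  return n / elapsedSec; // "hz", i.e. ops per second
}
```

On a measurement like this, the cached require path would normally be expected to show a higher rate than the uncached one, which is why the numbers discussed below looked suspicious.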
A
So that part, and then, I guess you're right. As part of that, you see it says return s plus event.target.hz, so that's a time, right? Well, hertz to me is a something-per-second, okay, like 60 cycles per second, so I read that as ops per second. So that's a more...
C
...is better, yes. Okay, so if that is the correct way to measure it, then the overall number for require of a new module has gone down. It's not, so you should look at the 6.x numbers: the ops per second there are a little higher than now for the latest one. And actually the new-require number is reduced not from that commit exactly, but it started going down sometime around there, right.
C
So if I, if I... I already experimented: if I revert that commit, especially when the cached module is true, yeah, the numbers are higher. That I know. I just wanted to make sure I understand the framework correctly and that we are measuring the correct way, because if this is time, then it makes sense that it is going down. That's how I was looking at it, right.
A
I could understand why that change would make it go down. But if you look at the numbers, it's like three thousand versus... you know, three thousand for cached and thirty thousand for non-cached. So I could see cached slowing down, but not being less than new, right? Because new, I think, caches the children as well.
C
New does not? Okay, I need to look at that, yeah.
C
I don't mind, but I know that in the cached-module path, first of all, there are multiple conditions in that children-updating function, right? So it could just be that we are doing more operations in that function anyway. Yes, it's a little bit more JavaScript code. I'll take a look, and then maybe I'll update the issue later.
A
Is it updating the cache with the children, or what's the...? It is updating the cache, yeah? Right, okay. So maybe it would seem that the cache is not doing what it's supposed to do, if it actually is ten times slower with the cache than without, right? Yeah, yeah. So what if I just don't update the children...
C
I was saying that Ghost is next on my list, but I did spend time looking at the Node-DC-EIS one. Okay, sure, you mentioned we don't actually have an issue open for that, but you said it doesn't work with the master branch. If you look at the chart, though, it seems to be working with master, just not with the canary build, for some reason. So.
B
So I have sort of touched this, in the sense that I've written no code, but I have been attending meetings to try to figure out whether those other groups already have the stuff we want, so that we could just recycle it. It doesn't sound like they do, per se, or if they do, it's at a higher level than what we're talking about here, right? So I think that some groups may have sort of a benchmark harness, but this issue is about a lower level: a consistent way that a harness would invoke all of these different benchmarks. So, right, it doesn't sound like anyone has looked specifically at Node-DC-EIS, AcmeAir, or the other stuff that core is doing, and put those things into such a consistent harness, it looks like. Okay.