From YouTube: Node.js Benchmarking WG Meeting - March 5 2019
D
I think so. I ran this on my local machine, and it takes a really long time in all these variations. If I just run every benchmark, each micro-benchmark runs like 30 times, and things like that, so it ran for me almost three days overall, across all the multiple runs. So I don't know whether we could do all of those in our infrastructure. Maybe we can pick some, maybe buffer or a few of them, and trim down the options, but.
A
I mean, I was always thinking of trimming it down there. For example, for buffer you have alloc and allocUnsafe. We can ignore the unsafe ones for now, because they are going to be deprecated anyway. For the performance tracking we can just pick the ones that are relevant within buffer; maybe alloc is good enough, and that way we can reduce the number of benchmarks we run, yeah.
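The trimming described above can be done with the Node.js core benchmark runner's built-in filtering; a rough sketch follows (the build paths and the `alloc` filter term are illustrative, not something the meeting specified):

```shell
# Run only the buffer benchmarks whose filenames match "alloc",
# instead of the whole suite.
node benchmark/run.js --filter alloc buffers

# Compare two builds on that same subset. compare.js repeats each
# benchmark 30 times by default, which is what makes full runs so
# long; --runs lowers that.
node benchmark/compare.js --old ./node-old --new ./node-new \
  --runs 10 --filter alloc buffers > compare-buffers.csv
```

Lowering `--runs` trades statistical confidence for wall-clock time, so a trimmed set like this suits a nightly job rather than a release sign-off.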
D
So there are some changes happening in my group, so I may not have time to do exactly that, but if that is something of interest, I can ask one of my teammates to join this meeting so that they can contribute, though I may not be able to directly. I'm going to work on this performance area for sure, but I can ask someone else to look at that, sure.
A
No, I think if you have somebody who will take ownership and run with it, that's great. I think we all agreed it's a good idea: we want to leverage those tests if we can, but we need somebody who's got enough time to figure out how to do that, you know, how we do that practically, right.
A
Yep, yeah, it sounds good. Same steps, and it would be some configuration to say: okay, run this smaller set and capture it. Also, because it generates so many numbers, how do we take those numbers and put them into one number, or two numbers, or some small number of numbers that we can track? Does that make sense? Yeah?
D
So last time we discussed the same thing; I think it was the request from the Google engineer, and I don't know whether that is still relevant, whether we still want to track it. And I think I mentioned that Ghost, the way it runs, does not build with the master branch, which I tried; it does not work. So we won't be able to get the numbers with master and the Canary builds of Node, right.
D
For the Ghost workload, I remember it has two components: Ghost itself, and the MySQL database it needs. I put the MySQL into my Docker container so that it would be easy to run, and then I put Ghost itself in a container as well. The problem is that it does not build and will not run with the master and Canary dev versions of Node.js.
D
Ghost is just static web pages, so there are many URLs. In Node-DC-EIS the client can dynamically generate URLs; that is not the case with Ghost. So you have to build logic to generate the static URLs so that you can get the images and get some data from the database. Also, you have to build that client, which is not there yet, yeah.
B
Jonathan, my helpful assistant, has been trying to run the Node benchmarking suite locally. You may recall a while back, I think in September, that I opened this complaint against Node-DC-EIS in microservices mode: I got these HTTP 400 errors on the client side. I don't know if you remember that, but let me paste the link into the chat.
B
So we're trying to run the microservices mode of Node-DC-EIS because it does something noticeably different from the monolithic mode. When you run it in microservices mode, the client gets errors printed to its log, which is weird, and Jonathan has identified the cause of that and opened this pull request.
C
You're still muted... all right, there we go, it was quitting on you. What we were getting was, like Jamie said, the 400s at the add-user URL, and what it looked like was that requests to add a new user were being made, but there was no last name. By looking at the data that was sent, I saw what was causing the error. I'm still looking at the monolithic version; it seemed that there were not enough last names in the monolithic version as well.
D
How did you find running so many services? Because currently in Node-DC-EIS the microservices mode, the code is there, but I never really tested it all the way from end to end. I did some testing to make sure it works, but not really, not like how the monolithic or the cluster version works in Node-DC-EIS, yeah. Did you find it easy to use, other than that issue? Yeah.
B
I mean, the instructions you gave were pretty clear: you just launch each of the services. We're launching them all on the same machine, and then you run the client, and the client is the same in both versions. Okay, but that's as long as all the microservices are run on the same machine.
D
So MongoDB is very well... is very parallelized; it can support almost a hundred thousand parallel connections, somebody has measured. So I have not seen a big bottleneck because of MongoDB. And see, each microservice uses individual tables, so it can be the same data, but they put it in different collections. I think the data should not be in contention, right.
A
That makes sense; I'm just thinking if there's anything... it then posts the data to another shared machine, so I don't think there should be any reason not to do that. So, yes, okay. So, Jamie, Jonathan: if we start to get to the point where we're ready to run it, we should look at running it on the other machine, and then, you're right, we could have bigger numbers. Is this something that can scale up, that microservices version, like if we throw more cores at it?
D
I mean, the idea was that you would write these microservices, and the idea was that each microservice is like a different service, and each can be clusterized so that it can run on multiple cores and scale across the cores. So you can really think about it: you have authentication, and right now we don't have that, but you think about that as a service, you have some other service, and they each can scale.
D
As the load increases in the real world, that's what I was trying to mimic with that use case, so it should scale with multiple cores for sure, okay. And in this case we use the request module over HTTP to communicate across the microservices, so that's something new compared with the monolithic version.
A
Okay, so that's the end of what was added to the agenda. Are there other things? I mean, I guess, Jamie, you've been working on this instead of the other things, and Jonathan is on your team, okay, right. So that was what you were referring to earlier, in terms of this is what... yeah.
A
So
I
guess
the
the
the
two
things
is
like.
Does
it
these
two
issues?
If
we're
gonna
focus,
you're
gonna
focus
on
the
like
the
the
other
version
of
DC
is,
would
it
make
sense
to
remove
the
benchmarking
tag
on
these
other
two
issues?
So
they
don't
show
up,
they'll,
be
fine,
okay,
and
then
you
can
add
them
back
in
when
it's
time
to
talk
about
them
in
the
meeting
itself
right.
They
just
need
to
pester
you
every
week.
If
yeah,
that's
a
focus
somewhere
else.
A
Okay,
so
I
guess
it's.
It's
is.
Maybe
it
would
be
worthwhile
opening
an
issue
just
to
cover,
adding
in
that
that
new
version,
so
that,
if
there's
questions
you
can
answer
them
there,
we
can
well
yeah.
You
can
ask
them
there.
We
can
answer
them
there.
We
know
where
to
look
to
see
if
in
terms
of
progress
and
stuff
like
that
in
or
if
there's
any
you
need
from
from
myself,
rode
him
along
the
way.
A
I think we should make it... it's there, like we already have the machines in the farm. What we need to do is, instead of using the existing benchmarking jobs (we have a benchmarking job that says run these things once a night for these different versions), instead of adding a new step to that job, what I think we want to do is create a new job, which can maybe be a clone of that to start with, and it runs, you know, in parallel, but is targeting those new machines.
A
We can even decide whether at some point we want to move the existing ones from the current machine, which I think we've kept for consistency, right. Like if you move it to a different machine and there's a big jump in the charts, that doesn't necessarily help unless we tell everybody. But by doing this new one over there, it's a good stepping stone to say: well, should we move them at some point? Or when we add new ones, it's easier to add them, and they just go onto that machine. So I think.
A
It might require pinning or something, right. Like if you pinned every test to a different core, maybe that wouldn't interfere; I don't know, we'd have to understand that. But that job, yeah, that job is already set to run on those new machines, for the exact reason of what you said: somebody kicks one off and it runs for two days. At least it doesn't interfere with our existing stuff right now.
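The pinning idea mentioned above could be sketched with Linux `taskset`, which restricts a process to a given set of CPUs. The core numbers and benchmark category names here are illustrative, not taken from the meeting:

```shell
# Pin one benchmark run to core 2 so its CPU time cannot interfere
# with another run pinned to a different core on the same machine.
taskset -c 2 node benchmark/run.js buffers > run-a.log &

# A second, concurrent run pinned to core 3.
taskset -c 3 node benchmark/run.js streams > run-b.log &
wait
```

Note that pinning only isolates CPU time: the two runs still share memory bandwidth and caches, so results can still interfere, which is the "we'd have to understand" caveat above.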