From YouTube: Benchmarking special meeting - NODE-DC-EIS deep dive
Okay, welcome to the special benchmarking meeting, which is a deep dive on the Node-DC-EIS workload being presented by, I guess, Uttam from Intel. It's a new workload that we're looking at in the benchmarking group, as an option to cover some of the use cases that we want to cover. So with that, I'll let you take it from there.
This diagram — I'm just going to dive right into it. The diagram you see here is a very typical Node application. This is what we've been looking at, and what we kind of learned as we looked at many, many usages. You always have a Node.js-based server. There is a MongoDB database at the back end, and the client could be, say, AngularJS, which is disconnected from the Node server, or it could be inside the Node.js application itself, like rendering of the web pages and things like that. But this is a very typical way the application looks.

So pardon the error here where it says "Node ELS" — as I said, I did not get a chance to prepare the slides, because I just came back from vacation a couple of days ago. But this is actually the Node-DC-EIS internals. As it says, it has a database object: there is a MongoDB database at the back end, and it has various collections, like employees, employee addresses, employee health records, employee family records, compensation, and photo.
So when we started, this is what we started with: the initial vision of what this application should do. We knew that we wanted to keep the employee directory as the central focus, but we wanted to represent it in multiple ways. We wanted to create a monolithic application, like most Node applications are, where all of this is done in one or two controller files, or maybe one server.js. We also wanted to represent the similar environment using a cluster mode.
D
What
happens
if
you
run
this
in
a
cluster
mode,
the
same
application,
so
your
database
kind
of
remains
the
same.
No
changes
required
there,
but
you
just
start
the
multiple
processes
as
many
course
you
have
on
the
secure
world
and
then
also
we
want
to
divide
the
same
application
again
on
the
functional
level.
So
there
is
a
employee
service,
then
there's
the
health
service,
there's
a
photo
service
and
there
is
a
compensation
service
and
so
forth.
In
that
case,
you
can
keep
the
database
same
or
you
can
actually
separate
out
the
database.
D
Also,
in
this
case,
I
think
we
have
the
same
database
at
the
back
end,
so
that
means
we
can
clearly
see
what
happens
if
you
deploy
a
node
application.
Is
certain
one
of
these
scenarios?
What
what
could
be
the
performance
with
a
monolithic
cluster
and
the
micro
solution
is
more
and
then
people
can
get
kind
of
really
understand
what
would
the
benefit,
which
is
the
best
way
for
their
application
to
deploy,
and
we
should
give
you
the
best
performance
on
whatever
available
resources
they
have
and
for
node
E
is
what
we
did.
We developed the client in Python using its green-pool threading mechanism, so we can start a lot of concurrent connections, and everything can be done from the command line. So, as you can see, the database remains the same, the client remains the same, and the functionality of the application remains the same; it just gets deployed in different ways.
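To make the shape of that driver concrete, here is a minimal sketch of a command-line client that spreads a fixed number of requests across a pool of concurrent workers. It is written in JavaScript for consistency with the other examples in this write-up; the actual Node-DC-EIS client is Python with a green-thread pool, and every name, URL, and default below is illustrative rather than the real client's API.

```js
// Hypothetical concurrent load driver (illustrative, not the real client).
const BASE_URL = 'http://localhost:3000'; // assumed server address
const CONCURRENCY = 50;                   // parallel in-flight request chains
const TOTAL_REQUESTS = 1000;

async function worker(urls) {
  for (const url of urls) {
    const res = await fetch(`${BASE_URL}${url}`); // Node 18+ global fetch
    await res.text(); // drain the body so the connection can be reused
  }
}

async function run() {
  // Round-robin the request list across CONCURRENCY workers.
  const urls = Array.from({ length: TOTAL_REQUESTS }, () => '/employees');
  const buckets = Array.from({ length: CONCURRENCY }, () => []);
  urls.forEach((u, i) => buckets[i % CONCURRENCY].push(u));
  await Promise.all(buckets.map(worker));
}

run().catch(console.error);
```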
Okay, any questions so far? Seems clear to me. Okay, and one more point here: as the diagram shows, we now have modularity — we can now add more scenarios that do the same thing. In this case we can add, say, server-side rendering: we can add another mode where we create the web pages on the server side, and we can extend that functionality, or the way the application is deployed.
OK, so here I will just talk a little bit about run_spec. What we did was actually take a reference from how the SPEC benchmarks are structured. Most of those workloads — the benchmarks — have a "runspec", which is the front-end utility that sets a lot of parameters and then executes the program. We tried to see whether we could learn something from that, because it has been proven over many, many years, over many generations of the SPEC CPU benchmarks.
It lets you set how much the concurrency should be — that is, how many parallel connections there can be from client to server — plus debug and, I think, a lot of other options. OK, so here is a little bit of a description. There is a config file: with -f <config> you can specify all these parameters in a config file on the client side, so the client program can read those config parameters and then make the calls to the server.
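As a rough picture of what such a config file might hold, here is a hypothetical example. The key names below are guesses assembled from the parameters mentioned in this talk (request count, concurrency, ratios, db_count, order, ramp-up/ramp-down), not the exact Node-DC-EIS schema:

```json
{
  "total_requests": 1000,
  "concurrency": 200,
  "db_count": 10000,
  "name_ratio": [10, 20],
  "zip_ratio": [10, 20],
  "order": "shuffle",
  "ramp_up": 100,
  "ramp_down": 100
}
```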
At the same time, you can override those options from the command line: you can override the number of requests, you can override the concurrency, you can override the ratios. Let me talk a little bit about these ratios and what we mean by that. As I said, we wanted to model the employee directory of a company, so you have employee names, you have employee phone numbers, you have employee addresses with zip codes. What we really wanted to do was to distribute those employee records
— the names — in a certain way. For example, employees have a last name; maybe there is a John Doe, where Doe is the last name. If we create, say, 10,000 records, we want to distribute the different last names with certain percentages, so that any time we make a query, we get at least a certain number of records back. So if you request "give me the records with last name Doe", we should get back maybe ten percent of those records. And there could be another one — say my name, Uttam Pawar.
D
If
I
require
inquire
with
lasting
equal
to
power,
I
will
get
another
ten
percent.
So
every
time
the
load,
the
network
load
kind
of
remains
the
same.
Approximately
again,
it
all
depends
on
this
number
of
parameters,
but
the
idea
was
to
whenever
any
query
we
make.
We
get
a
certain
amount
of
load
back
at
the
data
back.
So
that's
the
name.
You
are
ratio
says
ten
percent
twenty
percent
again,
that
is
in
the
country.
Five
same
thing
is
true
with
the
zip
code.
So if you distribute the employees across multiple zip codes, we want to make sure that for every zip code we request, we get a certain amount of data back, so that it is not lopsided — where one zip code has a hundred records, another zip code has zero records, and another one has 10,000 records, and things like that. We wanted to make sure that on all the URLs we are hitting, we are getting a decent amount of data back.
Okay, db_count is the total number of records we want to populate in the database. We talked about the name ratio and the zip ratio. The order is how you want to issue the requests: should they be sequential, or should they be shuffled, in the sense that they can be a mix of different types of requests.
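A minimal sketch of the population idea described above, assuming the ratios are given as a percentage of records per "hot" last name; the function and field names here are made up for illustration:

```js
// Populate db_count records so that each hot last name owns a fixed
// share; any later query by that name returns a predictable fraction.
function populateRecords(dbCount, nameRatios /* e.g. { Doe: 10, Pawar: 10 } */) {
  const records = [];
  for (const [lastName, pct] of Object.entries(nameRatios)) {
    const share = Math.floor((pct / 100) * dbCount);
    for (let i = 0; i < share; i++) {
      records.push({ id: records.length, last_name: lastName });
    }
  }
  // Fill the remainder with unique filler names.
  while (records.length < dbCount) {
    records.push({ id: records.length, last_name: `Name${records.length}` });
  }
  return records;
}

// With { Doe: 10 } and dbCount = 10000, a query for last_name === 'Doe'
// returns ~1,000 records, so every query pulls back a comparable load.
```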
Okay, so are there any questions on this? I think this covers most of it. I have two more things I need to talk about: the ramp-up and ramp-down. What that means is, when the server starts and the database is populated, and you say "hey, I want to issue a thousand requests", the ramp-up and ramp-down flags say that I want to run some extra number of requests —
D
I
want
to
issue
some
more
equation
before
we
actually
start
the
measuring
time
for
the
performance
so
that
the
server
has
started
and
we
made
some
requests
and
it
has
processed
some
of
the
requester.
It
has
processed
some
of
the
callback.
Some
hopefully
lot
of
they've
got
into
a
cache,
and
hopefully
the
server
is
in
kind
of
stable
state,
and
then
it
could
be
100
requests
toward
a
request
by
default.
I think we have 200 such requests by default: 100 requests for the ramp-up and 100 requests for the ramp-down, so that we can avoid those initial peaks and valleys and let the server finish properly. After the first hundred requests are processed, we actually start measuring the time of the remaining 1,000 requests. The total number of requests issued will be around 1,200: a thousand is what you want to measure, plus 100 for the ramp-up and 100 for the ramp-down. So a total of 1,200 requests are issued, but timing starts only after the first hundred are done.
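A minimal sketch of that ramp-up / measure / ramp-down protocol, timing only the middle window. The function names are illustrative, and for simplicity this issues requests one at a time, whereas the real client runs them concurrently:

```js
// Phased benchmark run: only the middle `measured` requests are timed.
async function benchmark(sendRequest, { rampUp = 100, measured = 1000, rampDown = 100 } = {}) {
  for (let i = 0; i < rampUp; i++) await sendRequest();   // warm caches, untimed

  const start = process.hrtime.bigint();
  for (let i = 0; i < measured; i++) await sendRequest(); // timed window
  const elapsedNs = process.hrtime.bigint() - start;

  for (let i = 0; i < rampDown; i++) await sendRequest(); // let the server settle, untimed

  const seconds = Number(elapsedNs) / 1e9;                // 1,200 requests issued in total
  return { seconds, throughput: measured / seconds };
}
```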
D
Actually
thought
about
it:
we
just
have
not
implemented
it,
so
we
want
to
right
now.
It
is
all
request
number
of
requests,
but
we
definitely
want
to
add
a
scenario
to
do
a
time-based
runs
and
we
want
to
run
for
ya,
so
we
actually
definitely
have
thought
about
it.
Okay
and
hopefully,
now
now
that
we
are
a
kind
of
its
out
there
in
open
source.
Hopefully
we
can
kind
of
work
together
to
see
whether
what
else
we
can
do
about
it.
So you can see on the left side we have a latency graph: the monolithic mode, which is the orange color; then microservices, which is a yellowish color; and the cluster mode, which is a green color. Then there is the corresponding throughput on the right side. For the end user — in general, for the community — people can run this workload in various fashions and get a general idea of what would be best for their application before they actually deploy, or even develop, the application.
That's it; that's what I have. Again, I did not get any time to prepare, so these are just my old slides from way back. I don't know what else you would like to know; I can talk more about it. So let me go into a little bit of the internals of the routing mechanism.
As you see, there are five or six collections, and you can imagine what we have: get all employees, get employee by ID, get employee by last name, and get employee by zip code. Then, similarly, for the other collections: get the photo for an employee with a particular ID, and so on and so forth — a lot of functionality. The queries we have assembled using the async module internally.
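Assuming an Express-style server, the route surface being described might look roughly like this; the real Node-DC-EIS paths and handlers may differ, so treat these as illustrative:

```js
const express = require('express');
const app = express();

// One GET route per lookup the talk describes (handlers elided).
app.get('/employees', (req, res) => { /* return all employee records */ });
app.get('/employees/id/:id', (req, res) => { /* look up one employee */ });
app.get('/employees/name/:lastName', (req, res) => { /* ratio-weighted slice */ });
app.get('/employees/zipcode/:zip', (req, res) => { /* records in one zip code */ });
app.get('/employees/photo/:id', (req, res) => { /* photo for an employee */ });

app.listen(3000);
```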
So let's say you're looking for employee information by an ID. You get an employee ID from the employee record, and then you use the async module to execute parallel requests: you use async.parallel, and internally you call into address.find, compensation.find, health.find, and so forth. We tried to use the modules which are most popular in the Node community, and tried to see that as much JavaScript as possible gets executed, rather than directly calling the queries into the database.
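A minimal sketch of that async.parallel fan-out, with illustrative Mongoose models standing in for the real collections (the callback-style model API matches the era of this talk):

```js
const async = require('async');
const mongoose = require('mongoose');

// Illustrative models; the real Node-DC-EIS schemas are richer.
const Employee = mongoose.model('Employee', new mongoose.Schema({ last_name: String }));
const Address = mongoose.model('Address', new mongoose.Schema({ employee_id: String }));
const Compensation = mongoose.model('Compensation', new mongoose.Schema({ employee_id: String }));
const Health = mongoose.model('Health', new mongoose.Schema({ employee_id: String }));

function getEmployeeById(employeeId, done) {
  // Fan out one query per collection and collect the results by key.
  async.parallel({
    employee: (cb) => Employee.findOne({ _id: employeeId }, cb),
    address: (cb) => Address.find({ employee_id: employeeId }, cb),
    compensation: (cb) => Compensation.find({ employee_id: employeeId }, cb),
    health: (cb) => Health.find({ employee_id: employeeId }, cb),
  }, done); // done(err, { employee, address, compensation, health })
}
```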
Right — and again, at Intel we have been at this for almost a year, but we are still learning. We don't have a lot of Node.js or JavaScript programming expertise here, but we try to do as much as possible and try to look at all the top npm modules. MongoDB is one. Another thing we learned over time is that the Mongoose driver is not so performance-efficient.
D
So
right
now
we
are
working
on
adding
another
mode
where
we
use
only
the
MongoDB
Native
Client,
so
that
we
can
see
the
clearly
difference
between
and
the
Mongoose
the
Mongoose
gives
you
schema,
which
is
very
good,
but
maybe
performance
is
not
so
great
compared
to
mongodb
native
Clyde.
So
we
trying
to
assess
that
and
hopefully
add
that
is
another
more
into
I.
Don't
know
DC
right.
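A side-by-side sketch of the two access modes being compared, assuming a local MongoDB instance; the connection strings and the schema are illustrative. The Mongoose path goes through schema validation and document hydration, while the native-driver path returns plain objects:

```js
const mongoose = require('mongoose');
const { MongoClient } = require('mongodb');

// Mongoose: schema-backed and convenient, with per-document overhead.
const Employee = mongoose.model('Employee', new mongoose.Schema({ last_name: String }));
async function viaMongoose() {
  await mongoose.connect('mongodb://localhost:27017/node-dc-eis');
  return Employee.find({ last_name: 'Doe' }).lean(); // .lean() skips hydration
}

// Native driver: no schema layer, typically lower overhead.
async function viaNativeDriver() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  return client.db('node-dc-eis').collection('employees')
    .find({ last_name: 'Doe' }).toArray();
}
```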
And as I mentioned about the server-side rendering: initially we are just going to have a default template engine, maybe Handlebars, but we could definitely go with what Walmart has — Electrode, I believe — or something like that. We want to mimic that also. So things remain the same, but the rendering and all the internals will change — not so much as a replacement, but adding another case.
That would be like a concrete... okay. You know, what we have is a single machine with four cores, so for acmeair, we actually pin the driver to one core, we pin the database to one core, and then we use two cores to run the actual benchmark. So if we had a similar proposal to say "okay, let's set this up like this", it gives us... you know, looking at our use cases, I think certainly from the picture here
it looks like it would give us coverage on the REST-service-type use case, no? (Sure, yes.) So if we could come up with a configuration that covers that — maybe there are others that would cover other use cases as well — that would be a starting point. That would be something people could then try out, and, you know, we could see if we can get it running on the community benchmarking machine, possibly running on a nightly basis.
Yeah, so these three definitely could be a starting point. Again, for monolithic and cluster we have just one mode, and it changes depending on what the configuration parameter is. For example, if the config file has cpu_count equal to -1, that means it will determine at runtime how many cores are available and will start that many worker processes. But you can say "hey, just run one", and it will be like a monolithic application.
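A minimal sketch of that cpu_count behavior using Node's cluster module; the environment-variable plumbing and file names are assumptions — only the -1 convention comes from the talk:

```js
const cluster = require('cluster');
const os = require('os');

// -1 means "one worker per available core"; 1 degenerates to monolithic.
const cpuCount = Number(process.env.CPU_COUNT || -1);
const workers = cpuCount === -1 ? os.cpus().length : cpuCount;

if (cluster.isMaster && workers > 1) {
  for (let i = 0; i < workers; i++) cluster.fork(); // one process per core
} else {
  require('./server'); // each worker runs the same application code
}
```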
And if you want, I mean, the internals — you know, all the code is out there. The internals, as I said, are very few APIs: GET, and of course it does POST and DELETE operations also, so it is not just GET. One thing we did notice is that when you do multiple different types of operations — like you issue some GET requests, and you do some POST requests and some DELETE requests —
suddenly the performance varies, and we are investigating — that's one of the line items — why the performance varies so much. If you do just GETs, everything is great; but if you add maybe one percent of POST requests and one percent of DELETE requests, performance goes down. All right, so what's happening there?
Right, okay. While we continue to take questions from people on the call itself: there are 11 viewers on the YouTube channel, so if any of you have any questions, if you want to post them to the node-dev IRC channel, I can pass them on.
So, but yeah, I mean, if you guys can put together, like, a proposal of "here's a configuration that would run on the machine we have", we can, you know, start to understand it, and then also look at the use cases and say "okay, this particular configuration fills in this use case". Our overall goal is to fill in that coverage. Yeah.
If we have something that's missing as well, you know, then we should work on adding it in there. (Oh, yes.) So I think, yeah, the microservices-based-applications one — the description, I'm sure, says REST in there somewhere — so that would be an area where, okay, this configuration of Node-DC-EIS will cover... you know, at least give us some coverage for microservices-based applications. (Mm-hmm. Yes.)
Let's see, I don't see any questions there; I'll double-check node-dev... I don't see any questions there either. So I guess, unless there are more questions from the people who are actually on the call itself, or anything else we want to discuss or present today, maybe we'll call it the end of the meeting. Okay.
Thanks to everybody who joined the call and everybody who watched on YouTube; we'll talk with everybody later. Thanks. (Thank you. Thanks for your time.)