From YouTube: Node.js Benchmarking WG meeting
B: Okay. I have a colleague, Sattvic, who was working on it. He has created all the scripts, including the CI job scripts. I just need a way to test it. I remember a few of the steps from what I did while I ran the Node-DC-EIS workload, but when I go to the Jenkins machine now, I don't see a way to run this one. So if you can help me out after the meeting, or maybe we can set up a time, sure.

D: I've been adding an initial version of the script to automate that, so I haven't sent a pull request yet; it's on my branch currently. But the prerequisite is that the system must already be running the Docker service, and Docker should also be configured to download an image, because it requires the MySQL image to run the benchmark.

B: Go ahead. If you remember, last meeting we talked about the goal of covering a MySQL database. We don't have MySQL installed on our benchmarking machine, so we said a Docker image would be easier, and that's why we've been working to integrate the Docker environment, yeah.

A: Yeah, well, I guess, I mean, I think in general we're going to run into more and more of these cases where a Docker container would probably make sense. So we'll just have to install Docker on the benchmark machine, which isn't hard, it's just another package. So that's not really a big deal.

A: It's kind of like, you know, I'm just thinking: we have the one machine now, but we'd like to actually run it on a bunch of them eventually. It's not just one machine where this could happen, it's several, so the more we can make it so that we're not giving privileges to the digest user, the better, I think.

B: There was one more issue I think I talked about in the last comment, Mike. There are two versions of Ghost right now: one is the older LTS, 0.11, and the latest LTS is version 1, and there's quite a big difference between them in performance. When we looked at the numbers, the current supported release is 1.17, and I'd suggest we just use the latest version and not worry about the 0.x one.

D: Yeah, I got around it. The thing is, with the latest version it works fine on Node 8, but you still need the yarn --ignore-engines flag to make sure the installation goes through. And with the earlier version, to get it working on the later Node versions, I have an interlock in the script which will delete the node_modules folder and upgrade the... there is a required module there for which a later version was released, and I'm upgrading that version and installing with npm.

F: I can remember the conversation, and I'm probably going to need to go back just to remind myself of the whole story. Yeah, I mean, from what you're saying it doesn't seem to make much sense to go with the older version, but I seem to remember there being some reason why it made perfect sense to include the older version as well, and I'll need to go...

D: I don't remember quite off the top of my head; I can add it to the same GitHub issue. But yeah, the newer version is not as performant as the older one, it is a lot slower. But as I said, the newer version does use a lot of the ES6 features, so it will be a good way to test out any changes you are making to speed up ES6 features.

G: Right, so I'd dive into that. Yes, awesome. So, everybody: dshaw, you probably know him from the internet [inaudible]. Today, I'm here to represent the user feedback initiative inside of the Community Committee, CommComm. This was one of our kickoff projects at the end of last year: there was a real opportunity to connect some of the dots into the project and bring in attention from the external user community. We needed some tangible things to work on, and Michael had a request out to the Community Committee.

G: So we've worked with the Foundation. Greg Wallace of the Node.js Foundation helped to set up the survey with the same infrastructure that they use for their annual user survey, and plugged in that data. I posted a couple of links in our chat; I'm not posting them to the meeting, because one contains an internal link that is for your eyes only, and I've posted the tracking issue to the user feedback group. So that's...

G: ...just fine to go in the minutes; there's nothing sensitive there, and there's not really anything sensitive in the SurveyMonkey results. Since we're just starting to collect and share user data, we're trying to be sensitive to unknowns: we may not know, in the data, whether some folks are sharing identifiable information that we want to sanitize in the final results. But our objective is, like any other aspect of the Node.js project, to produce this out in the open.

G: We're talking about something, you know, completely unknown. So, here we are, the overview. We had a really solid turnout: we've been running this for about a month and had over 270 responses, running the gamut of sizes. It looks like we have a lot of individuals with smaller applications, and then, as representative samples, some folks with some really big installations. So, kind of matching the expected spectrum of use cases.

G: That's very good to see, right? Yeah, I'm impressed. You know, that's been corroborated in several different surveys; I know the NodeSource Node by Numbers showed it. We saw that trend where the uptake of our latest LTS has been strong, and the next question is: if you aren't on there, how fast are you going? And everybody, it looks like they're really moving aggressively to our latest LTS.

G: And we get some perspective on the use cases we're testing, you know, the services and back-end microservices, which is kind of the initial hypothesis around the benchmarking working group, right? When Acme Air was introduced, the API services were one of the primary use cases, and that continues to be the trend for Node in the feedback from our end users.

G: Five: Node modules. This is a hard one. We got a lot of pushback on this one from Greg, in terms of coaxing us to improve how we were articulating it, and it was going to be an interesting one to interpret. I think we just have to take the data and pick it apart. Right, yeah.

G: So, a big spread on whether folks have anything in place. You know, kind of split down the middle, one-third, one-third, one-third across: do folks have it, do folks want to have it in place, and do folks completely not have anything. And that was on infrastructure for tracking performance.

A: Versus, like... I guess it's interesting to me too. Anyway, we can dig in when we have the final results, but 10 percent is big enough that if one in ten people on every version have a problem, it seems like we could try and do better. Or it's one in ten people noticing something we haven't noticed through our own testing. Mm-hmm. So...

G: ...something we might want to do in the future. So my answer would be, you know, have I seen performance regressions in any release of Node? Yeah, I've been using Node since the early days and, you know, definitely haven't seen that. But if we're really talking about Node version four plus, then we might want to qualify that as we go forward, so we're focusing our sample set on the current state of Node, yeah.

G: You know, like security: there's really only the edge case. The chances are that you're not going to encounter a security issue, but knowing that you can go to the security working group and report an issue, and that you have a way to disclose it, I think that's an important way for folks to use it. It'll also enhance their awareness that this group is a thing, like the benchmarking is a thing, where we're actively engaged in the process of maintaining and improving our components.

B: We can add a link to the benchmarking page. Also, if people say, "Oh, there is a performance regression," they can go back and check. Hopefully they are tracking and looking at that page, and they see: "Oh, well, for most of the workloads we track, we don't see a regression, but here is the link if you do see any issues." Oh yeah.

G: Michael's filling that in; I'll continue with some of the questions. This one was going to be fun, you know: how do you write asynchronous code? And is it that callbacks are losing, and promises and async/await are just...

B: I have a comment about that. Oh yes, and yes, don't mind, yeah: most of the workloads we have on the benchmarking machine use either callbacks or promises, right? We don't have, like Node-DC for example, any that use async/await. We should take an action item: we'll see whether we can convert them or add another use case to use those, you know, and people can see it. Say Node-DC-EIS works with the callbacks and with this new feature, how does the performance look?

G: Oh, Tom, I think that's a fantastic takeaway. You know, everybody that I've been talking to lately is in the process of migrating their code base to be more async/await-y, and if folks can get feedback that they're heading in the right direction, that this is going to be a performant path, I know that they'll enjoy it. And, from Benedikt, the more that we put pressure on that, the more that...

A: Yeah, I mean, I think since you guys put together Node-DC-EIS, if you're in a position to basically port it over to async/await, it'd be really good from two perspectives. One, the comparison: you know, we could have them both running and you could see how they compare. But then, two, it would give us coverage on async/await, making sure we don't regress anything there as well. So yeah, that'd be really good. Okay.

E: And there's one final note on that: I've been talking to the hapi maintainer, Eran Hammer, and he also wants, for his own purposes, to already have a benchmark. hapi version 17 is async/await only; it doesn't use any callbacks to talk to Node, and even internally it's just using async functions, nothing else. We could also have this workload running, because, yeah, yeah, it's...

G: So, you know, what do folks think has the biggest impact on their performance? Most people don't know or are not sure, and then there was "require module loader syntax" being number one. I think that is likely an incorrect response, since it would only really impact startup time, not performance, but the...

G: So this matches sort of the social discussions that I have, where I think folks are not using Node because they've been told that computational workloads don't work with Node, and they're following that assumption. It's not that they don't know how to; I think it's that they don't know how to use computational workloads in Node. Does that make sense?

G: In my experience building out Node systems around, you know, computational workloads, and server-side rendering is an example of it, it's an architectural shift that you need to make. And, you know, if you haven't learned how to separate the service workers and processes around Node, then it's going to be painful, right?

G: Not sure, but definitely, you know, there are clear indications that the reason people aren't doing that is because of the computation, right? The worker model: again, it looks like folks are interested in the worker model, which is good; I agree with that. Is startup time an important use case? A big split down the middle. Whoa, 40%? What for, you know?

A: Right, so basically this one comment is that they've got a whole pile of them going, and when they go down, they need to start up quickly.

G: We are planning a public user feedback session on Friday, February 9th. You know, Michael, I don't know how you want to handle it, or, you know, engage folks in the working group, but at a high level everybody's invited, and I'm happy to sort of get a list of names and add everybody to the calendar invite, and we'd love...

G: ...to have you join, with representatives from folks who've raised their hands to participate. There are folks from, you know, all over the industry: from PayPal, some folks out of, you know, Iowa, and beyond, that have volunteered and would like to connect. So we'll begin, you know, putting some real faces to some of this data, and you can ask folks directly and dive deeper into some of these questions on February 9th, yeah.

A: So, just as some additional context, adding to what Dan said: our plan for the user feedback group is to build, you know, a group of end users that we can regularly meet with, ask questions of, and get insight from on their actual use, and this is sort of the first kickoff meeting. So we'll probably, you know, start with a bit of an introduction and then delve into sort of the first exchange we thought...

A: Well, we have the questions from the benchmarking survey, and the plan would be to, you know, walk through them and have the in-person conversation about the questions, where, when they answer a particular way, we can actually then delve into the details. And I think it'll be interesting just to see, like, can we actually get more new data by doing that versus a survey? And it's, you know, something we already have set up.

A: Usually at this point we'll go into Q&A. There are five viewers, so if people have questions you can put them into the original issue. I...

A: No? Okay, yeah. So if anybody does have a question, you can put it into the issue itself. I don't see any, so I think that's it for this week. Just as a note, you know, there was some discussion on changing the meeting times in one of the issues, so I've updated the calendar. It's still every two weeks, but it's Mondays one week, Tuesdays the next week, at two o'clock Eastern both of those times. So the next one will be two weeks from now, on Tuesday. There's...

J: So a couple of us at Microsoft and a couple of others at Google have been working on a blog post recently, basically advocating for people working inside Node, yes, and inside Node modules too, right, to write idiomatic code and complain to the engine if the idiomatic code is not behaving as they expect it to. So, just as a kind of notification, so that you're all aware that that is also going on.
