From YouTube: Public User Feedback Meeting - Node.js Benchmarking
A: Great, okay, well, welcome everybody to the first-ever Node.js user feedback public session. We've reached out to members from throughout the Node community and invited them to come and share their experiences and their feedback around the project. Today we're going to be focusing on initiatives that we've taken up on behalf of the benchmarking working group. The benchmarking working group reached out to the Community Committee, and the user feedback group was largely formed around solving this first challenge: doing some outreach to the community and getting feedback on what the priorities are around benchmarking, and what the important considerations are as we establish our collective understanding of how performant the product is. So it's a really exciting initiative for us, and it's great to take this full circle and bring it to the overall community. I'm Dan Shaw; I lead this initiative inside the CommComm user feedback group. Let me go back to Zoom here real quick, go around the list, and let folks introduce themselves. Jeremy, could you introduce yourself?

A: Great, we have a couple of new attendees, so I'm going to step away from looking at attendees for a moment. Managing expectations for attendees: those of you joining the Zoom meeting are going to be live and part of the public feedback session. If you're just here to watch, we have a live simulcast to YouTube, and we'll be monitoring the chat and discussion on YouTube. But welcome, everybody; this is a public session, so most of all we want you to recognize that you're in a recorded and broadcast medium.
G: Hi, good morning. I'm from Walmart Labs, a principal architect of the customer engineering web platform at Walmart Labs. We've been using Node.js for a couple of years now; about two years ago I completely switched the entire web platform from a Java back end to Node.js. So the entire Walmart.com site is now running on Node.js for the front-end stack, and I'm here as a user to contribute with feedback. Thank you.
H: Okay, I've been called worse. I'm with Krengel Tech, representing in part the IBM i platform, formerly known as the AS/400. Node.js was made available on IBM i a few years ago, and I got involved collaborating with other IBMers and helping to formulate what Node.js looks like on IBM i. For example, today I find myself compiling the database adapter for DB2 for Node on IBM i, working through the automation of that, and putting it into a container to make it more simple.

A: Thank you, very cool.
A: Yeah, great. Well, let's go ahead and get started. Now that everyone has framing, what I'd like to do is let Michael Dawson kick us off with an introduction to what the benchmarking working group is, and what we try to achieve inside the benchmarking working group for the Node.js project.
C: Okay, so let me share my screen... there we go; this one, I think. You should all be able to see it. I haven't prepared a slide deck or anything like that; I'm just going to walk through a little bit of the benchmarking repo and talk to some of the things that we've been working on, and the focus of the overall working group, starting with the mandate.

C: Really, you want to squeeze the most you can out of the money you've invested in your hardware, and by making sure Node runs efficiently and as quickly as possible, we help you do that. We also want to avoid any surprises when you upgrade from one release to the next. The way that we've been working on that so far is to define important use cases, and these are the use cases that we have defined.

C: So: back-end API services, service-oriented architectures, microservice-based applications, generating and serving dynamic web page content, single-page applications, agents and data collectors, web developer tooling, and small scripts. We'll use this set of use cases to try and draw out the benchmarks that we put in place. One thing I'd mention to our end users: if you have a use case that doesn't fit into one of these, it's important for you to let us know, so that we can factor the additional use cases into the work that we do here.

C: It would also be interesting to get feedback (I think we did through some of the surveys), but if you're talking about some of your challenges with us, letting us know which of the use cases you fall into will also help us frame that a little bit. So those are the use cases. We also defined a number of key runtime attributes.

C: These are things we believe are important; they may be applicable to one or more of those use cases, but we think they are generally why people use Node.js and are important to maintain. Things like short start and stop time; footprint being low, both before you've used it and afterwards; not using a lot of CPU while idle; making sure that we can serve a high throughput of HTTP requests; making sure that our GC load and allocation throughput are good; making sure that we don't have large pause times; and keeping things like the install package size, and the size on disk once installed, small, which is one of our advantages over other runtimes.

C: So it's the combination of those use cases and those key runtime attributes that lets us say: okay, what do we want to build out? (And I see I got my link wrong there; I'm going to have to fix that.) That lets us drive the matrix that says: what do we want to run? Are we running it already, and if not, what are we going to add so we can run it? So we have this set of use cases, which basically says, okay, for the use cases...
C: We've got Acme Air (there's an equivalent Acme Air for Java), which simulates an airline transaction system. It's intended to represent a whole end-user application: going out to the database, interacting with the user over the web, so we can track our performance on the key attributes.

C: For that, we have things that check some of the key attributes, like making sure that we can require things quickly: if you're running small scripts, you don't want to spend a lot of time pulling in the pieces you need; you need to be able to do that quickly. And then things like starting up and stopping: we have a start-and-stop benchmark, and again, some of those existing ones map to the key attributes as well.
C
If
we're
going
to
benchmarking
nodejs
org,
you
can
see
these
see
the
charts
that
we
have
these
are
we
run
every
night,
the
benchmarks
that
I
was
mode
across
a
number
of
releases,
and
you
can
see
you
know
we
have
results
in
4,
X,
6,
X,
8,
X
and
canary
as
well
as
master,
and
then
you
know
we
basically
get
an
extra
dot
in
this
chart
published
every
night
and
we
can
use
this
to
track
the
performance
and
say:
ok.
Well,
we've
got
something
we
need
to
investigate.
C
So,
for
example,
this
required
cache
does
a
case
where
it's
it's
decreased
a
little
while
ago,
and
you
know
we
just
realized,
and
now
we
need
to
go
and
say
well,
wait
a
sec.
What's
going
on,
let's
investigate
that
figure
out
whether
it's
actually
planned,
it's
ok
or
whether
it's
it's
a
problem.
We
can
see
that
generally,
you
know.
We've
had
increasing
performance
over
time.
C
If
we
look
at
acne
air
as
well
as
no
DCIS,
which
is
another
benchmark,
sort
of
whole
system,
web
front-end
and
node.js
back-end
a
system
that
we've
been
tracking,
you
know,
and
so
that's
a
good
thing.
It's
sort
of
the
data
confirms
that
we're
making
progress,
which
is
you
know
what
we'd
hope
before.
C
There
is
a
bit
of
a
framework
for
generating
and
plotting
the
charts,
but
I
think
that's
most
of
what
sort
of
interested
in
the
context
and
as
part
of
trying
to
figure
out,
you
know
the
benchmarks.
We
should
work
on
what
kind
of
stuff
we
should.
We
should
pull
into
those
benchmarks.
We
had
a
number
of
questions,
and
that
was
the
genesis
of
the
node
benchmarking
survey,
which
Dan's
gonna
take
us
through
some
of
the
results,
and
you
know
maybe
we'd
like
to
discuss
with
you
to
get.
C
You
know
an
online
discussion
of
some
of
those
questions
versus
a
survey
where
it's
sort
of
a
1:1
direction
as
the
first
sort
of
you
and
user
feedback
discussion
will
have
other
similar
one-
and
you
know
maybe
some
more
sort
of
open-ended
questions
as
well
sessions.
But
this
we
thought
this
would
be
a
good
one
since
the
benchmark.
Just
the
survey
is
closed
and
it
was
a
good
way
to
to
sort
of
add
to
the
data
that
we've
got
out
of
that
survey.
C
Right,
that's
that's
the
next
time
that
we're
gonna
have
an
opportunity
to
again
talk
to
two
people
directly.
We
have
a
session
scheduled
for
the
second
session
of
the
day.
So
yes,
if
you're
in
San
Francisco,
it's
free
registration
for
that
the
community
day
so
join
us
there
and
if
then,
if
you
want
to
stay
for
the
rest
of
the
conference,
yeah.
G: Quick question: exactly what time is the meeting going to start at the conference?

A: It starts at two o'clock and goes to six.

G: Okay, thank you.
A: Great. Well, before we move on to the benchmarking working group data, I wanted to make sure I opened the forum to our end users, to get your feedback directly on benchmarking. Inside the internal chat I've posted a link to benchmarking.nodejs.org; unfortunately, I'm not allowed to post links in the YouTube chat.

A: Is this a resource that you're interested in using? If so, we'd love to hear why. The benchmarking is primarily for us to keep things sane and make sure that we're not regressing, but it's always nice to see that folks are really paying attention to, and utilizing, the resources that we provide.
H: This is Aaron Bartell. Yes, I see this as being advantageous for the IBM i community. I go to a number of conferences every year and talk about Node.js and its performance, and this will be a useful resource to show people. The majority of the people I'm talking to, though, are running Node on IBM i, so I need to dig into this further to see if we can put together a testing solution that runs on the IBM i platform, to see how it's performing there. I'm guessing it's going to be different; it's got a different environment.
C: Our scripts and so forth are all available in the benchmarking repo, to actually run the different benchmarks, but there's some additional logic I could give you from the community CI jobs: how they run together and then publish the results.
G: Yeah, we actually keep internal benchmarks also, but they're not well maintained. Every time there's a major Node release, I would go and try to remember how to run them, and fine-tune them, to get a ballpark idea of how much performance we can get from a major Node upgrade.
A: Just to expand the framing on why Tierney and Michael and I got so excited: a lot of our performance characteristics are dependent on the VM that we bind to, which is Google's Chrome V8, and the V8 team has begun shifting its benchmarking from micro-benchmarks to benchmarking real-world application code. They just introduced the Web Tooling Benchmark suite, incorporating Babel and webpack and, I forget the rest.

A: Taking that real-world experience and applying it: if we can collaborate with you to extract anything that's internal or proprietary out of those benchmarks, then I think this is a natural extension of us beginning to really put real-world Node into our CI/CD and benchmarking infrastructure.
G: One thing that we do, and that we have a lot of trouble with in terms of performance, is actually stylesheet compiling. Internally we use Sass to write our CSS, and then we have to compile it: massive compiles, and some of the deeply nested trees take a huge amount of time. Initially, when we were using a Sass compiler, our build was running upwards of hours because of some nested structures.
A: If I recall correctly, even just moving it forward to a more modern codebase: it's sparsely maintained, and it's a really popular module in a really weird space. I kind of expected that pain to produce more alternatives than it has, but yeah, it's good to hear; we need to give that attention.
F: There is one comment that GoDaddy has had about this Web Tooling Benchmark: if you go and look at how it runs, all of these benchmarks require that whatever they run be synchronous. That's the reason webpack is not on there. So we've actually got our own stuff going, separate from this. We have some build farms that we run webpack in, and node-sass, probably similar to Walmart, so we can probably extend what we have open-sourced; but that's only one part of our issue with these web tooling benchmarks.
C: So I guess it sounds like there are two areas. One, maybe you'll be able to contribute additional benchmarks we might want to be running on the web tooling side; but then also, you have some specific things around memory, in terms of measuring it and so on.

C: That would also be a good thing to bring into a discussion with the benchmarking workgroup, because the only place we measure Node.js memory usage is in Acme Air, sort of the before and after, and what you're describing, I think, is quite a different use case. So it might be a good addition.
A: I would kind of assume that it's in part because we're all sort of rolling our own, rather than leaning on some central, known mechanism, a way that we can say: okay, I kind of expected it to perform this way. I think there's a great opportunity to begin there.

A: Yes, all right. So we went through the survey. I'm going to cherry-pick some of these questions from the survey; the number of applications in production I think we can probably skip over, unless anyone really wants to dive in, because I'd like to get into some of the more technical things.
A: Next week we're going to reconvene as a team with Greg Wallace and go back into the survey details. So, Tierney, let's hold the meta feedback on the survey, and the things we can improve for next year, for that session. Okay, all right: Node in production. I actually want to stop on this one, because I'm really happy with the results, though it does surprise me a little bit, especially regarding production workflows.
G: Okay, yeah. For us, because we're an e-commerce site, things are a little more critical, so we generally have a longer lifetime. Whenever there's a new version, I would actually go and start using it, and we would start using it for development; once a version is promoted to LTS, then we start unit testing on that platform internally, then we go find some less critical apps to start testing it in production, and then, once we're more confident, we start getting more apps to upgrade.
G: Another thing I want to add is that we have a lot of telemetry data and logging, so any time we update anything, like the Node version or any major library, we monitor our logs much more carefully. In the past we've had a few incidents where, after we upgraded to a new version, we noticed more errors, or things were slower, and then we rolled back. So that's one of our safeguards after we deploy a Node update to production, for example.
G: If we notice anything in something that I touched, it makes sense to feed that back, and I would go to the Node.js repo and file a bug. So far, anything that has a significant impact on production would end up being an issue there; if many people out there also run into it, then by the time we notice and get our stuff in line, someone else has already filed the issue on the Node.js repo. But for anything smaller, generally, we don't.
A: Well, I have a couple more comments on telemetry, based on our diagnostics summit, but that's a discussion we can take offline and perhaps have in person at the diagnostics session. I wanted to give Nikolai the floor for feedback on version usage.
I
You
actually,
we
do
the
same
like
a
Walmart
for
our
commerce
customers
and
we
have
a
six
version.
8
version.
We
are
excited
to
have
eight
version
in
production
because
we
increase
it
our
performance,
since
our
applications
have
integration
of
a
lot
of
NPM
packages
and
the
most
time
or
applications
were
spending
in
NPM.
G: We actually go further and fix a version. For example, for our CI/CD we will lock npm to 3.8.4, and we would lock it to that for a while; even if Node releases with a new version of npm, we would actually install the version we've locked internally, because that gives us a more predictable environment in case something goes wrong in CI/CD or a functional test.
A: We definitely had strong corroboration of back-end APIs and microservice APIs, with a sampling of other things, and just wanted to confirm that that is the type of infrastructure; as Michael mentioned, we prioritized benchmarking those things first. So let me ask the negative question: is there something that you're using Node for that isn't captured in what you've seen as the primary use cases?
G: I think these are use cases that cover what we use. One of the things is that more and more of our back-end services are implemented in Node, and I've had quite a few teams constantly asking me: can I use Node to write my services? They need to connect to MongoDB or Oracle DB and things like that, and we've had mixed experiences and results when using Node and trying to connect to the DB from Node directly.
A: Things like that. We also had discussions at the diagnostics summit that touched on pooling, so there are a couple of pain points around pooling, and I wonder if it's pooling and pooling mechanisms that are the pain point, or if it's something computationally intensive. I'll flag that piece.
A: Anything you wanted to add there?

H: What I've seen is back-end API services. Traditionally the IBM i is seen as the legacy platform; it holds a lot of the business's data, and people want to get at that data, so they latch on web services, RESTful or otherwise, and they use Node to do that.
A: Great. Well, maybe I'll capture that, and we can peel back the onion a bit there and see if it is pooling, or if it is abstraction layers like ORMs. The database challenge is definitely an area that I've heard about multiple times.
A: Got it, makes sense. All right, so we're almost at the top of the hour. I want to let anyone who really needs to go have an opportunity for a final word, though I would love to continue for another 15 minutes or so, since we got started a little bit late; I'm happy to stick around. Does anyone have a hard stop here at the top of the hour?
A: Then let's carry on and go through a few more of these questions. So, I guess, the surprisingly mediocre responses here are not surprising from real-world experience, but it's one of those things where we always hope folks have more infrastructure and more visibility, especially when we're sharing the context of something like building benchmarking infrastructure in Node. So, on the infrastructure in place for tracking performance of applications: Joel, you mentioned that you have extensive logging; what about in terms of regressions?
G: The application reports events to the APM process, and the APM process uses a metrics library to calculate the numbers and then reports them into Kafka every minute; those are plotted on a dashboard for the applications. Our cloud environment also provides dashboards to monitor CPU and memory usage, and we monitor those pretty closely. In our APM metrics reporting, one of the things we do is actually log the event loop latency, and we mark it.
A: So you wouldn't necessarily have that register as an event loop blocker, but it could very reasonably be something sitting on the event loop with an unresolved callback. At Voxer, we had a lot in our infrastructure that would basically cut off any stragglers like that, where we'd pull things out that weren't resolving, and the biggest culprit there was long-running database calls, things that were allowed to run on.
F: We do have APMs running on most of our stuff. We've actually had to turn them off in some cases in production, due to the APMs essentially leaking memory while they try to track events. We've worked with some of the vendors, trying to get it fully fixed, and just ended up keeping them turned off and relying on logging for some things, which is unfortunate.
I: This is Nikolai. We are using Splunk for logging and New Relic as our monitoring system. To be ready for the Black Friday days, we decided that if we're adopting any npm package, we need to measure its impact on performance locally. After that, we deploy to a staging environment, and using JMeter and New Relic we check the performance impact there; we also care about the performance impact of New Relic itself, because monitoring systems can make a huge impact on performance.
A: That's important. I did really find the optimization pass of running over the JavaScript before deploy interesting. Bradley, you mentioned the entire webpack infrastructure there; I'd love to hear more about that, Bradley: why it's needed in the stack, and how are you using it?
F: So GoDaddy actually has reseller options, and we have builds that take around a day or so just to generate our color palettes, so we have a few customized servers that do this; we actually run those more often than our webpack builds. The webpack builds occur whenever we generate a new image that we're going to put into dev, test, or prod, and those happen on pretty much everything, not just things that are purely in the front end.
G: When we generate them, we do minification and bundling only for the front-end code; we don't do minification on the Node side. When we write code, well, personally, I don't like writing code that gets Babel-transpiled when I'm running on Node, because debugging that is kind of tricky sometimes, and I hate setting up source maps myself.

G: So generally, when I write code that's only for a Node server, I will use whatever ES6 features that Node version supports. I've been thinking about what we can do about this, actually. I know Facebook has a project, I forget what it's called, that will take your JavaScript, do some kind of static analysis on it, and then eliminate some dead code based on that, but I'm not sure of the status of that project myself.
A: So: single-threaded. That's the big reveal of the outcome of this one. The vast majority of folks shared that they were not impacted by single-threaded workflows. Everybody here is doing a wide variety of workloads, so: are there areas where Node's single-threading is getting in the way today, and where you would love to see improvements?

A: This is a burning question for the platform right now, so it's very interesting. Bradley, I know you've been around, and the team at GoDaddy has a lot of experience, so you're very familiar with service-to-service offloading; I'd just love to get your input on it.
F: We do have some use cases that would benefit greatly from having workers, or something where we could use a SharedArrayBuffer between processes. We spin up a bunch of processes in our build step, and we also have a very, very large computation for post-mortem analysis that we're starting to use; it's heavy on memory usage, and right now we're into hours of compute time on a large process.

F: Even if that were smaller: we don't want to rewrite all these tools that are processing these JavaScript ASTs, or the JavaScript heap snapshot format, into C++ or anything, so we're okay with it taking hours, but it would be nice if it didn't.
C: Okay, so it sounds like maybe I'll start another thread through email, at least to start with, to say that at least a couple of you have other concrete use cases, and that would be valuable input to the effort that's going on over there. So I'll follow up to see if we can get some more; actually documenting those in writing would be great.
G: Hey, I want to add a little bit to the single-threaded thing. The Node.js single thread definitely is something that we're constantly trying to work around at Walmart. Because we are e-commerce, server-side rendering was very important for us: we need SEO, and since we switched to React, React's server-side rendering is synchronous, so that is a big problem for us in all our web apps. In Node.js, one server can generally only do one to two requests per second if it's doing server-side rendering. So that's one thing I've spent significant time on: optimizing React server-side rendering, which is why React 16 was very important for us, because its server-side rendering improved significantly. I've also experimented with things like using some modules out there that offer threads via native code in Node.js, trying to offload the server-side rendering to a separate thread and leave the main thread open to answer some of the smaller, lightweight requests, and experimented with spawning a dedicated process.

G: A lot of the things that I do, for example logging and APM monitoring, I separate out into dedicated processes. For example, our APM reporting is a dedicated, separate Node process, whereas the main apps all just use local message passing to send events; and for the logs, the main app just writes the log to disk, and after that a log-forwarding process sends them to Splunk.
G: Since we migrated to Node at Walmart, it's been a major success for us, because our development and deployment have been significantly simplified, much easier and faster. A new developer comes on board and is able to contribute code on the first day. And obviously the fact that Google and the Node.js working groups are in such active development, and are constantly pumping out so much improvement, is a major advantage for us.

G: For example, the upgrade from Node 6 to Node 8 was a big win in performance. So these sessions are very good, and it's very good to hear the use cases from others; it's awesome to see that some of the patterns we adopted are the same patterns others use.
I: I just want to add that it's really nice to have the surveys; each survey actually feeds the next topics at the summits and conferences. A lot of black boxes, for example the event loop and others, are covered now, and I'm really excited that you started to develop async hooks: this is the best platform for next-generation tooling. For now, yes, I know there are issues with promises; I hope it will get better and you will fix it, and it's nice that you are checking on that now.
C: I guess I'm going to say thanks to everybody who spent the time to come in and talk about this. I think we could spend more time going through even more of the questions. In the end-user feedback group, I know we're going to digest the data and put it in a place where we can publicize it, and then the discussion will start in the benchmarking workgroup.

C: We may then ask whether people here are interested in coming to meet with the larger benchmarking workgroup to do a bit more of a deep dive; I can see that as a next step. And then, more generally, I think this discussion was, in my mind, very valuable and helped lend insight into this issue. I have some ideas for some of the next discussions we can have; I think diagnostics.

C: There was basically a summit in Ottawa here just earlier this week, and I think out of that there are a number of questions that would be valuable to discuss. But I also want to say, for the end users: if you have particular topics that you want to have us do a session on as well, think about that.

C: Add them to the list, and hopefully we can have a regular cadence, once a month or whatever makes sense, to get back together, pick the topic that we next want to talk about, and make this an ongoing relationship to get together and share information.
A: Thank you, Michael. On the user feedback initiative, our role is to bring end users and the project together. In facilitating that, and kind of managing expectations on the detailed stuff around benchmarking: I hope you'll go and watch the benchmarking workgroup, where you can continue diving deep into benchmarking needs and how you can contribute to the benchmarking work that's done in the Node.js project with the benchmarking working group.

A: In terms of the user feedback group, it's our role to connect the dots and be a bi-directional channel between the Node project and yourselves in the end-user community. We're going to take what we've done with benchmarking and replicate it, conceptually, with diagnostics: there's a lot of exciting stuff coming in diagnostics, and a lot of community feedback that we need in order to prioritize the diagnostics work that folks are excited to go work on.
C: Just to add to that: as the user feedback group here, we could choose to do another session on benchmarking if we want to, and we'll have some other ideas, but I think Dan's pitch to get involved in the benchmarking workgroup on the topics of interest is a really good one.

C: That means sharing the use cases that you have, or helping to actually implement the different benchmarks and so on. So yeah, if you want to get involved, that's great; and in terms of the very specific actions, you can either open issues there or contact me to move those forward as well.
A: Fantastic. Well, thanks, everybody, for taking the time, and for staying a little bit longer. That was extremely valuable, and I think it really accomplished everything I'd hoped for from our public session, and the kind of interaction that we're doing for the Node.js project. So thanks, everybody, for taking the time, and we'll hopefully see you soon.