From YouTube: Node.js Benchmarking Meeting
C
We can easily go and throw out the numbers that they generate, and also see how variable the numbers are when running on a particular machine, even a single machine. Or maybe we can even try it on the benchmarking machine in the community, because if the numbers are really variable, then we may have to think again, because we're probably not going to get much out of it. But yeah, those are the main things I've been looking at. Okay.
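For illustration, a minimal sketch of what that variability check could look like, assuming a hypothetical benchmark script that prints a single ops/sec number to stdout: repeat the run and report the coefficient of variation (stddev / mean).

    import { execFileSync } from "node:child_process";

    const BENCH_SCRIPT = "benchmark/my-bench.js"; // hypothetical path
    const RUNS = 10;

    const samples: number[] = [];
    for (let i = 0; i < RUNS; i++) {
      // Run the benchmark with the current Node binary and parse its output.
      const out = execFileSync(process.execPath, [BENCH_SCRIPT], { encoding: "utf8" });
      samples.push(parseFloat(out.trim()));
    }

    const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
    const variance = samples.reduce((a, b) => a + (b - mean) ** 2, 0) / (samples.length - 1);
    const cv = Math.sqrt(variance) / mean;

    // A large coefficient of variation (say, more than a few percent) would
    // suggest the numbers are too noisy to compare directly across runs or machines.
    console.log(`mean=${mean.toFixed(1)} cv=${(cv * 100).toFixed(1)}%`);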
A
Fine, but I did put some effort in. Last time we got together we talked about some of the key use cases, so we landed those, and then the other thing that came out of that was that we should also identify the key runtime attributes. So I put a little bit of work into that, and that's one of the things hopefully we can review today and push forward.
A
Later on, hopefully the next step after that is that we can take those two constructs and put together a matrix that says what our existing benchmarks run and cover, and then where the biggest gap and the next biggest gap are, to try and add some additional benchmarking to cover them.
C
Yeah, sorry, that sounds good. Okay, so.
A
OK, so in terms of the next thing on the agenda, one of the things I want to do is review the DCO. There's been an update, more of a clarification as opposed to a change, in the DCO, which is what governs contributions to the various projects, and we're getting requests that we update that for this working group as well. So really, what I want is for the discussion to say, you know...
A
The next thing was to review issue 40, or pull request 42, I think. So I'm just going to open that up; sorry, taking some more notes and putting that into the mix here, and then I'll throw it up. So what I'm going to open up now is pull request 42.
C
Pasta, I think it was to do with memory footprint, I'm assuming it was; it was related to, he said, maybe after a few cycles of the event loop. I think really the document is to define the different metrics we'll be looking at, rather than how we would go about measuring them, but I mean we can probably reply with something.
C
Or even to condense it, or add an extra layer to sort of condense those, to say memory footprint at key points during the application's lifetime, right? I suppose, yeah, possibly, because different applications and different tests that we're doing may warrant us wanting to measure footprint at different stages.
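As a sketch of what sampling memory footprint at key points in the application's lifetime could look like in practice (the stage names here are only illustrative), process.memoryUsage() can be called at each point of interest:

    // Minimal sketch: report resident set size and heap usage at a named stage.
    function logFootprint(stage: string): void {
      const { rss, heapTotal, heapUsed } = process.memoryUsage();
      const mb = (n: number) => (n / 1024 / 1024).toFixed(1);
      console.log(`${stage}: rss=${mb(rss)}MB heapTotal=${mb(heapTotal)}MB heapUsed=${mb(heapUsed)}MB`);
    }

    logFootprint("at startup");

    // ... the application sets itself up and starts doing work here ...

    // Sample once more after a couple of extra turns of the event loop,
    // along the lines of the suggestion above.
    setImmediate(() =>
      setImmediate(() => logFootprint("after a few event loop cycles"))
    );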
C
So thinking about it, by load are we not meaning after we've driven load? So for example, if we were using Acme Air as the example, at startup would be once we've actually started Acme Air, and after load would be after we've done a throughput run and we say, right, what's the footprint usage now, after it's been serving however many requests a second, right?
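A self-contained sketch of those two measurement points: footprint once the server is up, and again after it has served a batch of requests. A real run would presumably use a full application such as Acme Air with an external load driver; the trivial server and in-process request loop here are only stand-ins.

    import http from "node:http";
    import type { AddressInfo } from "node:net";

    const rssMB = () => (process.memoryUsage().rss / 1024 / 1024).toFixed(1);

    // Stand-in for the application under test.
    const server = http.createServer((req, res) => {
      res.end("ok");
    });

    server.listen(0, async () => {
      const { port } = server.address() as AddressInfo;
      console.log(`at startup: rss=${rssMB()}MB`);

      // Crude stand-in for a throughput run: 10,000 sequential requests.
      for (let i = 0; i < 10_000; i++) {
        await new Promise<void>((resolve) => {
          http.get({ port, path: "/" }, (res) => {
            res.resume();
            res.on("end", () => resolve());
          });
        });
      }

      console.log(`after load: rss=${rssMB()}MB`);
      server.close();
    });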
A
So I guess this one, there was still some discussion; I haven't closed it because of that, and we are tracking startup speed, but there's some discussion on tracking different versions of it. I haven't had any time to make progress on looking at a different version or whatever to see if that makes sense. Regressions in before crack adjustment: Gareth, I think you were looking at that one, yeah?
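On the startup speed side, a minimal sketch of one way it could be compared across versions is to time how long launching node -e "" takes for different binaries (the alternative binary path below is hypothetical):

    import { execFileSync } from "node:child_process";

    // Average wall-clock time of launching a Node binary with an empty script.
    function timeStartup(nodeBinary: string, runs = 20): number {
      let totalMs = 0;
      for (let i = 0; i < runs; i++) {
        const start = process.hrtime.bigint();
        execFileSync(nodeBinary, ["-e", ""]);
        totalMs += Number(process.hrtime.bigint() - start) / 1e6;
      }
      return totalMs / runs;
    }

    console.log(`current binary: ${timeStartup(process.execPath).toFixed(1)} ms avg`);
    // To compare another version, point at a different installed binary, e.g.:
    // console.log(`other version: ${timeStartup("/path/to/other/node").toFixed(1)} ms avg`);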
A
Seventeen, events benchmark tests. I think this is one where some additional things were added to the regular benchmarks, or actually had landed in the benchmarks, but one part was about adding some new ones. So again, I think we can look at that in the context of what you were saying you were doing, and looking at the overall benchmarks to see if there's anything else we can pull out. Yeah. Benchmarks for V8: this is around giving them some benchmarks.
A
So I guess this is, you know, I'm working with Stefan and Ian on the API, and we certainly will be looking at performance as part of that, so maybe some benchmarks will come out of that; I'll leave that open for now. Micro benchmarks and candidate benchmarks: those are again to identify additional benchmarks we may want to be running. My current plan and goal is to work through those.
A
Once we have the use cases and once we have the key attributes, then put together a table that says what we have that covers each of those, and then try and work through that. So I think that covers the last two items. I'll use that as an opportunity, for anybody who's listening in or watching this later, to say we are still looking for people to participate in this.
A
That means looking at those use cases, looking at those key attributes, and then coming up with additional benchmarks, either finding ones that exist and suggesting we use them, or building the benchmarks. So if anybody's interested, let us know through an issue, or join the next meeting; we're definitely looking for additional participants to help push this forward.
A
Okay, and I'll set the next meeting for three to four weeks from now. We didn't get a great turnout in terms of the last Doodle, so I don't know; maybe I'll try to reach out to some of the people just to see if it's a matter of the timing or what, and see if we can round up some more people for the next meeting. But otherwise, Gareth, is there anything else you think we should talk about?
A
Well, I'll call that it for this meeting. Again, for people watching, I will call out that we do want to be able to basically create that table: here are our use cases, here are our key attributes, and then fill in benchmarks to cover those, so that we can identify regressions.