From YouTube: Node.js Benchmarking WG meeting
B
So the issue is: what do we do? How do we benchmark tools and libraries like ESM or other things outside the web tooling benchmark? One goal of the web tooling benchmark is to also run in a browser and, ideally, in all shells as well, so that you have a really easy way to compare how different engines perform on it, even engines that don't run in Node, like SpiderMonkey or JSC. So things like ESM, or other things that are Node-specific, don't really fit into it.
Also, we cannot include webpack fully, because webpack is completely based on a single async API, and the shells don't implement the microtask queue consistently. Specifically, ch doesn't even pump the microtask queue before it returns, so webpack in ChakraCore just doesn't work correctly. It's the same for, what was the other one, Rollup: Rollup also only offers a single promise-based API, so async is really the only way to test it.
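The microtask point can be seen in a few lines. A promise-based API like webpack's or Rollup's only ever delivers its result through the microtask queue, so a host that never pumps that queue never sees the result. A minimal sketch, where `fakeCompile` is a stand-in and not either tool's real API:

```javascript
// fakeCompile stands in for a promise-only API such as webpack's or Rollup's;
// it resolves its result on the microtask queue.
function fakeCompile() {
  return Promise.resolve({ assets: ['bundle.js'] });
}

let delivered = false;
fakeCompile().then((stats) => {
  delivered = true;
  console.log('compiled', stats.assets.length, 'asset(s)');
});

// The continuation above can only run once the host drains the microtask
// queue. Node does this between jobs; a shell that returns without pumping
// the queue would never deliver the result at all.
console.log('delivered synchronously?', delivered);
```

Under Node the second log line prints first with `false`, and the compile result only arrives after the current job ends, which is exactly the checkpoint an inconsistent shell may skip.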
B
Matias also brought up another point, which is that something like ESM is not technically web tooling. Actually, it's not web tooling at all, because it's an extension for Node. So I think we are just missing something in between that also allows, or that also covers, things that are specific to Node.
A
That makes sense to me. I mean, with most of our other benchmarks, like the scripts or the simple ones, in some cases the code is just there; in other cases they're scripts that then pull in the other pieces. But yeah, if we write one ourselves, that seems like the natural place to me as well. I guess the question is: do we have somebody who is interested and has time to start working on that kind of benchmark?
B
I'm not sure what exactly we need. We do have some cycles, though not a lot of cycles, but if it's just creating the script, I mean, we could take what John already has: the pull request for the web tooling benchmark already has a runner for ESM. Okay. And I think you said you had a benchmark already, and even if that runs for long, that's also fine. Yeah.
A
Okay, so basically, could we even do something like clone the framework that's running the existing web tooling benchmark, and move that, plus the pieces that John already has, into the benchmarking repo? Then we have a new benchmark covering those particular use cases, and a place to add more that fit into that category.
B
The framework for the web tooling benchmark is benchmark.js, the test-case framework, so that sounds easy. Yeah, I guess there's not a lot that the web tooling benchmark wraps around it. The main thing that we add is the webpack configuration to generate the individual bundles for browsers and shells, and we won't need that here. Okay, but I think we need to spawn different Node processes, though, since if you load ESM once, then it will be in there forever and skew the other results in the same process.
D
You can load it, depending on how you invoke it, as its own thing. It doesn't hijack the central loader, but that doesn't mean that other things won't. So, for example, babel-register does hijack the central loader, and the v8-compile-cache package hijacks the global loader to do so. Things like that you'll want to isolate, so it wouldn't be bad to isolate, because other projects do tap into that global.
A
There's an example where there's a JSON file to hook it up to the infrastructure, and just a shell script. Basically, what we want to do is run that shell script and have it print to standard out the metrics you want to report, and then we can easily add it to the existing jobs.
It basically scrapes the output and says: okay, I'm going to extract whatever the number is and post it to the database, and then the JSON tells you how to show that, what versions to show it for, and things like that. So once we add that as well, it can automatically show up in the charts. Okay, cool. So, yeah.
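A hedged sketch of the reporting side, assuming the scraper just wants one `name key=value...: number` line per result on stdout. The exact line format the benchmarking infrastructure parses may differ, and the benchmark name below is a made-up placeholder:

```javascript
// Build one scrape-friendly result line per benchmark run:
// "<benchmark name> <key=value ...>: <rate>". A companion JSON file (not
// shown; its real schema lives in the benchmarking repo) would then tell
// the charts how to label and group these numbers.
function formatResult(name, config, rate) {
  const conf = Object.entries(config)
    .map(([key, value]) => `${key}=${value}`)
    .join(' ');
  return `${name} ${conf}: ${rate}`;
}

const line = formatResult('esm/import-chain.js', { modules: 1000 }, 4321.5);
console.log(line);
```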
If you run into any trouble, let us know; someone else has also put one together, so the two of us should be able to help you with things, but I think it's pretty straightforward. All right, thank you. Okay, any other discussion on that front? That sounds like a great outcome; we'll have another benchmark covering more stuff.
A
What I would ask is if you could take a look at this page, which basically covers the key use cases that we've identified and the benchmarks we have running for each of them. Okay? And think about: does this fit into one of those existing ones, or have we missed a use case that we should add an entry for? In the end, it'd be good to say: okay, add it.
G
The web tooling one also shows this, though, but that's a different one. This, you mean? That's working, yeah. Okay, so post versus pre is, I think, lower, or close to it. If you look at this, before the load they are lower. This is before load, or even the initial before load; this was higher. Wow, after is definitely higher for sure, 41 to 69 K, and it's consistent on the CIs as well. So here is what we think: post versus pre is lower or close to it.
A
How do I get out of full screen on Windows?
A
I guess, is it... oh, it's already assigned to you. Okay, so I think that's everything that was on our agenda. So let's see if there are any questions on the YouTube stream, from the people who are watching. I see that there are five people watching, but I don't see any comments. Is there a chat section I need to find? I think they usually show up.