From YouTube: Node.js Diagnostics WG - 2018-05-02
B
Basically, the idea is to have a way for people from outside the working group to be able to know what we are working on, and to know who is trying to move each step forward. It's very similar to what we have on the TSC, I think. Maybe someone from the TSC could comment about how it works; that would be great. Maybe Li.
C
So, you know, in the Node project overall there's way too much stuff happening. The idea behind the TSC's strategic initiatives, and that's my interpretation, I can't speak on behalf of the whole TSC, is that these are bigger items that need to be tracked at the project level, where we give the TSC an update periodically. So that's how it works. It's not really something that's used for every topic of discussion. So the difference I see between this working group and what the TSC is trying to do: I think in our working group everything we're working on, every issue, I would argue, is an important thing, because our objective is whatever needs to be done from a diagnostics perspective. So what I'm trying to figure out is what the difference would be between an issue on a repo and a strategic initiative. It's less clear to me for this working group. There are some things that require sustained effort over a longer time, so maybe that's a difference, but I don't know that calling something a strategic initiative would make a lot of sense other than it being an issue that we care about. If something is not important, I think in this working group it would just not be prominent, or would not get worked on.
A
You
know
maybe
the
terminology
between
strategic
initiative
and
and
issue.
You
know
I
do
think.
Maybe
in
some
of
these
things,
I
knew
that
when
we
had
the
the
meeting
in
February
like
there
were
a
handful
of
names
assigned
to
some
of
these
things,
I
don't
recall
if
like
where
that's
actually
written
down
somewhere
or
if
necessarily
any
of
these
issues
have
specific
owners
assigned
to
them.
I
think
the
the
meta
point
that
I
read
out
of
it
is
like.
Do
we
have
some
way
of
saying?
C
Yeah, I agree with you, I think it's the terminology. Maybe we are getting stuck on the terminology between champion and strategic initiative. We've been struggling to find people to take on action items, and that was a challenge, and I think maybe this is a way to make it more proactive.
A
You know, we can have a table like this, with a name and a link to the issue, and if something isn't getting traction, or it's falling off, or nobody's dealing with it and it's just not important, we can eventually move it off. I don't think that having something in this list necessarily claims that it is important. Maybe it's something where there's an agreement that it should be important, not necessarily that it is. But I'm happy with whatever people think is useful; I don't have too strong an opinion. I think the simplest thing is just to go through these different issues and assign them to individuals, and then there's less tracking and less stuff to get out of date. But I don't know if you had any thoughts or comments on it. Yeah.
C
But if the feedback is that we are not good at keeping issues up to date and at having clear owners for those issues, then what we'd basically be doing is bookkeeping: moving issues along and adding more lists, right? So now there are more lists that we have to keep up to date, which we probably wouldn't. I think the feedback is very valid, but there are a lot of issues that we are probably not doing a good job of keeping up to date, and consolidating them into tracking issues, I think, is maybe the way forward.
A
Agreed. So do we just want to say: hey, we should go ahead and have tracking issues for these things where they don't already exist, we should make sure that sub-issues are appropriately linked underneath them, and once that's done we can, where appropriate, assign names as owners of the top-level issues. Is that a fair path forward? Yeah.
A
So then we need to make sure that issues are cleaned up and that there are tracking issues for the broader-level things, and where there's not a parent or a clear owner we can have a discussion about that, I'd say at the next meeting. Does that sound fair? Yeah, okay, great. And then, is there some name or something we want to use for these top-level issues? Just put something like square bracket, tracking, square bracket, some sort of...
A
Okay, a label is good; we can have a label, I'll create such a thing. Okay, I guess the other thing there is: if folks can go in and, without assigning stuff, take some of these, like Ally volunteering for the async hooks stuff, or any of the async context stuff, and try to organize those things into the appropriate hierarchy, with the top-level tracking issues and links to the other related things underneath them. Sound fair?
A
So, okay, number 168 again. If people have suggestions or ideas about something that they want to go and deep-dive on, comment in here and then we'll curate it and try to schedule some of these. I think we've got one already, and I think we probably need to schedule it, with this async context stuff that we've been talking about.
A
Okay, async hooks, the stable API tracking issue, number 124. I think there were some actions from the last one that Polly volunteered for, about trying to define some exit criteria, or what we think our exit criteria for this are. I don't know if you made any progress on that, or if there's been any other progress on this.
C
So there's some progress. I don't have an exit criteria list written down, but I think the first step was trying to figure out how promise hooks play into this. I think there is an update on that front: there's a document that Young put together that talks about some of the options we have to deal with the issues with promise hooks. Is this a good time to go through that doc? Yeah.
H
A question: I know we've discussed performance issues being a blocker, and I thought there was some discussion last time about having some measurements. Do we actually have a quantification of what this performance impact is?
C
Some background on the fast paths we're talking about: V8 has been working on improving promise performance in general, and the challenge is that when promise hooks are enabled, those fast paths do not apply anymore. It has to take the slow path, which means that promises will become faster and faster over time, except when you have async hooks enabled, in which case the performance is not going to be great.
G
So those are the problems, and the proposals that we have try to address them, so I guess I will start with the simplest one. The simple one would be: for promises that are either not observable, or at least created in a way that suggests observing them is not really interesting, like the throwaway promises that I mentioned, it might not be necessary to actually fire any promise hook. Yeah.
C
Another way of looking at this might be: I think we should debate the behavior we want for promise hooks, and we should be conservative as a starting point. Firing hooks for promises that we think are unobservable gives the user more information, and if that's the semantics you expose, you tie yourself to those semantics; we always have to support those semantics going forward, right? So the question is: what is the use case for people to observe unobservable promises through promise hooks?
A
Yeah, I mean, I guess the question is: is it possible for somebody to end up with incorrect or lost context because these non-observable promises are actually observed in some edge cases and the hooks aren't there, right? In that case what you've got is a break in correctness for what we're ultimately trying to achieve, which is understanding async context, right? Yeah.
G
But let's move over to the next one. The second proposal that we had was that instead of defining promise hooks as C++ callbacks through the V8 API, they should instead be defined as JavaScript callbacks. That way V8 doesn't actually have to call into C++ and then, from C++, Node would call back into whatever JavaScript there is.
G
He
said.
Ëi
would
just
know
whether
there
is
anything
installed
and
if
there's
anything
installed,
call
it
and
if
there's
nothing
installed,
don't
call
it,
and
that
also
opens
up
the
opportunity
to
implement
inlining
so
that
we
can
directly
inline
those
JavaScript
callbacks
into
whatever
is
firing
that
promise
event.
G
But
it
also
changes
is
that
promise
which
will
not
be
installed
on
the
isolate
anymore,
but
on
the
in
native
context,
which
means
that
promises
created
from
different
native
context
will
not
cost
like
the
promise
or
that
we
installed
to
fire,
which
is
kinda
important?
If
you
consider
the
use
case
of
electron.
C
That
one
is
just
I
think
that
one
is
really
really
unconditional
in
my
opinion,
but
another
change
I
think
I
think
it
would
be
better
if
the
if
the
wave
the
hook
specified
with
JavaScript
residentsleeper
first
I
think
that's
the
primary
the
JavaScript
is.
The
primary
use
is
facing
hooks,
so
I
think
that's
the
one
we
should
optimize
for
rather
than
see
purpose.
Yeah.
G
So, should we move over to the last item? Sure, yeah. The last item is, I guess, the most controversial one: we proposed to remove the destroy hook, so please hear me out. We figured that there might be only two use cases, or that we should probably separate two use cases. One use case is that you use the destroy hook to clean up the metadata that you keep while tracking information for the async context.
G
In that case you don't need to keep the mapping between the ID and the resource anymore; instead you store the metadata directly on the resource, or use a WeakMap to map the resource to whatever metadata you want to keep. This way you don't even have to wait for the destroy hook anymore; the WeakMap entry, or the metadata stored on the resource itself, just gets garbage-collected naturally.
H
So, can I say "yes, please" and ask a question? I can imagine some cases. Suppose I was writing a diagnostics tool and I wanted to look for unhandled rejections and help you figure out exactly where you were creating the promises that led to them, right? In that case I feel like you do need the destroy hook, because that's really what you want to know about. So, yes, I'm coming to that, I'm coming to that.
G
Coming to that: the use cases I just talked about are usually ones where you want to be enabling this during production, and in those cases having performance regressions would be very bad, right? If you're an APM vendor, you want to be able to turn on async hooks all the time, but not take the performance hit that you would get from offering the full power of the destroy hook. For the case that you just described, where you actually want a finalizer, and not only to emulate weak semantics, we could, specifically for those cases, use weak references. Weak references are something that is going through the stages at TC39, and until that's specified and implemented in V8, we could use a polyfill based on the weak global handles that we already use.
G
Well, the lifetime hook would not even be limited to promises anymore. Okay, yeah, I'm just saying with promises, but yeah, that would be awesome. This way, if you really want finalizers, you pay the cost; but if you don't actually want them, and only want to make sure that your metadata is weakly kept alive, then you don't have to pay the cost.
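[Editor's note] At the time of this meeting, weak references were still working their way through TC39; `WeakRef` and `FinalizationRegistry` have since shipped. A sketch of the "pay only if you want finalizers" idea, where only explicitly registered resources carry finalization cost:

```javascript
// Opt-in finalizers per resource, instead of a mandatory destroy hook
// firing for every async resource.
const registry = new FinalizationRegistry((heldValue) => {
  // Runs some time after the registered resource is garbage-collected.
  console.log('resource finalized:', heldValue);
});

function trackWithFinalizer(resource, label) {
  // Only resources explicitly registered here pay the finalization cost.
  registry.register(resource, label);
}

trackWithFinalizer({ name: 'socket-1' }, 'socket-1');
```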
G
So the idea here is that basically you make it monkey-patchable, so that await would call the custom promise constructor, and you would then monkey-patch everything in there to hook into everything. That would mean that everything stays in JavaScript; but then, like I said, the destroy case is not addressed by this. The last option is to revive the zones proposal, but that would be slow, and it also doesn't address destroy either. So...
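[Editor's note] A hedged sketch of the "stay in JavaScript" monkey-patching idea. Note that in V8 today, await does not consult the patched global `Promise`, which is part of what the proposal would have to change; this only shows the user-land half:

```javascript
// Wrap the global Promise so every explicit construction (and every
// derived promise created via .then) passes through a hook point.
const NativePromise = global.Promise;
let constructed = 0;

class ObservedPromise extends NativePromise {
  constructor(executor) {
    super(executor);
    constructed++; // init-style hook point, entirely in JavaScript
  }
}

global.Promise = ObservedPromise;

new Promise((resolve) => resolve(1)).then(() => {
  console.log('promises observed:', constructed);
  global.Promise = NativePromise; // restore the native constructor
});
```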
C
What
we
are
proposing
is
that
we
currencies
on
a
better
way
to
to
get
the
final
either
call
back
rather
than
a
destroy
hook
directly.
That's
one
of
the
proposals
here
as
well,
so
I
think
the
same
would
apply
in
in
zones.
So
if
you
want
a
finalizer
for
a
promise
resource,
you
could
do
it
using
a
big
difference
right.
A
But
I
guess
the
so
here's
the
thing
that
sort
of
occurs
to
me
right
is
that
there
is
an
API
which
Maps
lifecycle,
events
around
things
right
and
that's
being
used
to
sort
of
track,
async
context,
right
and
I
kind
of
feel,
like
that's,
maybe
the
wrong
thing
right
and
I
kind
of
feel
like
when
I
see
like
hey.
You
know
we
should,
you
know,
start
doing.
G
Yeah, so one thing that the zones proposal addresses is that the async context is not represented as an integer ID. Using an integer ID to weakly hold onto something is just impossible, and that's why we're proposing to also expose the resource on every hook, not just the init hook. That way you can use the resource as a weak key, because using the async ID as a weak key is just not possible, which is why you currently need the destroy hook.
E
So one thing that is missing from this discussion is why we have an async ID in the first place. I'm not sure; that's something it would be good to ask Trevor about, and to ask Trevor to come by and get us that information. To me it has seemed very weird every time I work with this.
H
My only concern would be guaranteeing that you get a fresh one. Because now you need to not just generate a fresh integer ID; you need to make sure the underlying resource pointers are different. You can't pool two async chains on the same underlying resource if you're assuming that sort of model, and because the spec for async hooks is sort of implementation-defined, it's not clear to me that that's always the case. So that would be a minor concern I have.
A
Like
what
is
the
identity
post-mortem
right
and
how
do
you
ensure
the
identity
post-mortem?
Okay?
Now,
since
you
have
a
unique
ID
and
you're
able
to
say
like
okay,
I've
got
a
bunch
of
unique
IDs
and
after
the
process
is
done,
you
could
construct
something
interesting
from
those
will
be
there.
What's
that.
E
One of the main reasons it's done this way was so that the tracking, the looking up of the resources and so on, could be done in JavaScript land instead of C++ land, or something like that, probably. But yeah, anyway: yes, it can be done as a semver-minor. Really, we would need to do what every implementer of async hooks does, which is maintain this map and get the data out of it. It's not very tricky; it's simple to implement right now.
C
The argument from Young is that it would be even easier to do this going forward, because you wouldn't even need the destroy hook anymore: all you need to do is keep a WeakMap. Now you have the resource available, so you use the resource as the key in a WeakMap and you're done, and that would be more efficient.
E
Basically, that was the whole point of the destroy hook. In the context of APMs and doing all the other things, the destroy hook is needed for various reasons. One is keeping track of destroying the context, okay, literally. So if I just need it to destroy the context, and if I can attach the context to my resource using a private symbol, then I basically have no need for a destroy hook at all.
E
Exactly, okay. If it's needed, that's not a problem. The major issue is: if for some analytics reason you want to know when a resource is destroyed, then you might want the destroy hook. The destroy hook tells you when a socket is destroyed, which is actually very important, versus a promise; so it's not just promises. That thing also provides me information about when a socket is destroyed.
E
But this gives us, or gives them, a huge performance overhead. So if we make it incremental, so that we keep the current set of APIs and then enable, let's say, a couple of different destroy hooks, where one of them doesn't cover promises, okay, let's say, then the destroy cost...
E
Then
we
don't
have
the
destroy
cost,
which
is
promised
would
cost
for
just
maintaining
this
try,
then
this
can
be
done
and
if
they
want
to
use
the
other
one,
they
know
what
they
are
going
to
pay
for.
Okay.
So
so
can
we
look
at
this
a
different
angle,
so
no
I
just
want
to
say
one
thing:
I
would
really
like
to
get
everybody
at
the
demo
that
we
have
been
worked
on
and
we
have
10
minutes
towards
the
end.
So.
G
I'm,
just
one
last
thing
that
I
want
to
get
get
out
so
far
we
have
been
looking
only
at
the
promise
performance
right.
Do
we
have
any
data
on
you
know
micro
benchmarks,
that
don't
stress
promise
but
like
something
like
TCP
connection
and
figure
out
whether
I
think
hooks
regresses
performance
there
significantly,
because
there
you
have
the
same
issue.
E
I
have
those
numbers
I'll
need
to
change
them
back,
have
been
on
vacation
for
the
last
two
weeks
and
I
have
no
idea
what
the
status
I
will
I
will
know
something
more
tomorrow.
Okay,
I
will
just
get
my
two
colleagues
David
and
ruben
works
on
this,
and
I'm
completely.
I
just
came
back
from
vacation
today.
So
I
am
completely.
These
should
have
been
done,
should
be
done,
but
have
no
idea
what
goes
on.
Okay,.
A
Is that okay with everyone? I think there's some other stuff in here; there's a bug, and there was some trace events stuff, which I don't know that there are any updates on. I'll give one heads-up: we've sort of been working on this formalization stuff for async context, and I haven't seen a lot of feedback from anyone on whether people think that's the right track, the right way to think about things, the right set of terminology.
A
What
I
think
we're
gonna
try
and
do
in
the
next
few
weeks,
maybe
four
weeks
or
some
other
things
going
on,
but
is
like
take
that
stuff,
get
it
cleaned
up
and
then
open
a
PR
and
the
diagnostics
working
group
just
to
sort
of
force
the
issue
and
if
people
are
like
no,
you
guys
are
crazy.
That's
the
wrong
way
to
think
about
it.
Then
you
know
that's
people's
opportunity
to
voice
their
concerns.
I
want
to
give
folks
a
heads
up
there.
A
You
know
right
now,
that's
in
a
private
repo
of
repo
for
well,
not
private,
but
it's
in
my
personal
github
thing
so
and
feel
free
to
make
comments
or
PRS
on
that.
K
I have a couple of slides just to give you a little bit of context. We've been working on a new tool called Bubbleprof, with Matteo and a couple of other people. It's a tool that tries to profile async time in your Node.js application. It's a heavy user of async hooks and all the new instrumentation goodies in Node, and this working group has been a huge part of that. It's part of the Clinic tool chain.
K
If
you've
heard
about
that,
we
also
do
a
lot
of
other
instrumentation
and
profiling
of
you
know:
flame
graphing
and
CPU
profiling.
Stuff,
like
that
all
right,
but
it's
an
upcoming
feature.
So
it's
in
our
development
branch
right
now
that
I
would
love
to
give
people
in
this
working
group.
Access
to
it
should
get
some
more
immediate
feedback
before
we
go
with
a
full
public
release.
K
Sure
we
can
talk
about
that
something,
but
basically
it's
a
it's
a
tool
that
that
collects
eighteen
times
been
using
eggs
and
using
is
enriched
and
also
trade
sermons,
and
that's
a
ton
of
data,
as
you
probably
know,
on
a
real
application.
So
it
applies
a
couple
of
mistakes
to
group
this
time
into
a
series
of
bubbles
as
a
visualization,
which
is
why
it's
called
pop
approach
or
profiling.
K
This
is
a
theoretical.
Basically,
people
are
down
to
whenever
your
program
is
doing
something
where
it's
moving
from
your
code,
to
note
or
or
from
your
code,
to
a
module
or
vice
versa.
I
can't
like
defines
a
boundary
of
your
bubble,
so
that's
like
the
quick
super,
quick
crash
course
intro,
but
I'd
love
to
just
show
a
demo
of
it
because
it
explains
is
way
better
than
I
actually
do
so
and
switch
my
terminal
here.
So,
like
I
said
you
can
install
on
NPM.
K
...that just waits a couple of seconds before ending every request, so it's basically a server that mimics latency. We run it through the tool, we call it with bubbleprof, and then we set up a hook here: it detects when the TCP server is listening, using a hook, and then you can run a benchmark tool against it.
K
So basically what this does is: it spins up a server and starts pounding it with a bunch of requests to produce a bunch of data, again just instrumenting it using the hooks in the background; then it's analyzing the data and opening up... here's the interesting part. This is what...
K
What
our
visualization
of
the
a
single
data
looks
like
so
like
I
said
it's
called
mobile
port
because
it
tries
to
to
group
data
in
these
bubbles,
so
each
of
these
bubbles
represent
guessing
time
spent,
and
these
lines
from
the
bubbles
represent,
ladies
into
the
next
bubble.
So
you
can
kind
of
see
in
this
example
here
that
we
have
a
big
bubble.
That's
just
called
HTTP
connection,
and
if
we
take
down
through
this,
we'll
discover
that
it's
just
a
bunch
of
no
course
does.
This
is
what
these
lines
represent.
K
So
it's
color
here
represents
something
different.
The
green
color
here
represents
network.
The
gray
around
it
represents
there's
no
core.
So
a
lot
of
time
here
spend
and
no
core
just
handing
them
a
question
which
makes
sense
because
our
example
is
not
doing
doing
much.
Then
there's
a
long
line.
That's
blue!
Because
it's
it's
used
to
go
down
to
a
timeout
because
there's
a
long
latency
down
to
a
timeout.
This
again.
K
What
we
expect
there's
a
tiny
bubble
in
here,
because
nothing
much
acing
as
having
in
that
timeout,
and
we
can
kind
of
take
it
in
here
and
see
that's
on
line
5,
but
you're
not
in
our
code,
is
at
timeout
here.
So
that's
likely
using
further
discussion
earlier
but
stuff
to
patina
and
then
after
that,
timeout
there's
a
bit
more
stuff
happening
because
it's
triggered
interests
in
prison.
So
it's
like
it's
a
very
easy
way
to
just
take
a
take
a
program,
get
a
ton
of
async
data
out
of
it
and
render
these
nice
simple.
K
It's
simple
in
a
good
way:
the
service
stations
to
kind
of
help
you
identified
like
so
I,
Paul
Mike.
It
would
just
be
like
I
wanna
reduce
these
ladies
designs.
I
have
a
couple
more
exams,
a
lot
to
show
you
to
kind
of
like
get
the
point
across,
because
if
you
do
something
like
this,
where
this
is
a
kind
of
silly
program
just
just
reading
a
file
twice,
but
it's
basically
doing
two.
Eight
sync
operations
and
series
went
for
two
in
118
operation:
I'm
gonna,
there's
an
immediate
physical
readiness,
that's
a
child
of
that.
K
So actually, coming back to the discussion before: we were also super interested in the use case where you would just run this in production all the time and have these profiles produced, assuming there's close to no performance impact from the data collection. That would be really powerful, I think. Yes. So this is the series server.
K
So
it
is
starting
to
look
much
more
like
a
real
application
where
you
have
a
smaller
problem
now,
because
there's
actually
stuff
happening
outside
node
core
there's
some
small
bubbles
admin
here.
If
we
click
in
expand
to
the
file
system
which
calls
in
HUD
smuggles
are
like
depending
on
each
other,
then
this
boils
down
to
down
here.
There's
the
HTTP
and
request
to
get
in
here.
K
That's
tiny,
because
the
end
is
now
just
closing
the
TCP
socket,
basically
because
we're
doing
some
rights
before
and
then
there's
a
fork
out
here,
because
notice
is
doing
a
bunch
of
timers,
dear
old
unrolls,
which
is
kind
of
like
forked
in
my
graph.
So
you
can
kind
of
see
here
that
all
your
async
operations
is
dependent
on
each
other
and
again,
the
way
to
improve
this
application
is
to
shrink
these
lines,
because
all
the
that
makes
bubbles
kind
of
tiny.
K
This one is super interesting, because it kind of really encapsulates what's going on in the program. You can see here we have our HTTP bubble; it's still pretty big, because there's probably not much happening in that program. Then we have three immediates that form this pattern because it's happening in series, where each of these files is being read, and if we dig down we'll see that these are the lines where the generator runs, and which part is a promise versus callbacks and Node core operations. Then we get this interesting profile where there's a big promise on the side, which is, at least I guess, the await promises, where those are getting resolved, and it's ending that request or sending data. So this is kind of how a parallel version will look, I found.
K
So, basically, the tool is really good at giving you this kind of underlying information. It's also really good at finding bugs, where you have these kinds of weird things happening on the side, where you're like: what's that? It's not really serving any purpose in terms of resolving my requests.
K
We
actually
use
this
on
some
no
core
data
and
we
noticed
that
we
got
a
ton
of
mixtec
bottles
and
we
use
that
information
to
kind
of
find
out
that
we
could
go
in
and
optimize
next
take
the
node
core,
which
I
did
color
months
ago
and
gotta,
a
nice
25%
performance
boost
out
of
that
so
and
I
was
just
by
looking
at
data
and
being
like.
That's
a
bunch
of
you
know,
big
Mexican
bubbles
here,
that's
interesting
I
would
expect
that.
K
So
that's,
basically
the
tool
so
I'd
love
to
if
anybody's
interested,
give
people
access
here.
So
you
can
like
play
around
with
it,
give
us
feedback
on
how
how
it
plays
in
real
data.
If
you
want
we're
pretty
excited
about
it,
it's
it
made
it
just
easy
folks,
twice
immense
and
a
ton
of
statistics
to.
K
So
what
we're
doing,
let
me
do
the
nonprofit
one,
because
it's
a
little
bit
easier
to
reason
about
so
basically,
what
I
tried
to
say
in
a
heuristic
is
that
every
time
a
program
is
moving
from
so
without
these
free
domains.
That's
like
user
code,
which
is
code
you're,
writing,
that's
not
a
node
module
and
then
we
have
no
core
code
and
we
have
NPM
modules
and
every
time
you
do
an
async
operation.
So
in
like
classic
Beks
every
time,
there's
an
invitation.
That's
moving
your
code
between
one
of
those
domains.
K
So
here,
like
you
time
we're
moving
from
years
of
encode
to
nor
code
because
the
timeout
is
in
milk
or
so
if
this
was
a
database,
that
would
have
the
same
every
time
you're
moving
between
those
boundaries,
it
kind
of
takes
everything
in
that
call
second
groups
in
a
bubble.
So,
for
example,
if
you
were
doing
to
set
pairs
timeouts
in
parallel,
that
would
go
in
the
same
bubble.
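[Editor's note] The grouping rule just described, as code. This is an illustrative sketch, not Bubbleprof's implementation: two parallel timeouts are created from the same user-code call stack crossing into Node core, so they would be grouped into one bubble, while chained timeouts cross the boundary twice in sequence, giving two bubbles joined by a line:

```javascript
// Parallel timeouts: both created from the same call stack, one bubble.
function parallel() {
  setTimeout(() => console.log('a done'), 50);
  setTimeout(() => console.log('b done'), 50); // same bubble as above
}

// Chained timeouts: the second crossing happens from a new stack,
// so it would start a new bubble connected by a latency line.
function serial() {
  setTimeout(() => {
    setTimeout(() => console.log('serial done'), 50);
  }, 50);
}

parallel();
serial();
```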
K
So
interesting
thing
about
that
kind
of
heuristic.
Is
that
if
you
have
a
let's
say,
you're
calling
out
to
a
database
and
your
database
you're
missing
an
index
on
your
database?
You'll
have
a
lot
of
latency
any
database.
Don't
actually
show
up
in
the
diagram
where
you're
sorry,
you
know,
let's
go
get
a
bubble
and
that
bubble
would
have
a
line
bound
to
a
database,
and
that
line
would
be
very
long
because
agency
and
breccia
also
had
another,
usually
stay
where
we
could
see
done,
adding
an
index
to
a
database
just
shrunk
to
diagram.
A
Yeah, I think it's much better being able to visualize program execution that way than, like, a graph where every node is a specific call site or a specific invocation, because that just gets too big, and this is a nice way to condense it into something that's readable, right.
K
That's
something
we're
actually
where
it's
not,
which
we
have
the
data,
because
it's
it's
something
that
was
injected
to
see
is
that
if
you're
measuring
latency
and
your
reduced
latency,
your
fruit
put
normally
goes
up,
but
your
profile
is
the
same
length.
So
your
graph
turns
out
the
same.
So
it's
actually
super
important
and
something
that's
that's.
We
kind
of
visualize
really
it's,
because
that's
the
only
way
to
kind
of
see
how
your
latency
improves
cool.
E
...one of those few things that we spent so much time on. So yes, anyway, if you want access, we can start an email thread. It's going to be public sooner rather than later; it will probably land in May or something like that. We're going to launch, we're actually starting to write our launch material, so it's not really a secret if you have been working on it.
K
Basically, in this example here on my screen, we're doing the same example as before, but I'm awaiting the promises in series, kind of like the series version where I was reading a file with callbacks, but just with promises. And the profile for this turns out really interesting, because it looks like this: you have a bunch of four promises depending on each other, mostly going back to a big promise. So this looks like a parallel execution...
K
What
is
very
much
a
serious
execution
and
mean
material
was
talking
about
if
this
is
probably
related
fact
that
the
this
I
think
a
way
to
help
me
end
up
as
a
promise
chain
from
the
same
promise,
but-
and
that
might
look
like
that-
isn't
a
pro
find
again,
but
it's
kind
of
with
what
version
of
the
note
did
you
test
this?
Oh,
this
is
I'm
running
on
latest
nine
I
think
yeah.
G
There
was
this
bug
that
recently
surfaced
were
essent,
cooked
or
not
properly
fired
in
gate,
but
that's
only
in
no
ten
I
think
right.
K
So
I
don't
think
I've
ran
this
one
in
no
ten,
but
I
have
seen
this
kind
of
similar
everywhere
and
again.
This
is
like.
It
also
might
be
us
doing
something
wrong
and
analysis
pipeline.
It
also
might
be.
Oh,
it's
just
not
understanding
completely
how
a
sinker
weight
on
wraps
in
a
profiling,
a
spec
and
but
it's
definitely
interesting
and
very
counterintuitive
I
would
I
would
expect
it
to
look
like.
K
Yeah, so again, the heuristic is that this is where code moves from one domain to another one. So my guess, and I need to dig into this more, because it's kind of a new discovery, is that the big promise here is the outer promise that wraps all the async/awaits, and that's the immediate child of the HTTP request. Once you unwrap it, once you unwrap the async/awaits, that's why everything kind of hangs below it.
E
I will see if I can get hold of Andreas, or Trevor, or somebody, just to see if they can write us something about that part of the API: why it was designed that way, whether there was a rationale or something. Because that's something that we might want to do differently, or something like that.
D
The whole use of numbers is kind of derived from the super old days of async listener. It was originally supposed to be like CLS in Node, but nobody wanted all of CLS in Node, and they decided that there was too much overhead in building this nested object thing. So they decided to try to boil each context down to a number instead, and then the number idea just kind of stuck for some reason.
E
A symbol is not a big deal, so we can mutate that; they're using a private symbol, so that's fine. So yeah, I think we should just add the thing; it's very simple. And maybe not having the performance penalty when there is no destroy hook set up, or something like that, might be pretty nice.
E
But
we
need
to
talk
about
it
in
later
on
and
we'll
provide
some
feedbacks
on
the
upon
the
proposal.
I
have
I
have
to
go
folks,
I,
don't
know
if
there
is
somebody
some
more
questions
about
bubble
prophet.
If
there
are
like
reach
out
to
Mathias
and
me,
and
we
can,
we
can
definitely
go
in
and
discuss
it
together.