From YouTube: Node js Tracing WG Meeting 2015-10-07
notes at https://github.com/nodejs/tracing-wg/blob/master/wg-meetings/2015-10-07.md
A: This is the Node.js Tracing Working Group meeting, October seventh, and we're just getting going. We have a few new people; it looks like perhaps they would like to identify themselves, if they want to. Folks who were on last week and identified themselves, I guess we probably don't need to do that again. Anyone?
B: Okay, thanks for joining.

D: Finally, hello. My name is Thomas Watson. I just submitted the pull request, that's 20 minutes ago, to join, and I was invited here. I'm working on the Opbeat Node agent, where we do a lot of error tracking and stuff like that, so that's why I'm really interested in this.

A: Awesome.
A: Okay, I think everyone else was on last week, so let me go ahead and do a little recap. There is a Google Doc that's linked to in the issue; it's issue 24 in the nodejs/tracing-wg repo, issue 24 for today, with links to everything, and there's a link to the Google document where folks can add agenda items. There's not much on there right now. So we had a call about two weeks ago, and that was kind of a catch-up call.
A: But one thing I guess that did come out of it was to create a set of docs in that repo, where we can try to collect some of the information that we've been talking about these many months, so that it's not spread out over issues and other places and we can have a centralized place. I know I did a first pass, an extremely rough dump of that, and I think there have been a few updates.
A: On the last call, a lot of it we spent talking about V8 tracing, and Ali suggested having this meeting within two weeks to be able to talk a little bit more about that, since the topic is hot, so I think we'll end up doing that. I put a few questions in there to kind of prep us. Trevor's on today, Trevor Norris, who is kind of our async wrap expert.
A: Looking at the agenda items: I'll just mention status on docs, where I don't really know what the status is, although I think there have been a few slight edits; and then another one, breaking things down into smaller steps, so that people with time can do things to move forward. I was hoping that maybe that would come out of our initial stab at doing some docs.
A: So that said, I sort of see maybe two big issues here. The two main issues of interest to people would be V8 tracing and async wrap. Trevor, Ali, would either of you like to start?
F: I can start, because it's on the agenda. On the issue for the last working group meeting I posted a comment with details on V8 tracing and some of the questions that had been asked, and actually at this point I think it would be really nice if somebody could look at those and the documentation. It would be really good if we can get some specific questions about V8 tracing so that we can address those specifically. I don't know if there are any at this point.
F: So yeah, I'll try to quickly summarize, and maybe I'll start with the different areas. Fadi has been working on getting the Chrome tracing changes merged into V8, and there's still some work that needs to be done in order to get those things into a shape that the V8 team is happy with, and Fadi is working on that.
F: But the basic idea is that the trace event mechanism from Chrome is basically a bunch of macros that make it really cheap to add your traces into code, and on the back end there's a buffer that basically gathers all the traces into a single stream. There's also a mechanism that enables categories to be enabled or disabled. So this is somewhat analogous to the util.debuglog mechanism, the NODE_DEBUG mechanism that's used where you define a category and then dynamically enable those traces, and those get spat out on the console. So on the Chrome tracing side, the basic idea is that a single stream gathers all the types of tracing that's going on, and then it can be consumed in a single, uniform manner. And, Fadi, you can correct me if I stray from how it works.
C: The second one is to add a trace event; the third one is update trace event; those are the three that I can remember now. I think the fourth one is the check to add a new category. So mostly these are very simple methods, and these two, three, four methods are the ones that would create the buffer or the stream that Ali was speaking about. Once you have these methods, you will immediately start getting hammered on these calls, and the events will come in through the add trace event method that you need to implement.
C: So that is the basic idea. The integration with V8 is that they need to maintain which isolate we're working in, and this is something Chrome never cared about. So for V8 they need to figure out which context or isolate they're running in at any point in time, so we're adding this to the traces. They also have some restrictions around static variables, which we kind of used in Chrome tracing and need to get rid of, among other challenges that we're working down.
F: Oh, we want to make it possible for Node to adopt this as well. The benefit is going to be that traces from Node... so if you're writing a tool that's going to do performance analysis of Node, it would be really nice if the V8 events, say the compile events, are showing up in the same stream as the other async events from Node itself. Then you can do correlation and analysis that would otherwise be much more difficult to do. So we certainly want that.
C: Okay, as for existing tooling: if you dump this in a specific JSON format that Chrome has, then you can load it in chrome tracing. If you open Chrome at chrome://tracing, you will be able to take a new trace, and there is a format: if you dump your trace in that specific format, then you'll be able to open it in chrome tracing. Okay.
C: So we recommend that you use the JSON format, so that users can use the chrome tracing interface to analyze these traces. It's a framework that we actively work on and hack on, and people are welcome to add their features to it. It's actually on GitHub, and it's not designed only to handle Chrome traces; we actually analyze traces from a ton of different sources and formats.
C: The JSON format is very stable, and also you can write your own importer. So if you decide you do not want to use our version of the JSON format and you want to go with whatever format you want, you can write your own importer, and we'll review it and include it in chrome tracing as well.
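The JSON format under discussion is Chrome's Trace Event Format. A minimal sketch of producing it from Node follows; the field names shown (`name`, `cat`, `ph`, `ts`, `pid`, `tid`, `dur`) are the documented ones, while the event and category names here are made up for illustration:

```javascript
// Build a minimal trace in Chrome's Trace Event Format. "ph" is the
// event phase: "B"/"E" are matching begin/end events and "X" is a
// complete event carrying its own duration ("dur"). Timestamps ("ts")
// are in microseconds. Saved to a .json file, this loads in
// chrome://tracing via its "Load" button.
const trace = {
  traceEvents: [
    { name: 'connection', cat: 'node', ph: 'B', ts: 0, pid: 1, tid: 1 },
    { name: 'connection', cat: 'node', ph: 'E', ts: 1500, pid: 1, tid: 1 },
    { name: 'compile', cat: 'v8', ph: 'X', ts: 200, dur: 300, pid: 1, tid: 1 },
  ],
};

const json = JSON.stringify(trace);
console.log(json.length > 0); // true
```

The begin/end pair and the complete event above would render as two bars on the chrome tracing timeline, grouped by `pid`/`tid`.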
C: Tracing has minimal overhead, because we try to utilize macros as much as possible. The only overhead, if you check for a category and the category is disabled, is that check; and then there's the trace logging itself, so you have to make sure that whatever you write in the add trace event method is also efficient.
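In JavaScript terms, the cheap disabled-category path just described might look like this sketch; the category names and the `addTraceEvent` sink are hypothetical, not a real Node API:

```javascript
// Hypothetical sketch of category-gated tracing: when a category is
// disabled, the only cost is one Set lookup and the event object is
// never even allocated. In C++ this gating is done with macros.
const enabledCategories = new Set(['node']); // example: only 'node' enabled

const events = []; // stand-in for the real trace buffer

function addTraceEvent(category, name, timestamp) {
  events.push({ cat: category, name, ts: timestamp });
}

function trace(category, name) {
  if (!enabledCategories.has(category)) return; // cheap disabled path
  addTraceEvent(category, name, Date.now() * 1000); // ts in microseconds
}

trace('node', 'tick');     // recorded
trace('v8.compile', 'fn'); // skipped: category disabled
console.log(events.length); // 1
```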
C: So in Chrome we control that by controlling the categories. If we know that this is a very long-running application, we set it to the minimal set of categories, and we make sure that those categories do not push too many events into the system. If we're only trying to analyze a small portion, we have all of the categories enabled so we can really figure out what's going on in a window of, like, ten or fifteen seconds. So we do control the size of the trace by controlling which categories are enabled.
C: That's true, but tracing supports compression in our back end in Chrome, and the chrome tracing UI will also load compressed traces. So even if your trace is big because it has so many redundant events, it can end up being really small. For example, the traces we look at are around 200 megabytes as JSON, which usually compress to around eight to sixteen megabytes.
F: The embedder needs to have that separate thread that's going to write it out, so Node will need to implement the thread that takes that up. Basically, what you want for trace events is somewhere in memory where you're gathering these events, because you don't want to do an I/O every single time, and periodically you flush them out to whatever sink you want to send to, whether it's a file or a network stream. So it makes sense to implement this as a separate thread that's flushing it out in the background.
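A single-threaded toy sketch of that gather-then-flush shape; in a real implementation flush() would run on a separate thread or a timer, and the sink here is just an in-memory stand-in:

```javascript
// Gather trace events in memory and flush them in batches, so the hot
// recording path never performs I/O itself.
const buffer = [];
const flushed = []; // stand-in sink; could be a file or network stream

function record(event) {
  buffer.push(event); // cheap: memory append only
}

function flush() {
  // Drain the buffer into the sink in one batch.
  while (buffer.length) flushed.push(buffer.shift());
}

record({ name: 'a' });
record({ name: 'b' });
flush();
console.log(flushed.length); // 2
```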
H: From that point we can basically identify how we want it streamed out, right? We're in full control. I mean, we'd technically control how it's converted to JSON; if for some weird reason we didn't want to follow the JSON spec that they have, we wouldn't have to, since we're the ones doing the conversion, we're the ones churning it out. We get to decide things like that. Yeah.
I: That sounds great. I think I would love to have the option of keeping it in the binary format, because I would imagine I'd run this in production and just leave it on, and if JSON is, you know, like two orders of magnitude larger than the binary format, I'd love to just keep it in the binary format; I can always convert it or decompress it later.
A: So it sounds like, you know, you can already sense the people who would like to collect this data in different formats, and I'm kind of envisioning new command-line arguments for node. Beyond any sort of category setting or something like that that might happen, they would also indicate, for instance, the size of the buffer that we're buffering the stuff to, file sizes for where the stuff gets logged to, the format, that kind of stuff.
B: Can you move closer? Yes, sir.
A: I didn't hear all of the question, but it sounds like it's kind of another... maybe we could implement it that way. I think there are going to be a lot of options for the implementation choices that we end up going with in Node for this.
A: Like I said, that's another option for us, right? So, absolutely. I think we'd still need to provide some code in Node.js even to make that kind of stuff happen. But, you know, if we did something like that, then the end user, or other people wanting to provide introspection or trace stuff, would be required to do more work than if we built some stuff into Node.js.
I: What are we really talking about here? Because I think it would be really nice to have something in Node core where it at least outputs these events in some format, whether it's JSON or whatever, and then folks can still build on top of that, right? I think having it as part of core, maintained by the TSC and other members, would be of tremendous value, rather than having a fragmented set of modules that all try to implement the same thing. It would give us more direction. Yeah.
A: I think what we probably ought to start doing is identifying the use cases. You have a great use case, but not one everybody is necessarily going to want, and that is that you want to stream everything. In your case, what you might actually want to do is dump the stuff to the network and then let somebody else collect it, right? That might be an appropriate mechanism for you, or just some other local IPC mechanism.
A: A lot of the other use cases might actually work out well if it's just written to a file. So I think we start identifying the use cases; then we can think about what kind of stuff we're going to have to bake into Node to make use of this, and hopefully the structure will become a little bit more clear. Yeah.
H: Hey, I feel like we're getting a little too much into implementation, yeah. We have the v8 module in core, and this seems like a good fit for it. If we made it a stream and basically said: here's a stream of data, specify the type, specify a file descriptor, and it will stream the data as that type to that file descriptor, that gives a huge number of possibilities, whether it's a file or a network or whatever, right?
A: Yep. So, looking at the agenda, I actually put in a couple of questions right before the meeting, sorry.
A: They're in the Google Doc that the issue points to. I guess one question I had, and I kind of brought this up last time: I get the feeling that maybe the existing CPU profiling APIs that are in v8-profiler.h will go away, because I sort of sensed from the V8 side that there was a desire to move the sampling stuff out of V8.
A: Maybe it's not even there; I kind of barely know how this stuff works. But instead of having it baked in, the embedder would provide that capability, and then the embedder would also be consuming the trace events to figure out, in the particular case of the CPU profiler, what function you were actually in based on compilation events. So is that actually going to be part of it as well? Are we going to have to implement our own sampling thread? Though I assume, even if we did, we could just reuse...
C: The V8 team wants to get rid of all threading from inside of V8, so the embedders will be the ones responsible for all of these threads. Currently they're one hundred percent thread-free except for the sampling profiler, the CPU profiler. It is itself targeted to move out to the embedders, giving them a proper interface to collect the samples, but it's not an immediate goal for them. They merely want to do it to get rid of the last bit of thread management they still have inside of V8.
A: Okay, great. The reason I bring that up is that there are a number of people who make use of that CPU profiling API, including the company I work for, so I'm very sensitive to any kind of potential deprecation or it just going away. It sounds like it's not something we have to worry about happening next week, so that's good, and I'd even sensed from the other discussions that it would...
A: It was sort of brought up that there are the three to four methods that we would need to implement: taking a look at the macros and figuring out how we would implement them. Those are all maybe separate pieces of work, and we can have somebody start taking a look at those items. Maybe we should just open up issues for those, so people can claim them or, as they discover stuff, drop some information in them.
I: I just had one last question around tracing, which is: this is really great, right, being able to get all of these events however you want, by, like, environment variables or command-line args to node. But is there any appetite for a more on-demand tracing option? Which is to say, maybe I don't want to collect all the data and all the metrics all the time, but if I've got a process that's misbehaving, then I want to dynamically enable some of these tracing features.
C: We do not directly support changing the categories during runtime, but you can easily stop the trace and start a new one with the new set of categories with minimal overhead, and we're working on making it so that we can close the previous trace as fast as possible and then start a new one, all of this without having to restart the process. No, you don't have to restart the process, but you do have to start a new trace. Yes. Oh great, yeah.
A: Well, you know, we've talked a bunch in this work group about async wrap, and I see async wrap as a great place to put some of these tracing macros, because that's obviously a very important part of the Node runtime. So I have been thinking that async wrap will end up being a place where we will end up sprinkling a lot of tracing. That's probably another kind of use case.
A: We'll do that too; we should just label that as a different sort of category of use case. There are use cases for consuming tracing, but also use cases for where we're going to put our tracing. So I'll plan on creating an issue for that; we can start dumping stuff in there, and if that becomes an interesting topic by itself, we can create a new docs subdirectory for it.
A: Alright, so the other kind of big topic is async wrap, and there actually has been quite a bit of discussion: questions from folks, a lot of answers from Trevor. I don't know whether or not that stuff has been collected into the docs yet, but Trevor wasn't able to make it two weeks ago and he's here now, so I figured I'd give him the soapbox if he wants to talk about anything involved.
H: Unfortunately, there are some cases, like timers specifically and also nextTick, where even though it's an asynchronous event, it still isn't reported via async wrap... How about I give a quick overview of async wrap first? That will surely help to set some context. Okay, sorry, alright. So in Node, if you look at the source directory, there's a bunch of underscore-wrap files, and all those essentially just wrap the libuv functionality that allows us to do some type of I/O, and I rewrote them.
H: Fundamentally, each one has its own provider; there are that many different provider types, so you can see, like: is this TCP, is this a pipe, is this the file system. Does that answer your question? Yeah, that's great, thanks. Alright, cool. So yeah, like I was saying, it wasn't meant to be the end-all be-all, just a helper to help us find I/O-related things. It doesn't help us with timers and nextTick, but that's something else. Also, there's one thing that trace event doesn't do.
H: Things like, for example... one reason why I put in the init, before, and after callbacks is that users can just write JavaScript and modify the objects as they're being passed through on the fly, and then they can change how that's done while the code is still running. All that happens as the code is running.
H: The other thing is that it propagates specific properties; the specific thing it propagates is what I call a parent. So say you have a server and then a connection comes in: it will let you know the parent, in this case the server, of the incoming connection, so that you can correlate the two and keep better statistics on that type of information. I've been playing with some other ideas, like giving unique IDs to each one so you can track them more easily from JavaScript, and things like that, but anyway, that's all implementation details.
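A toy sketch of the parent idea being described; the hook signature `init(id, provider, parentId)` and the provider names are illustrative, not the actual AsyncWrap API:

```javascript
// Hypothetical illustration of parent propagation: each async resource
// gets an id, and init() reports which resource created it, so a tool
// can correlate a connection back to the server that accepted it.
const parents = new Map(); // child id -> parent id

function init(id, provider, parentId) {
  parents.set(id, parentId);
}

// Simulate: a server (id 1) created with no parent, then an incoming
// connection (id 2) whose parent is the server.
init(1, 'TCPSERVERWRAP', null);
init(2, 'TCPWRAP', 1);

console.log(parents.get(2)); // 1, i.e. the connection came from the server
```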
H: Okay, so, unfortunately, because I suck at naming or whatever: there's the AsyncWrap class, and then baked in are some API callbacks that allow you to know when an AsyncWrap class is being instantiated, just before its completion callback is called, and just after its completion callback is called.
H: Way back when, it originally started out with Forrest wanting to integrate continuation-local storage, and then over many, many months and many iterations I eventually came to this, which was the most minimal implementation. I also did it this way because I felt it was advantageous for us to consolidate all those I/O operations through a funnel, so that we know the path that each one will take.
A: So again, I guess my thinking here is that, if nothing else, there's a relationship between the tracing events and async wrap, in that the async wrap areas seem like a likely place where either we want to have some trace events sprinkled in, or we maybe want to easily allow people to fit in trace events somehow. Does that seem about right?
H: Yeah, I mean, async wrap would make it much easier to place your trace events. Instead of going out to the endpoints and saying we're going to put a trace event here and here and here, you just bring them all into a single inherited class, and since the provider type is passed through, you can still be alerted as to what it is that's being created, right?
A: So maybe those provider types even map into the V8 trace category story somehow. It sort of seems like that's the way it might work. There's a lot of detail in between those things, but mainly I see async wrap as being a good example of where we'd be adding new trace events into Node.js.
A: And how we would fit tracing into those... maybe that drives more requirements for async wrap: that we need to provide more details when we're creating those providers, or when making calls, or when queuing events, or when events are fired, whatever, right? So that seems like just another, more concrete use case for somebody to drill into and look at. Did anybody else have any questions, perhaps, while we have Trevor around today?
H: Okay, all right: where do we need more work? Well, my original idea for async wrap was for I/O-related things. People also want to use it for things like long stack traces, and unfortunately, since nothing is propagated with timers or nextTick, you'll lose it in those cases, so you won't get infinitely long stack traces.
H: So yes, that is one area that could be improved. I've been thinking about how to improve it, and I haven't figured out a good way to do it that wouldn't introduce performance overhead, and that would probably even properly capture all the information that's needed, so I'm still thinking on that one. As for nextTick, that is very, very sensitive because of how high-performance it is. And one further problem is that V8 gives us no ability to tap into the microtask queue or the microtask scheduler.
H: So if you use a promise, you'll lose your long stack trace, and there is nothing that can be done about that today. I mean, okay, potentially we could overwrite the global Promise function with our own Promise function that wraps it, or some crazy stuff like that, but that one's pretty much off the table; for us to do anything there, we need V8 to implement an API. As far as that goes, I landed a patch last week, and I'm just finishing up a couple of things this week.
L: Yeah, I think it's more a statement, unless I'm wrong, but it looked like you mentioned that you lose context with setTimeout and nextTick. That's not that big of a problem, because those you can just monkey-patch. Then there are promises, which are a really big problem. But you also have some things in Node's internals, like the HTTP server, where we also lose context, and it would be nice if we could do something there. Okay, yeah.
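The monkey-patching approach mentioned here can be sketched roughly as follows (a simplified, synchronous illustration of the idea; real continuation-local-storage libraries patch setTimeout, nextTick, and many more entry points):

```javascript
// Sketch of context propagation by wrapping callbacks: capture the
// "current context" when the callback is scheduled, restore it for the
// duration of the callback. Patching setTimeout means replacing it with
// a version that runs its callback argument through wrapCallback first.
let currentContext = null;

function wrapCallback(fn) {
  const captured = currentContext; // capture at scheduling time
  return function (...args) {
    const previous = currentContext;
    currentContext = captured; // restore for the callback's duration
    try {
      return fn(...args);
    } finally {
      currentContext = previous;
    }
  };
}

// Demonstrate synchronously: wrap while the context is "request-42",
// invoke later after the ambient context has moved on.
currentContext = { id: 'request-42' };
const cb = wrapCallback(() => currentContext.id);
currentContext = null; // context has changed by the time the callback fires

console.log(cb()); // 'request-42'
```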
A: Okay, so the only other things I had on the agenda: status of the docs, which I don't really have; I haven't really looked at them, and I probably need to look at the commits to see if folks have made any changes. And then, again, breaking things down into smaller steps. I think I have some ideas now, so maybe I can start to create a couple of issues that are a little bit more focused, and maybe do something with the docs to leave some questions in there.
A: We do not really have any action items at this point, so that's going to be kind of my goal: I'll try to pull a couple of things out of here and create issues for them. Maybe they'll be too vague and we can create new issues based on those, or something like that, but I'll try to make that my goal to work on this week and next week.
A: Remember, I'm not going to assign anybody work, but hopefully some of these will be specific enough. Like, you know, with his desire to always be tracing: let's get that nailed down as a use case, and how he imagines that might actually work, that kind of thing. So hopefully people will gravitate to one or more of these.
A: So, next meeting. We had a meeting two weeks ago; my original thinking was that a month was probably going to be pretty good. I actually liked having this one, and I'm glad Ali suggested having it two weeks later to bring up the V8 tracing; it's gelling more in my head how this stuff fits together, so I think having these two has been great.
A: Two weeks is tough for me, though, just in terms of the time trying to get these frickin' Hangouts scheduled and stuff, and I suspect that's probably too much for everybody else. The cadence I was liking is more like every month, unless there's some kind of hot-button thing. But if somebody wants to do something in two weeks, I'd maybe be willing to do that one more time. So, folks: thinking one month, or two weeks?
A: That was kind of my original thinking as well: once a month, unless there are immediate items. I hadn't really thought about what that might mean at the time, but now that we're talking about creating some issues and trying to get people to drill down into them a little bit, those are the kinds of things I was thinking. Even still, I think at the point we're at, it seems unlikely that we'd have anything come up on those issues within two weeks.
A: Yay? No nays? So I'll shoot for that. I'll create a new issue for the next meeting and create a Doodle, and people can start looking at their calendars. So thanks, everybody, for making the call today. I think we had more people than we did last time, which is fantastic, and I think we got some good information out of today's call. I'll be going through the meeting, creating some minutes, and then posting those to the repo; I think there's a wg-meetings directory with meeting minutes in it.
A: I'll go ahead and plan on doing that over the next couple of days. If you want to jot anything down in the Google Doc (nobody did any minutes this week like last week, and that's fine), or if there's anything you want to put in there to make sure I don't forget, please feel free to do so, or contact me offline whenever. So I'll see you all in about a month. Have a great rest of the day, everyone. Thanks.