From YouTube: Diagnostics WG meeting 2020-12-09
B: My agenda items are announcements as well, I guess. I've been working on a JS-function-based promise hook API for V8 for a while now, which is now basically ready and in review in V8. I'm still doing little bits of small cleanup to get it to approval, but there's not really any major feedback on the V8 side of it anymore.
B: So I opened a PR to Node making use of this new API. The basic idea is that the existing promise hook is a C++ API which covers all of the promise lifecycle events: there's an init when the promise is created, a before and after around whenever it calls the then handler, and a resolve whenever resolve or reject is called. It has all these lifecycle events.
B: And all of these events are actually triggered from within bytecode; it's all in the generated code, with the existing C++ promise hook API.
B: When you set a promise hook, it basically de-optimizes most of the promise machinery and switches to another path where it has to jump out of the generated code. The microtask queue, for example, is one big linear sequence of all the tasks it generates.
B: All of that is one big block that it just runs in sequence, so it can optimize that quite well. But with the C++ promise hook in there, before and after every single one of these individual tasks within the sequence, it has to escape back to C++ and do a bunch of dispatching to send it to the callback, which is the C++ callback, and then in Node we don't actually do much of anything with it.
B: So I've introduced a pull request which uses the new API now, instead of the C++ version, and it's improved the performance of promises while being observed by about three to four times, so it's quite a bit faster. There's still more I can do to improve the performance, but it's making quite a difference so far.
B: One of the major performance-related decisions I made with this new API: the original routed all event types through one callback, and in most cases we're not interested in all the events, only in specific ones. AsyncLocalStorage, especially, really only cares about init events.
B: Technically we need the before and after to do the resource reattachment, but we don't actually need to emit async_hooks events for them. I haven't done anything to take advantage of that yet, but I'm planning to. Basically, I split the new API, so there are now separate functions for init, before, after, and resolve.
C: Okay, so only code that was specifically hooking promises, like async_hooks code, would need to change, and the only thing is that it's going to have to provide more entry points, potentially. But they could all be the same function if you want to do what you had before, right?
B: So nothing should need to change; the event sequence is identical. From a user perspective, nothing is different at all. It's just faster.
B: It took me a while to track down all the places for it, because V8 has multiple levels of promise-generating code. There's the Torque stuff, which is mostly the faster paths, and there's the CSA code, some of which is also fast. They're two different languages: CSA is just C++ in a weird macro-ish style, and Torque is its own fully separate language, but they both build down to roughly the same thing. They share these escape hatches in a bunch of places where they can defer out, and both CSA and Torque are designed so that they get run initially to produce bytecode.
B: The bytecode will check to see if there's a promise hook set, if there's a debugger active, if there's an async event delegate, a bunch of different things, and if it detects any of these, it will bail out of the normal bytecode path and defer to runtime functions. Runtime functions are just plain C++ with an address stored for them somewhere in memory, so when it's actually running the generated code, it has to jump out of that to run this C++ thing at some address.
B: And then come back from it, right? Doing that all the time is just expensive. So it took me a while to track through and figure out, because there are a bunch of places where the runtime functions actually do a relatively significant amount of stuff.
B: It'll bail out of a more complicated function into something that does somewhat more work, so I had to figure out the exact timing of where I could actually put the event emissions within the bytecode.
B: So most of the events being emitted are within Torque, and it's all on the path immediately before where the regular promise hook would emit.
B: There are a few cases where the ordering had to switch, but from a timing perspective that shouldn't really matter, because the only case where you'd ever notice is if you're trying to compare the two at the same time. You're generally not going to have both a native hook and a JavaScript hook attached at the same time; it doesn't really make sense. So that ordering shouldn't really matter; the timing is basically the same.
C: I'm just looking, because what's significant is not only that it's three to four times faster, but that it's getting closer to the original as well, right?

B: Yeah, significantly. I guess it's like a 25% degradation for enabled-only versus, I don't know, several hundred percent before. Well, more like 75%, I guess, yeah.
B: Yeah, so there was previously a quite significant drop in performance when you turned promise hooks on. My intent is to eliminate that as much as possible, and I'm planning on continuing to work on this. I mentioned a little further down in the PR that there's no optimizer support currently; there's no optimizer support for the existing hooks either, but working on that should help boost this a bit.
B: With the optimizer currently, most things optimize just fine, but there are a couple of specific cases where the optimizer is configured to bail out. Basically, in an async function, between the point when you call it and when it reaches the first await, it's not going to be optimized, because if I don't bail out there, it won't emit the two init events at the start: one for the function itself and one for that gap before the first await. The other case is promise reject and promise resolve.
B: One of the other things I plan to do with this change: I previously made a change moving the promise hooks to JavaScript as long as there's not a destroy hook, and that kind of led into this change being made easier, since I already had that single promise hook function.
B: So I plan on trying to move that to JavaScript as well. The reason it wasn't moved before is that it needs to use PromiseWrap, which is like an ObjectWrap class.
B: It needs to wrap the promise so that it can trigger the destroy event when the promise gets garbage collected. I could maybe have moved the API for that into JavaScript at the time, but it was just easier to split it out. Now I'm going to try to make it apply the wrap within JavaScript, and then the other events will probably just trigger directly, rather than trying to go through the wrap.
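This is not how PromiseWrap is implemented, but the GC dependency can be sketched in plain JS with a FinalizationRegistry: the destroy event can only fire once the promise has actually been collected.

```javascript
'use strict';
// Conceptual sketch only: destroy is inherently tied to garbage
// collection, similar in spirit to watching a promise with a
// FinalizationRegistry. The callback timing is entirely up to the GC.
const registry = new FinalizationRegistry((asyncId) => {
  // In async_hooks terms, this is roughly where destroy(asyncId) fires.
  console.log(`destroy ${asyncId}`);
});

let p = Promise.resolve();
registry.register(p, /* hypothetical asyncId */ 123);
p = null; // now eligible for GC; the callback runs at some later GC cycle
```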
B: So yeah, if I can move that to the JavaScript side, then hopefully that should provide a performance boost even with the destroy hook. If you look at the numbers right now, my before and after numbers in the PR show a huge boost for enabled and enabled-with-init-only, but enabled-with-destroy is still super slow, because I haven't done anything there.
B: I don't think it'll quite get to the same level, but it should at least improve somewhat significantly, because it's doing the GC tracking to trigger the destroy, and we can only make it so much faster with all that GC tracking still in there.
B: Yeah, the changes I have in the PR right now are pretty much exactly what's planned to land in V8. It hasn't landed yet because they have some really small feedback items: they wanted to merge one of the functions, which I've done but haven't got feedback on yet, and some other small stuff like that. But functionally it should remain identical, and it should hopefully merge in the next week or two; we'll see.
B: Yeah, but once that lands, then I can just rebase this and we can land the changes together. The follow-on stuff, like the optimizer work, I'm going to make as a separate change, but that won't influence the JavaScript-side changes here at all, so that can just be another V8 backport commit at some point.
B: As far as I know, it should be fairly straightforward to backport this. I actually wrote all of this code on V8 master and it backported 100% clean to Node master. Okay, so it's been a while since they changed this stuff.
B: Yeah, I would assume probably; I know there's at least a small difference in the microtask CSA code, but that's relatively trivial to deal with. We dealt with that already once, when I did the enable checks before; that got backported with just really minor changes needed, but that was backporting to 13.
B: User-land code could care about promise hooks directly, rather than going through async_hooks; they might only care about promises sometimes, so it might be nice to expose the promise hook API directly at some point.
A: So, just to reiterate: the actual code change that's going to land in Node is basically the changes pertinent to the async_hooks module, which makes use of this V8 feature, but 90 percent of the code is actually going to land in V8, and it's just in the PR for reference purposes.
B: I'm not exactly sure what our process is for backporting V8 stuff, so I've just included all the V8 changes in this PR. It may be that we want to land the V8 changes separately, and then I would just rebase this on that, and none of the V8 stuff would actually appear in this PR.