From YouTube: Diagnostics WG meeting - March 11 2020
A
I think that's fine; basically, okay, that can land. Okay, and for the other ones: before we dive into this stuff, is there anything in any one of these issues that people think we should discuss this time? Do we have quorum?
D
On async local storage: I think folks have been trying to migrate some of the existing test cases to the AsyncLocalStorage variant, with two objectives. One is to see how the user experience, or the development effort, looks for the AsyncLocalStorage variant; and secondly, to see if there are any use cases or gaps in the APIs that we need to call out early, so that the feature can be as robust as possible.
D
The other thing I wanted to quickly touch base on is item number 349, which is "report version semantics not defined." We've been through this for a couple of iterations, and all of us have probably voiced our opinions in the issue tracker. Now that Node.js 14 is slated to be released in April, and the development cut is expected to happen on March 24th, I'm just wondering: do we need to make any decisions before that, or is it something which can be delayed until later as well?
D
We have differences of opinion, and I'm not sure we are converging. At a high level we are all in agreement about bumping the version if the structure changes, if there are new keys inserted, or a new key is added later. But what if the order in which the data appears is reversed, or some of the internal shape of the data changes, not due to any change in the report generation logic itself, but because the runtime the actual data comes from has undergone some changes?
D
It actually depends on the purpose of your tool. There are two or three types of tooling use cases. One is a tool that actually reads the report and renders it in a general-purpose UI, without any transformation applied, in which case there is absolutely no concern, because the report is a black box. The second case is where the tool is actually reading...
A
So on that one I think we could come to agreement. The other thing I remember being discussed there, which I think is almost harder, is: what about consistency across versions? Like, if you have that single number and you only backport one feature, you've backported a change which affects the report.
A
But I'm also not sure that's a hundred percent practical. I guess it could affect the first issue too, because if we say that the number is going to be bumped when the data changes, we can't necessarily say "oh, you're not going to backport that change in the core VM, in the core runtime, just because it would change the report," you know what I'm saying?
D
So do we have a consensus about the current proposal that is prevalent in the issue tracker? That is: we bump the number on every change in the key-value pairs, and just keep it as simple as that. Of course, if there are any data-type changes as well, even ones not directly related to the report, this will cater to that too. So that essentially means any structural change, including a data change in the report, means a version bump.
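A tool following the single-number proposal would compare just that one field. A minimal sketch (the `reportVersion` header field is the number under discussion; the `SUPPORTED` constant is a hypothetical value a tool might pin):

```javascript
// Generate an in-process diagnostic report and inspect its version
// header. Requires a Node.js build with diagnostic report support.
const report = process.report.getReport();

// Under the proposal, any structural change to the report's key/value
// pairs (new keys, removed keys, type changes) bumps this one number.
const version = report.header.reportVersion;

const SUPPORTED = 1; // hypothetical version this tool was written against
const compatible = typeof version === 'number' && version >= SUPPORTED;
```

A renderer that treats the report as a black box can ignore the number entirely; only tools that pick apart specific fields need the check.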
A
How does N-API handle that? Well, in N-API, when we add new APIs, they start as experimental, and then we make a decision at certain points. We say, like right now we're defining N-API version 6, and then only if we backport everything that is in N-API 6 to, say, a previous release like 12 or 10...
A
...will we say that that version supports it. And we do know that there are certain cases where, for example, if we define N-API 6 and it's got a particular function, and for some reason that function can't be backported to 10, we will never be able to say 10 supports it. Even if we could add some subset of the functions, that wouldn't necessarily mean it supports it.
E
I'm starting to lean towards that idea as well, unless we are going to do the same thing that's being done on N-API and make sure that there are no breaking changes within a version, including changes in the text format and the data format. But otherwise I don't think it makes sense to have a version in the report. Yeah.
A
Like, unless version 6 means the same thing in every release, so that in both 10 and 12, 6 means the same thing is supported. I guess the only other alternative would be to do something where the numbers are not sequential, so it's not like "six is bigger than seven," and every time there's a change, whether it's in an older release or a newer release, it just gets a completely different number. But I'm not sure that's really helpful.
E
Kind of unrelated, but that's how we do things on llnode. We don't check versions; we never check versions to determine if something exists. We check for the symbols, and that makes llnode a bit more reliable than if you were checking versions. It also makes it more forward-compatible compared to checking versions, right?
D
Yeah, I don't see it as proposed earlier; you could propose it, we can all quickly reach consensus on that, and we could actually do the PR in core as well. Yes, there are only two outstanding issues pending on the report at this moment. One is to basically add the stable tag, and the other is one small bug related to the fatal-error path, which I myself am debugging at this point, so we can get this one in as well in the next two weeks. That would be really great. Okay.
F
Sure. I'm working with a third-party vendor and they're kind of curious how crash reporting works in Node. I talked to Thomas Watson about this, and he said maybe jumping on this call would be a good place to start. So these are just my general questions; I can go through them one by one, or if you want to write responses and take your time, that also works really well. I'm not sure I understand crash reporting very well at all, so I'm kind of a newbie. In fact, a best-practices guide would be great.
B
So basically, we were following a structure where we were saying what the user would do. The first step was: the user would have a suspicion about a memory leak, and would then kind of confirm that it's memory. And then we started to talk about what you could do after you have a really strong suspicion...
B
...that there's a memory leak. We discussed the heap profiler and the heap snapshot, and then we were running out of time, and we wrote down that Valgrind is another tool, and that's where we are. So I guess the first question would be: other than the heap profiler, the heap snapshot, and Valgrind, for which we already have some documentation that is more crash-related, what other tools come to mind to put in there for memory?
D
Many of the APIs' JavaScript objects have C++ backends, and while the JavaScript objects are sitting in the JS heap, the C++ objects could be growing in the native heap, and there is definitely no one-to-one relation. For example, a 1 KB JS object could be mapped to 1 MB of C++ objects, and vice versa. It all depends on how the backend and frontend are implemented, and at the moment there is no easy way to figure out what is occupied in the native heap in correspondence with what we see in the JS heap.
A
Yeah, something we have seen fairly regularly is a case where you're keeping JavaScript objects alive, but when you look at the heap, it's not clear that they're holding a lot of memory, and to the GC that's also the case. So there's no pressure on the GC to free up the heap, because you're only using, say, 1 KB of GC memory, and so it never decides to GC, even though the objects could be released. But because they keep a large native block of memory alive, it ends up causing you problems.
A
I've seen that with the internal Node structures, but sometimes also with the data stored by the application itself, or through some native add-ons; I think that has potentially been an issue, possibly because they've got a bug on the C++ side or something. But there have definitely been cases where it's like: hey, yeah, even though you're not doing GC...
D
Yeah, ideally we would be able to get into a situation where we can see the relation between the JS object and the C++ object, then get into the C++ object and walk through the dominator tree, if possible, and at each of the nodes see the retention, how much memory is allocated for individual objects or loads, etc. It's a big, ambitious goal, but that would be the ideal scenario for native memory tracking.
A
The one thing I'm aware of is, say, through N-API there's that adjust-external-memory method, which is intended so that, if you know you're also allocating native memory when you're allocating objects, you can use it to tell the GC "hey, we're actually using more memory than you know about," and that's then used to help trigger GCs as well. Although I think that's not exactly... I mean, it's related to the problem you were mentioning, about how do we track that memory.
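The effect described here is visible from JavaScript via `process.memoryUsage()`, which reports memory that native code has registered with V8 as `external`, separately from the JS heap. A sketch:

```javascript
// Buffers are backed by native memory that Node reports to V8 as
// "external" (add-ons can do the same via napi_adjust_external_memory).
const before = process.memoryUsage().external;

// A tiny JS wrapper object holding ~10 MB of native memory.
const buf = Buffer.alloc(10 * 1024 * 1024);

const after = process.memoryUsage().external;

// `external` grows by roughly buf.length, even though the wrapper
// object itself contributes almost nothing to heapUsed.
const grewExternally = after > before;
```

This is the "1 KB JS object mapped to 1 MB of native memory" situation in miniature: `heapUsed` barely moves while `external` jumps.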
E
Yeah, I think the first thing we should do is try to get this into some tooling; otherwise, anything we do to improve the consistency of the relation between JS objects and native memory, even if we get it perfect from a back-end perspective, is pointless if we can't see it in tooling. So we should probably start by engaging with V8 to see if we can get this information into the heap profiler and the heap snapshot.
A
Like, let's get our current capabilities documented in the guidance: here's what you can do, and we can say here are some gaps. Then, once we have that, I think moving on to how we improve makes sense. I just wouldn't want to get distracted and not document these things first.
A
...it may report that, yeah, you're growing one thousand new objects on every cycle of your application; but then, if you try to figure out "okay, where am I leaking in my application?", you pretty much have to look at the object and, from its fields or whatever, try to guess what it is. There's no easy naming.
A
Well, the heap snapshot lets you see the difference between two heap snapshots, so you can say, "okay, I'm growing these kinds of objects." I think the challenge is that the sampling heap profiler won't necessarily have those at the top of its list, right, like if you have something that's doing a lot of allocations but also a lot of frees. Although I guess the sampling profiler does give you things that were allocated but not freed, right? I think V8 has...
A
I think that's where, if we could write up the guidance and try it out, we'd probably be in a better position to say: okay, given that this is what we've got, what are the areas where there seems to be a shortcoming? Because you're right, it's more like: if we had a "you've got this problem, go through these things, use these tools in this way," then it's like, "okay, well, what can I still not do?" That would be a better way to attack it.
A
Certainly, like the heap snapshot you captured recently: I was involved in one internally where the heap snapshot did end up pointing us to the right object, but that was more just because the name happened to match something that people knew of in the code. I'm not even quite sure why the fix fixed it yet.
A
Sounds good. Probably, if we have something like "how to use the sampling heap profiler" or "how to use the heap profiler," there might even be existing stuff out there. But we should open an issue for somebody to look at what's already out there and then document the best practice, I think.
A
I think my comment was meant to be: Valgrind can also help you with, like, not just crashes, where, say, you're double-freeing; but the documentation doesn't cover that yet. It covers the part where, hey, you're actually running out of memory, and it gives you a summary of where things are not being freed. So I think it's written not for the crash case, but for the memory-leak case.
B
Yeah, I think a big question that we discussed a little bit last week but didn't resolve is that, like, I think the outcome is super useful; we should just start to talk about how we popularize it in the community. It's great that we capture the knowledge, but I feel the missing step is getting the knowledge out to the people, definitely.
A
...are all empty? Okay, so let's delete those. And then, if we say that we're pretty happy with what's in the documentation, we could start to try and promote it, and say: okay, what do we want to do to promote it? I mean, we could obviously individually tweet it, but do we want to, like, write a short blog post? What is it that we think makes sense in terms of trying to bring attention to it? So...