From YouTube: Diagnostics WG meeting - Jan 29 2020
A
So, welcome to the Node.js diagnostics working group meeting for today. Peter couldn't make it, and he asked either myself or Gireesh to chair; it seems like Gireesh hasn't been able to get in, so I'll chair. The meeting for today, let's take a look, will follow our standard agenda, which was in issue number 351. And to start out with: does anybody have any announcements or other information that they'd like to share?
A
And, you know, the diagnostic report writes JSON-formatted diagnostic data either to the console or to a file, and it includes a version number, which today is a single integer. And the PR was like: hey, it's not clear. Like, is it when we're adding features that it needs to be bumped? Or, you know... If it was semver, I think people would understand: like semver minor, you bump a minor; or, if you're making a breaking change, you bump the major. Given that it's just a single number...
B
And my personal feeling is that, to be safe, as long as it remains one number, any change at all should bump it; those integers are free. But if we switch to semver, then yeah, we can split up the meanings. But the switch to semver should itself, I think, probably be a major bump; but, you know, it's all a major bump. Okay.
C
So I'd rather like us to set up tools, or something that can actually give us an idea of how people are using the data that the report is sending, and then we can naturally see if any change, any different structure, any different data that is going to be sent needs reprocessing, or whether, for example, the checks we do require a different, yeah, a bigger change to actually understand the data that is being sent. That's my personal opinion.
A
Okay, I'm just reading through, like, some of the comments that are in the issue as well, around whether semver actually applies.
F
Yeah, I'm not sure I heard all the opinions, but my way of thinking is like this: generally, we talk about semantic versioning on software, a piece of code more specifically, where you have more than one type of, you know, feature implementation, which essentially has implications for the existing code. So semantic versioning perfectly makes sense there, with the major version, the minor version, and patching. Whereas with respect to the diagnostic report, we are essentially talking about a piece of data. There is absolutely no code being shipped which has implications for the existing code.
F
So what is the implication of the report, which is a piece of data, a collection of diagnostic information? How does that apply to the existing consumers? The answer is: mostly to the tooling owners. Other than that, the question is just more information versus less information. There is nothing called breaking information; the breakage comes only for the tooling owners. That's my perspective.
A
Like, there can be two types of changes, I think: one which is the addition of new information, with which an existing tool could continue to work, because it's JSON and the tool can just do nothing with the new fields, and that probably doesn't require any code changes; and then there are changes which would change the format such that the tool would no longer work with it without some effort.
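That first kind of change can be illustrated with a sketch. A tool that reads only the keys it knows about keeps working when new fields appear, because unknown JSON keys are simply ignored (`summarize` is a hypothetical tool helper; `header.reportVersion` and `header.event` are real fields of the report JSON, while `someNewField` is made up):

```javascript
// Hypothetical tool helper: reads only the fields it knows, so a purely
// additive change to the report can't break it.
function summarize(reportJson) {
  const report = JSON.parse(reportJson);
  return {
    version: report.header.reportVersion,
    event: report.header.event,
  };
}

// A report carrying an extra, unknown field is still handled fine.
const reportJson = JSON.stringify({
  header: { reportVersion: 2, event: 'Signal', someNewField: 123 },
});
console.log(summarize(reportJson)); // { version: 2, event: 'Signal' }
```

A format change, by contrast (say, renaming or restructuring `header`), would break this helper until it was rewritten.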
F
For example, say the thread ID of a specific environment was the most vital information in a specific problem-determination context, and because of the specific version which was in use, that ID was missing. I think that's going to be a rare combination. In most cases the diagnostic report provides, you know, more than a sufficient amount of information; only a subset of the information is really going to be used for a specific diagnostic case. So, that way, I don't see any requirement for semantic versioning being relevant there, and then these questions arise, like:
F
What do we do for patch? What exactly do we mean by patching this diagnostic report? So a simple digit which is bumped whenever there is a structural change, by which the tooling has to be rewritten around its assumptions about the layout of the report, makes perfect sense to me.
C
I don't think that should be the case, that we bump it for every change, right? I mean, that's part of my view: for the diagnostics side, we should just have a tool that consumes this report and just keep an eye on how that report changes, and how the tools that consume this report are going to be affected. I think what we need to keep in mind most of the time is how that information is going to be used, and what happens if a change happens.
A
A thought I had is: say you've added some new fields. The tool may need to say, okay, if I'm using, you know, a report that has those fields in it, I want to do this; if it doesn't have those fields, I want to do something differently. So I could see potentially wanting to check the version number and say: okay, I'm version five, therefore I can actually show this data; oh, I'm version...
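That kind of version gate in a consuming tool might look roughly like this (a sketch: the "version 2" threshold and the fallback message are made up for illustration; `header.reportVersion` is the real version field in the report JSON, and `javascriptHeap.totalMemory` is a real report field):

```javascript
// Hypothetical tool logic: suppose a given field only appeared in
// report version 2 onwards.
function describeHeap(report) {
  if (report.header.reportVersion >= 2) {
    // Newer report: the field is guaranteed to be present.
    return `heap total: ${report.javascriptHeap.totalMemory}`;
  }
  // Older report: do something differently, as discussed.
  return 'heap total: unknown (report version too old)';
}

console.log(describeHeap({ header: { reportVersion: 2 },
                           javascriptHeap: { totalMemory: 4194304 } }));
// → heap total: 4194304
console.log(describeHeap({ header: { reportVersion: 1 } }));
// → heap total: unknown (report version too old)
```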
C
Yeah, I think maybe we can think about all the cases, and for this case we can actually say, for example: Docker, they have two digits. So we could say that every time we add a new field, we just bump the second digit, so it will be 1.1, 1.2, or something like that; and for a specific structural change, I mean, the first digit will be bumped every time.
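Under that two-part, Docker-style scheme, a tool could key its compatibility check on the first digit only. A sketch (the version strings here are illustrative of the proposal; the report currently ships a single integer):

```javascript
// Sketch of the proposed two-part scheme: first digit bumps on
// structural (breaking) changes, second digit on additive ones.
function isCompatible(reportVersion, toolSupports) {
  const reportMajor = Number(reportVersion.split('.')[0]);
  const toolMajor = Number(toolSupports.split('.')[0]);
  // An additive bump (1.1 -> 1.2) is safe; a structural one (1.x -> 2.0) is not.
  return reportMajor === toolMajor;
}

console.log(isCompatible('1.2', '1.0')); // true  (only additive changes)
console.log(isCompatible('2.0', '1.0')); // false (structural change)
```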
C
The tool knew that all this data is in the report, and maybe if there is a change, one that breaks the tool or whatever is consuming the data, it will bump the major version of the report. I think it's a matter of thinking through all the possible pieces to actually figure out numbers that work for us, for the diagnostic report. I think that's probably the best thing to do.
C
It's just: think of all the possible situations, and see what could make sense in any of them, and at the end just say, hey, let's keep in mind those reports, or these situations, and try versioning in this way. I think that's probably... I mean, I think right now, at this moment, we don't have someone using this report, so it's going to be way more difficult to actually understand how it is going to be used later. So all of the cases are going to be mostly hypothetical.
F
With respect to the possibility that Michael proposed, that some of the fields or some of the data is missing in the report, the tools are already taking care of that; that is my opinion. For example, the libuv section of the current report: not necessarily based on the version of the report, but based on the actual handles and artifacts sitting in the event loop, it's quite possible that in a specific deployment the libuv data can be completely empty, versus the libuv section being fully populated. And the same is the case with the call stack.
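A tool that copes with that might be sketched like this (`countHandles` is a hypothetical helper; the `libuv` array of handle objects with a per-handle `type` field is a real part of the report JSON):

```javascript
// Hypothetical tool helper: tally handle types from the report's libuv
// section, tolerating it being empty or absent, which is valid in some
// deployments as discussed.
function countHandles(report) {
  const counts = {};
  for (const handle of report.libuv || []) {
    counts[handle.type] = (counts[handle.type] || 0) + 1;
  }
  return counts;
}

console.log(countHandles({ libuv: [] })); // {}
console.log(countHandles({
  libuv: [{ type: 'timer' }, { type: 'tcp' }, { type: 'timer' }],
})); // { timer: 2, tcp: 1 }
```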
F
Right, and probably nobody's taking notice of that because, as I said in the previous conversation, if you are exposing a hundred pieces of information for a specific problem site, not all hundred pieces of information are going to be critical. So there are elements which are probably missing, but no notice is being taken of it.
F
Yeah, the single-digit increment is essentially telling the tools that the current way of consuming the report is probably worth investigating: there are changes in the way the report is generated, so you may need a modification. It's a possibility; it might not be necessary, depending on the kind of changes that happened and on the way the tooling is implemented. It's quite possible that it requires a change; it may not require a change, right, since we're using the...
F
Because, ultimately, the tool is not the end consumer; it's just an intermediary between the real operator and the report, so in a way it's just translating the data into a different view, a different representation. I'm not saying that tooling should be written that way, but it's also one form of the tooling use case. So, in that case, all we are saying is that this digit increment represents a potential change that requires the tooling to look at the report structure.
D
Originally the report didn't have a version number, and one was quickly requested; as I wrote in the issue, he asked for a version number. But when we added the version number, it wasn't well defined what that version number meant. That's where these issues come from: things have been added to the report, and the question has now arisen, now that we have a version number.
A
So we basically bumped the API version every time we added new methods, but it's strictly additive, so that single number tells you... it's basically equivalent to semver minor, but, you know, nothing else applies. In the case of V8, we basically bump that every time we bring in a new version of V8.
D
You're referring to the module version, the module version that we bump every time we put in a new version of V8. Once we release a major of Node, that gets fixed, and we can only bump the version of V8 in a release line as long as we maintain binary compatibility, which means that version, the module version, doesn't change. We do not change the module version for V8, for add-ons, in a release line, ever. Okay.
A
Do other people have comments? My take is we may need to take this back to GitHub to continue and, I think, to reach the suggestion you made. You know, you could suggest that. Just talking with Chris, though, it sounds like there'd still be discussion about whether we need the other number, because he's preferring that it be bumped on every change.
A
If people just say "approve", that's fine; it'd be pretty difficult to change it later, though. So maybe, if you want to put in, you know, what you suggested and the rationale, then that may let people comment specifically on that. I mean, certainly it makes sense to me if we say that it's only the semver major that matters, and the minor is something the tooling has to deal with, and if we have consensus from the tooling people...
A
Okay, let's see what's next on the agenda; going to take a few notes: lots of discussion.
A
Okay, so the next thing is the proposal to drive diagnostic work initiatives through user journeys. I'll mention that I did write a section on debugging memory issues with Valgrind. Let me just find that issue; I'll paste it in there, and then, once that lands, I'm going to link it into the work that we've been doing. And do you have things you want to add as well?
F
Yeah, I was trying to complete the crash documentation. I should confess that I did not get enough time for completing it; right now the first phase, out of three phases, is complete, and two more phases need to be done. And I believe a couple of other contributions came in over the last couple of weeks, one from Matias on memory diagnostics, I believe, and another one on the V8 profiler, still in the review phase. I request participants to have a look.
B
One exposes a map, while the other is just a container for an arbitrary value. It's a bit... so far we seem to have come to the consensus that, whichever one we choose, we'll want to rebase it to use the execution async resource PR when that lands. That's mostly ready, but we found a bug in it the other day: it's not...
B
...taking into account things like reusable handles, which have, like, a separate resource, an async resource, to represent the reuse. So I need to make an update to make it recognize that properly, and then, once it handles the reusable things properly, the execution async resource PR should be ready to land, and then we can start looking at which of the two PRs we want to rebase on that and land, I think.
B
Yeah, yeah, I can kind of summarize them. So, I think, the async storage one more closely mirrors the continuation-local-storage module in userland, which basically just gives you an object that you can stuff data into, and it propagates that object through the async tree. Whereas the async local one is just a container, so it itself does not actually allocate a separate object or anything; you just give it a value that it should use as its thing to pass around, and then you can get that value out of it later.
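Roughly, the two shapes being compared look like this (a sketch only: the class and method names here illustrate the discussion, not the actual APIs of the two PRs; context propagation through the async tree is omitted):

```javascript
// Sketch: a map-style store that propagates an object you can stuff
// values into, vs. a plain container holding one arbitrary value.
class StorageSketch {
  constructor() { this.store = new Map(); } // exposes a map
  set(key, value) { this.store.set(key, value); }
  get(key) { return this.store.get(key); }
}

class LocalSketch {
  set(value) { this.value = value; } // just holds one value
  get() { return this.value; }
}

const storage = new StorageSketch();
storage.set('requestId', 42);
console.log(storage.get('requestId')); // 42

const local = new LocalSketch();
local.set({ requestId: 42 });
console.log(local.get()); // { requestId: 42 }
```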
B
So async local may have slightly better performance, but no, I don't think either has really been properly performance-tested to actually see what the performance is. Part of that is that async local currently is built on top of the execution async resource PR; I'm not sure how up to date it is, but it's built on top of that. Whereas async storage was built separately, so it doesn't depend on execution async resource currently, but it also has a potential memory leak. Once it switches to execution async resource...
B
It
should
eliminate
eliminate
that,
but
just
the
way
it's
designed
right
now
that
tend
to
leak
in
certain
areas.
So
it's
more
just
practical
demo,
like
the
API
surface
than
anything
at
the
moment
right,
but
yeah
yeah.
We
should.
We
should
look
at
the
and
like
consider
the
API
organ
Amish
side
of
it
right
now,
in
terms
of
like
performance,
comparison,
I,
don't
think,
that's
really
something
we
can
do
quite.
A
Right, yeah, I guess from the API side: is it familiarity, or is it that one's, like, at a higher level, and being higher level there's less risk of it being something that, you know, we want to change later on? Or is there anything like that, on that aspect, that helps push in one direction or another? And, yeah.
B
At
a
sink
storage
is
the
higher
level
module
that
there's
kind
of
mixed
opinions
on
like
do
we
want
higher
level
or
more
like
medium
level?
Kind
of
API
is
like
that,
to
some
extent
the
the
purpose
of
having
a
sink
storage
inside
of
node.
Is
that,
like
things
like
APM
vendors
and
just
like
web
frameworks-
and
things
like
that
can
have,
can
have
this
shared
format
for
kind
of
storing
and
communicating
data.
B
Many
of
the
many
of
us
are
somewhat
of
the
opinion
that
this
data
should
not
be
like
directly
accessible,
but
between
two
things.
Like
I
just
say:
Express
used
a
continuation,
local
storage
inside
of
itself
for
storing
its
own
contextual
information
and
an
APM,
probably
shouldn't
have
direct
access
to
that
because
you
might
interfere
with
it
but
having
like
two
separate
ones
like
a
communication
channel
between
them.
That
might
be
good.
B
Yeah, and in the case of async storage, it has, like, an internal map that you could easily call clear on and totally mess up whatever other thing is depending on it, right? With async local it's a bit safer, because it's just a container of a value; you could mess up, like, erase the entire container if you wanted, I guess, but hopefully you wouldn't have direct access to that.
B
The
actual,
like
contextual
metadata,
around
things
like
we,
we
did
have
like
Diagnostics
channel
proposal
yeah
a
long
while
back
I,
unfortunately
have
not
had
time
to
do
much
of
anything
that
but
I'd
kind
of
like
to
revive
that
at
some
point
and
have
that
was
like
the
way
to
actually
like
could
cover
the
like
linking
part
between
like
if
a
like
web
framework
wants
to
communicate,
routing
information
to
an
APM
or
something
you
can
just
expose
like
here's.
My
Diagnostics
channel.
Just
listen
to
this
things
like
that
yep.
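A minimal sketch of that publish/subscribe idea as described (all names here are hypothetical; this models the proposal, not an API that existed in Node core at the time of this meeting):

```javascript
// Sketch of the diagnostics-channel idea: a named channel that a web
// framework publishes to and an APM subscribes to, with neither side
// touching the other's internal state directly.
const channels = new Map();

function channel(name) {
  if (!channels.has(name)) {
    channels.set(name, { subscribers: [] });
  }
  const ch = channels.get(name);
  return {
    subscribe(fn) { ch.subscribers.push(fn); },
    publish(message) { ch.subscribers.forEach((fn) => fn(message)); },
  };
}

// The framework exposes routing information on a named channel...
const routing = channel('framework.routing');
// ...and the APM just listens.
const seen = [];
routing.subscribe((msg) => seen.push(msg));
routing.publish({ route: '/users/:id' });
console.log(seen); // [ { route: '/users/:id' } ]
```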
A
That
makes
sense
to
me
so
I
think,
like
that,
that
one
doesn't
seem
to
me
there's
a
sticking
point.
The
sticking
point
is
more
around
higher-level.
Well,
something
you
know
in
one
case
is
maybe
a
little
higher
level
and
familiar
to
existing
CLS
users
and
in
the
other
case,
it's
a
little
lower
level.
A
You know, I don't think the overhead of an extra map or not really makes a huge difference, but it's a little simpler. And so it's kind of like, what is it, yeah? How do we figure out whether the higher-level one... like, maybe it makes sense because we wanted to make it higher level in async hooks so that it would be less likely to change, but I don't know that async local has that problem; it seems pretty straightforward as well, and...
B
A
basic
storage
is
kind
of
modeled
fairly
closely
after
the
existing
continuation,
local
storage,
so
that
has
familiarity
if
you're
familiar
with
that
a
sink
locals,
modeled
more
after
thread-local
and
Java
and
other
languages.
After,
like
it
career
coming
from
that
direction,
the
master
may
be
more
familiar,
so
that
familiarity
case
is
a
little
bit
debatable
right.
Okay,
so
yeah
I
mean.
A
Yeah
so
I
guess
it's
still
a
fair
amount
of
discussion
to
figure
out
which
one
we
make
sense
as
any
other
key
sort
of
things
that
are
worth
mentioning
to
this
girl.
Like
does
anybody
else
in
the
in
in
the
the
group
here?
Have
any
thoughts
on
you
know
why
we
might
you
know,
push
things
in
the
direction
of
one
versus
the
other
or.