From YouTube: Diagnostics WG meeting 2022-10-11
B
Yeah, hi, nice meeting you, everybody. I am Yagiz, a senior software engineer and soon-to-be Node.js collaborator. I was just recently a speaker at NodeConf EU, talking about URL parsers, and I'm interested in performance-related work in Node.js. There are a couple of pull requests that I opened, and I thought this would be a great place to talk about those kinds of things, so yeah. Thank you again.
C
Yeah, so apologies: my battery ran out and I had to go and scramble around. So let me open up the link that we have to the update. I think the main update we have there was that while we were at the Collaboration Summit and at NodeConf EU, there were a number of PRs opened by Cayenne, and I think I asked for reviews on some of them, but I still haven't closed those off; I had other bits to do this week, and I'll be looking to get those in towards the weekend. I think the main topic that came up during the Collab Summit, in the mini-summit, from an llnode perspective was the promises support and some of the stringified outputs on Node.js 16. Some of the features that were missing, that we had to drop to get a build for 16, aren't documented yet, and that's still an open item, captured in one of the open tickets.
C
So in general we're tracking status for llnode in this Diagnostics issue here. And then what we have in progress at the moment: Tony started looking at documenting the user journey here. He got stuck on some of the Mac compatibility pieces, and then some of the version 16 missing features as well.
C
So that's paused at the moment; we're looking for somebody else to pick it up, potentially. But what we need to do is understand what was dropped in llnode to get 16 and 18 working. In the pull request that we closed, we captured this.
C
So in here, as part of the whole review process we spoke about, we captured what was missing and what was taken out, but we didn't review the impact of that from a user point of view.
C
So that's still outstanding, and I think once we've itemized that, then we can do the user journey too, because then we'll understand what we can put into the user journey and what we can't. I committed to putting together that documentation, but with everybody being in Ireland I just haven't had a chance to get around to it. That's the next step in and around there.
C
If somebody wants to give a hand, that would be fantastic, but that's the current status, I think. If we go back to llnode now and jump into the pull requests: as I mentioned, we've got a few patches. There are really no major features in here. We've got a few patches to land to re-enable nightly; I think 19 is out next week, so we're going to be re-enabling nightly so we can see what's broken.
C
We've got some fixes for the build, and we've got some error fixes to land as well, so I'll land those over the next few days, probably towards the weekend. That is pretty much that. And sorry, this is the ticket, the issue that actually captures the work that needs to be done. That's the one that's outstanding: documenting the user impact of the missing features.
D
Yeah, I saw the request to bring back support for 16, actually to include support for both 16 and 18, but it looks like it's missing a few features: it's supported, but not fully supported. Is that true?
C
So the challenge is that if you look in here, they look fairly innocuous; they don't look high-impact from a code perspective. But what Tony found was that they are actually high-impact from a usability point of view, so we just need to understand what they are and document them. I think 16 is really our challenge; I think 18 is okay, but 16 isn't.
C
It's particularly 16 where we seem to be stuck, but we just need to understand what it is, and then we can look at how we bring them up, if we can bring them up.
C
Yeah, so we have landed, we have a build that now compiles and works, to some extent, against 16 and 18. So in npm now there is version 4 of llnode released, which is compatible with 14, 16 and 18. What Tony found while he was doing the user documentation...
C
...is that there are some features in 16 that aren't working as expected, and we need to understand the extent of those and document them. We knew it was there, but it seems they could be quite high-impact in terms of use cases; until we've done the analysis, we don't know.
C
But we just felt that even getting something out where you could inspect variables, have something that worked, and people could start giving feedback, and just starting the development loop, was more important than holding out to wait for a pristine version.
C
Tony's use case is definitely one of them. He's not using it in production; he's just trying to put together a journey through it, and he hit a blocker in what I would consider to be a fairly valid use case using 16. So his is definitely one. We had something the other week where it was failing with an internal stack error; we didn't get to the bottom of whether it was a corrupt core dump or whether it was something missing within our parser for the core dump.
C
So what was happening was that llnode itself was core dumping, which is quite funny, so we stopped it doing that, and that's what this PR is doing. But we didn't bottom out whether that was because of missing features or because of a corrupt core dump; the end user walked away and we closed it.
D
I'm just wondering: we actually have to fix the macOS issue, but the idea, as far as I remember, is that we need to have at least two documents, right? The first one is for the diagnostics approach, and the second one is for llnode itself, in its own documentation. Or is it the same one?
C
We were discussing this; it's in the Diagnostics one, yeah.
D
Okay, okay. Honestly, I was planning to take the lead on that. But since we are about to land the next major, I'm very busy this week and the next one, so I can't take it now. But if it stays open for some period, I can take it in two or three weeks.
C
Yeah, okay. On top of running through the use cases that we had and understanding what's been impacted, that's all I can take on, and then closing those two PRs. That's all I want to do, I think, between now and the next session.
B
Anything else? Let's just...
C
I'll keep it there for now; I don't think anybody's actively working on it, and we've discussed the others here.
E
Yeah, for the debug helper: it might be helpful to just have llnode back to a fully working state before digging much into the debug helper, because it gives you a good idea of what things you can port into the debug helper, and what things it doesn't do yet, and so on.
C
Yeah, and I think absolutely; there is an action to go and look at that. Sorry, that's not work in progress; that is.
E
Yeah, it seems to me like a good long-term vision goal kind of thing: merge the two things together so you can maintain similar tooling in one space. But you kind of have to have a little more maturity in the tooling before that can really work reasonably, yeah.
E
Yeah, so synchronizing with V8 can be complicated, especially for older stuff, so we'd probably have to get changes in to support things long before they're actually released in Node.
D
Yeah, I have just one thing I have been thinking about. If you look at the issue, you see that the profiling namespace is missing content about using native tools for profiling.
D
The problem with it is that, as far as I remember, the only native tool that you can use is Linux perf itself; otherwise you're back to the V8 profiler or other well-documented, but not native, ones. I think Stephen can talk a bit more about it, but I believe we don't have any other native tool with which you can profile a Node.js application.
E
You can't really tell what the actual name of the function is, but with perf you can export perf map data and map that, so you can make external profilers able to see JavaScript. It's just an extra step.
D
Yeah, but I mean, if you use Linux perf just to get the map, you can then use Linux perf itself to produce the flame graph. You don't need any native tool other than Linux perf itself, right?
E
Well, Linux perf itself will just give you the native map, but you can use the code event handler to build a mapping from the native map to the JavaScript map, so you can get actual JavaScript function names and things like that in there.
E
You can technically do that if you know how, but yeah, we should have tools that can actually do that properly.
D
We have already covered the V8 profiler, using the V8 profiler and using Linux perf for profiling itself, and also in the main documentation we have mentions of the flags --cpu-prof and --prof, yeah.
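Both flags mentioned here ship with Node.js itself. As a minimal sketch (the workload script below is illustrative, not from the meeting), profiling might look like this:

```javascript
// app.js: a small CPU-bound workload to profile.
// With the built-in V8 sampling profiler:
//   node --prof app.js                  (writes an isolate-*.log file)
//   node --prof-process isolate-*.log   (renders a human-readable summary)
// Or with the CPU profiler, whose output loads in Chrome DevTools:
//   node --cpu-prof app.js              (writes a *.cpuprofile file)

function fib(n) {
  // Deliberately naive recursion so the profiler has something to sample.
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

let total = 0;
for (let i = 0; i < 25; i++) {
  total += fib(20);
}
console.log('total:', total);
```

Either flag works on an unmodified script; the choice is mainly about whether you want a text report or a DevTools-loadable profile.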
A
Yeah, I can imagine how difficult it's going to be for an end user to natively profile a JavaScript application with no JavaScript method names available, but with the CPU percentages and other things very nicely attributed against them. It's definitely going to be very hard, nearly impractical, to do any meaningful profiling.
E
Yeah, in the state it is in right now, it's not useful. It could be made useful, but yeah, it's...
D
...not currently. So I have removed it from the list, which means the profiling section is now ready to be moved to the main documentation.
D
So I'll take that action later this week. That's all I have for this issue.
A
One other thing: if we are looking for completeness in that section, we could just state the fact that while there may be existing native profiling tools, this is the reason why they are sub-optimal or ineffective; and if there are corner-case scenarios where you want to debug with the native profilers, these are the things that you can do, and these are the things that you cannot do. That sort of caveat statement would make it complete, but I don't think it is necessary.
D
Yeah, the problem with mentioning it is that we'd kind of export unofficial documentation that might change in one release and be out of date. But yeah, we can certainly do it; I'm just not sure how to express it in a proper way without being too complex for end users.
A
Should we have version semantics at all, in the first place? There was a consensus around yes, it's good to have, because there are tools, or at least one known tool out there in the field, which parse the report data, and if the report structure changes without any mechanism to keep track of the change in structure, then those tools will break with no means to fix them.
A
So we have a consensus that the semantics definition is useful, but then we had contention around what the structure of the semantics itself should be.
A
So finally we came to some sort of consensus on a single-digit semantic version representation that gets bumped every time the structure of the report changes, and that's where we stand at this point. It looks like this working group agrees to that.
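The single-integer version being discussed corresponds to the `reportVersion` field in the diagnostic report's header, which any report consumer can inspect. A minimal sketch, assuming a Node.js release where `process.report` is available without flags:

```javascript
// Generate an in-memory diagnostic report and read its version header.
// Tools that parse saved report JSON can check the same field before
// assuming anything about the rest of the structure.
const report = process.report.getReport();

console.log('reportVersion:', report.header.reportVersion);
console.log('produced by Node.js:', report.header.nodejsVersion);
```

A parser that branches on `reportVersion` degrades gracefully when the structure changes, which is exactly the breakage scenario described above.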
E
Yeah, I just wanted to bring it up. I have two draft PRs open right now, adding some new features to diagnostics channel. One is the storage channel, which is essentially meant to be a replacement for AsyncLocalStorage's enterWith. The enterWith is kind of confusing, and it's a thing that only APMs really understand how to use; that was kind of the intent when it was added in the first place.
E
But it's still a confusing API surface that people can see in the docs and shoot themselves in the foot with. So I want to make the storage channel a thing instead, so there's a safer way to bind to storage at some synchronous point; that's what the first one is for. And then tracing channel: every APM wants to trace things, and we want to have some correlation between channel events.
E
I was also sort of considering the possibility of making the storage channel just inherit from tracing channel, instead of being its own separate thing, but that's not super important; I think those can be considered separately.
D
It's different... I have just a small question. I'm looking at the add-storage-channel PR, and I am a bit confused about how it correlates with the diagnostics channel itself. It looks like the API is pretty similar, but the only difference is that when you call a storage channel, it gets stored directly into the AsyncLocalStorage in the current context. Is that true, or am I missing something?
E
It's similar to how domains had the enter and exit thing, which unfortunately was kind of a mistake to expose, but it's helpful internally. So there's an API where we can do that, but the enter and exit aspect of it is not exposed; instead there's just a thing you pass a function into, and it'll call the enter and exit around that function.
E
It would exit automatically because the next tick would end, or something else, which is not really the safest assumption. So yeah, this just makes that exit more explicit. It's essentially more a feature of AsyncLocalStorage itself, but it's put on the diagnostics channel side of things mostly just to make it clear that the intent is binding between the two.
E
The way we currently do that is by monkey-patching createServer and doing storage.run around that. But we don't want to be monkey-patching things, so we had created that enterWith function, but it's not super safe to use. So we want this more explicit enter and exit.
D
Okay, yeah, but I don't understand why it's in the diagnostics channel namespace, because it seems very related to async hooks or AsyncLocalStorage rather than to diagnostics channel itself.
E
Well, it's intended as both a store and as a channel of sorts. You do still publish data through it. It's, yeah...
E
Yeah, it's meant, from the user end of it, to behave like a channel.
E
And it's a little bit weird how it works, but it has the run, similar to AsyncLocalStorage. It's designed so that the publisher side of a channel can say "this is what I think it should be", but then on the subscriber side it can mutate it, or replace that object with something else; you can intercept it and replace it. So it is both a publisher and kind of a way to wrap a store.
B
Is there documentation about real-world usage of AsyncLocalStorage? Because I couldn't find, in all of these issues, the things that you said, that the actual APMs are currently implementing this. I couldn't find it written in any of those issues; if you can share this, that would be perfect.
E
Yeah, I don't know that we have any particular documentation of that. That's just how APMs work, and they mostly don't really talk about it. We depend on AsyncLocalStorage to propagate context, because basically, within a request, there are spans.
E
Yeah, I think so, yeah. Both of these are in draft form right now; I'm just wanting to get other people to review whether the API itself is reasonable, and if so, then I'll go and document all the things. Okay.
D
Yeah, I think the next one we will probably skip, with nothing really important and nothing changed. But yeah, I just forgot; go ahead, go ahead.
D
Oh, for those who weren't in the Collab Summit: I think just Girish and Yagiz weren't there, right? Yeah, I think so.
D
Okay, so we discussed the eBPF stuff, and we came up with a plan. Actually, let me get the issue, because today is a mess and I don't remember it exactly: the use cases, as far as I remember, and then finding a company to support it.
D
Yes, the first item is a list of the current pain points, how they would be solved, and what the real-world use cases are. Then a list of the needed probes; for instance, let's say we want to have event loop ticks and other stack probes. And the last one, but not least...
D
...pretty important, is who will support it, because again the main question is: is there any company behind this support? As we learned, the eBPF support requires some deep knowledge, and we can't bring it back if we don't have someone, or at least a team, that will make this support happen.
D
So that's the general idea. Okay, I just pasted the issue in the chat, if you want to take a look later, but that's the item. I'm still figuring out with my company if we can do it, but I don't think we'll have bandwidth at least this year for that. So we might have a call, or at least a different call, about it, discuss it company-wise, and see if we can get something from both companies.
A
Thanks, Rafael. All right, last call before we close the meeting: anybody else, any other points? I'll take it as a no. Thanks, everybody, for joining; I'll close the call now, and we'll meet in maybe two weeks' time.