From YouTube: Diagnostics WG meeting - January 16 2019
B: So basically, the intent of this meeting is to, you know, have some conversations around the basic structure of how this best-practices documentation will be organized, and then look at the coverage, that is, what all the different use cases and scenarios need to be covered. And then, once we have some convergence on that, look at who can participate and take up work items, not necessarily me.
B: Organized based on deployment scenarios, such as, you know, cloud deployments versus desktops versus some other deployments, and then, in each of the deployments, how do we perform the diagnostics on the deployed applications? That's probably one of the key things to have some conversation around. So, any thoughts on that?
A: I guess my first thought is: structuring it around what problem you're investigating would make sense to me, and then once you have that, you can see how it works in the different environments. So, for example, say there's a memory leak: how would you debug a memory leak? And then, once you have sort of the baseline one, you could then say: okay.
B: Makes sense. The pros and cons I see with both approaches: if you look at the deployment-based scenario, the problem is there could be n number of different deployments. For example, if you take the cloud itself, depending on the vendor and depending on the specific topology or architecture, you can have so many different variations, and it's not necessarily easy to come up with all the different combinations.
B: On the other side, if you look at the symptom-based approach, like the crash, hang, or performance-based things, that's pretty easy to document and relate, but at the same time actual production-scale applications will still need a certain amount of logistics to get there. For example, if I start documenting things around how you look at a crash, the baseline we start with would be: take the crash dump and launch it in a debugger. Now most of the activity would be around that.
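
As a concrete illustration of that baseline (a minimal sketch; the specific flags, the core-dump setup, and the choice of llnode are assumptions for illustration, not something settled in the meeting):

```js
// crash-example.js
// Baseline crash workflow: make Node abort on an uncaught exception so the
// OS writes a core dump, then open the dump in a debugger.
// Assumes a Unix-like system with core dumps enabled (ulimit -c unlimited)
// and the llnode plugin for lldb (npm install -g llnode); any native
// debugger works for the first step.

function explode() {
  // With --abort-on-uncaught-exception, Node calls abort() here instead of
  // exiting cleanly, which produces the core dump we want to inspect.
  throw new Error('boom');
}

explode();

// Run:   node --abort-on-uncaught-exception crash-example.js
// Then:  llnode node -c ./core
//        (llnode) v8 bt              # JavaScript backtrace
//        (llnode) v8 findjsobjects   # summary of heap objects
```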
A: I'm not sure I see that as a drawback, though, because we need to start somewhere, and I think if the symptoms are common, and we have a good understanding of them, that gives you a solid basis to then move into the production deployments, right? Like, if it's: okay, we have these five, and generically here's the best practice, then you can say: okay, well, we need to build on those for each production deployment. So, for example, for serverless you could say: well, we have this base one to start from.
A: The example that you had, which was a crash: can I, you know, can I just apply the best practice that's already defined generically? Well, wait a sec, no, because there's this extra step. But you could refer back to the base one and say: okay, this is where you start, but before you can apply that, you then need to do these things which are specific to the deployment environment.
A: So, not to say we shouldn't do both at the same time, like, you could do a little bit of one and a little bit of the other, but it seems to me starting at the symptoms gives you the base on which you can then build all the other combinations that you want to work on.
A: And, you know, unless there's differences due to your production environment, you probably want to come up with one or some small number of recommended approaches, and that'll be hard to do with multiple different people working on them in parallel, right, because you may come up with conflicting recommendations or whatever. But once you had that, you could then have one person look at cloud, one person look at serverless, one person look at Cloud Foundry, to see how it applies in each of those.
B: Okay, thanks. The symptom-based approach also has the advantage that we could reuse some existing material. For example, in the linked document Julian was mentioning that he has attempted some activity in this regard, and he has done a wonderful job of documenting how we diagnose a memory leak. It's a pretty old document, and the debugger and the V8 object structure itself would probably have changed fundamentally since then, but the approach, and the way we look at the objects in the heap, and the layout, and things like that, still apply.
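
To make the kind of artifact that document deals with concrete, here is a minimal sketch (an addition for illustration, not from the meeting) of capturing a heap snapshot with Node's built-in inspector module; the snapshot can then be opened in Chrome DevTools to walk the objects in the heap:

```js
// heap-snapshot.js
// Capture a heap snapshot in-process via the inspector protocol
// (available in core since Node 8). The leaky array is a stand-in
// workload so there is something to see in the snapshot.
'use strict';
const fs = require('fs');
const inspector = require('inspector');

function writeHeapSnapshot(path) {
  const session = new inspector.Session();
  session.connect();
  const fd = fs.openSync(path, 'w');
  // Snapshot data arrives in chunks over the inspector protocol.
  session.on('HeapProfiler.addHeapSnapshotChunk', (m) => {
    fs.writeSync(fd, m.params.chunk);
  });
  session.post('HeapProfiler.takeHeapSnapshot', null, (err) => {
    session.disconnect();
    fs.closeSync(fd);
    if (err) throw err;
    console.log(`heap snapshot written to ${path}`);
  });
}

// Simulate a leak, then capture the snapshot for offline analysis.
const leak = [];
for (let i = 0; i < 1e5; i++) leak.push({ index: i, payload: 'x'.repeat(100) });
writeHeapSnapshot('leak.heapsnapshot');
```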
B: So, since that part is converged, the next one is on the coverage. That is: among the various set of problems that a production system can be subjected to, what is the subset we want to focus on? Either based on the number of issues which are reported to the issue tracker, or based on the availability of the tools, or based on the comprehensiveness or correctness of the tools which are available at the moment, or based on what we think is most important.
A: More just around, like: do you get the artifacts that are generated? So in this case it's, yeah, like: is standard out something that, after your cloud function has run and thrown an exception, you have a console you can just go and get it from? If the answer is yes, then, you know, maybe that's not the more complicated thing.
A: Examples like core dumps or whatever: I don't think they currently have a place where you can go find those. So I guess what I was getting at is: maybe it's not that much about the base, you know, here you look at the stack trace, you go do something; maybe the second layer is where it gets interesting, which is: okay, how do you get your stack traces in each of these different environments?
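
For the first layer, the plumbing is usually tiny; a minimal sketch (the handler and the console fallback are assumptions for illustration) of making sure an exception's stack trace lands somewhere the environment lets you retrieve it:

```js
// stack-capture.js
// Make sure an uncaught exception's stack trace ends up somewhere
// retrievable. stdout/stderr is the lowest common denominator; most cloud
// platforms capture it into their logging service automatically, and in a
// console-less deployment you would swap console.error for the platform's
// own log sink.
'use strict';

process.on('uncaughtException', (err) => {
  console.error(`[${new Date().toISOString()}] uncaught exception:`, err.stack);
  process.exit(1);
});

// Example failure so there is a stack trace to capture.
setTimeout(() => { throw new Error('handler blew up'); }, 10);
```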
B: So what I think is: we should definitely have documentation around memory leak, crash, and performance, because these are the things which the general audience do not have enough skills or enough understanding to diagnose, and based on the actual symptom scenario, the diagnostics and the problem determination can be really tedious. So my take is, as first priority, go with memory leak, performance, and crash.
A: I guess, from my perspective, I'd look at it the reverse way and say: okay, what's the most effective, easiest path? And that may vary, of course, so you might need, you know, even for a memory leak there might be a couple of different flows that you'd follow, but it's: what do we think is the best way to go? And then we should be working on the level of support for the tooling to support that, right?
A: So if we say this tool is actually the absolute best way to debug these things, but it's liable to be broken at any time by Node, well then, that's probably something we should work on, in terms of adding testing and stuff to improve that. So it's kind of: don't determine our guidance by the current level of support, necessarily. I guess that should, you know, factor in a bit, but try to figure out:
A: What's the best way to do these things? And then, for the tools, look at whether we can, you know, raise the level of support to match. So I guess what I'm saying, in a long-winded way, is we should focus on having the best level of support for the tools that we are saying are important to do this kind of debugging and problem determination.
B: Yeah, that's probably all right, but my point is: if you look at crash, for example, we should be talking about the most common scenario, not necessarily a corner case. And if we are talking about the most common scenario, it could be a crash in the C++ code or in the JS code, and for both scenarios there could be more than one tool, with different capabilities and different user experience.
A: I would think that kind of fits my thinking, where you could say: here's your first attempt using the simplest approach, that'll be the fastest, and that uses the simple tools, like node-report. That's exactly what I had in mind: something where, if you can do this and you can see that, great, you're done. Oh, that didn't work? Okay, then here's the next level of investigation, and that uses, you know, lldb or llnode, which is more sophisticated.
A: It takes more work, but it will let you dive into more details. So I agree that we probably need to mention a couple of tools, and it may even be a layered approach: hey, start with this; then go to this more complicated thing; and if that still doesn't work, here's an even more involved one, but maybe that gets you where you want to get to.
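
The first rung of that ladder could look like this (a sketch only; at the time of this meeting node-report was a standalone npm module, and newer Node versions fold the same functionality in as the built-in process.report):

```js
// report-example.js
// First-attempt diagnostics with node-report (npm install node-report).
// Requiring the module hooks fatal errors, exceptions, and signals so a
// report is produced automatically on a crash.
'use strict';
const nodereport = require('node-report');

// A report can also be produced on demand: a human-readable dump of the
// JavaScript and native stacks, heap statistics, and system information,
// written to the current directory.
nodereport.triggerReport();
```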
B: [...] priority, as well as any specific tools which we want to recommend, or, you know, handles to be used for that particular content type, and then leave the next column blank so that people can come and pick items up based on their interest. Yeah, and then the other thing which was discussed, also in the connected link, was about keeping the document directly in the format of the nodejs.org website, so that you don't need to reformat the content for publication. I'm not sure I completely follow the mechanics of that.
B: Does anybody have an idea about that? My whole thinking was along the lines of: we just create the content, open it as PRs to the repository, that is nodejs/diagnostics, get it reviewed by the peers, land it in the diagnostics project area itself, and float it around for consumption. But then I realized that it's much more consumable if it is in the website format, the HTML format. I don't have much idea about how we do that conversion. Is it...?
D: Every blog post, for example, is at the moment written in markdown, and when we have any release it is then just formatted into HTML. So it should be pretty straightforward to write this in markdown, as we do that all the time, and as far as I know it's transformed perfectly, immediately.
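
That markdown-to-HTML step is small in most pipelines; a minimal sketch (the marked package and the file names are illustrative stand-ins, not what the nodejs.org build actually uses):

```js
// md-to-html.js
// Render a markdown document to HTML (npm install marked), the same kind of
// transformation the website build applies to blog posts on release.
'use strict';
const fs = require('fs');
const { marked } = require('marked');

const markdown = fs.readFileSync('diagnostics-best-practices.md', 'utf8');
fs.writeFileSync('diagnostics-best-practices.html', marked.parse(markdown));
console.log('rendered HTML from markdown');
```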