A: Is there more than the general use case of "hey, I noticed something"? I guess you could have one where you're using a lot of CPU but everything is fine and you just want to reduce your cost, and then you could have another one, I'm not sure how different it is, where your application is running slowly, right? Yeah, those are the two.
D: Now, if you take performance analysis, the problem determination comes in a couple of categories, some classification. For example: I want to run the test case or the workload for its whole duration, allow it to run, and collect the samples; or I want to run for a predefined amount of time; or I want to run the profiler from when a method is entered up to the point when the method has completely executed. Basically, custom control over the trigger of the profiler.
A: I can see that that section, although it could be a bit bigger. One thing I was wondering is: do we need to step back a little bit?
A: Kind of where I was coming from is: we see longer latency, lower throughput, or, I guess we even put in consistent CPU usage, but for the first two my first question would be, am I using a lot of CPU? Because that could be the cause. And then two common things are, if you are using a lot of CPU, I've seen the GC...
A: If you have a lot of GC, that might be consuming the CPU. If you're swapping, that can actually result in "hey, you're using tons of CPU", system CPU, even though you're not doing too much. And then once you're through that, if you say, OK, I'm not swapping, I am using a lot of CPU, it's not the GC, then diving into something like --prof is the step that you would take. That makes sense.
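That first "is it user or system CPU?" question can even be answered from inside the process. A minimal sketch, where `cpuSnapshot` is an illustrative name rather than a standard API; a high system share relative to user time can hint at kernel-side work such as swapping:

```javascript
// Sketch: sample user vs. system CPU time for this process over a short
// window, using the built-in process.cpuUsage().
function cpuSnapshot(ms = 200) {
  const start = process.cpuUsage();
  const t0 = Date.now();
  while (Date.now() - t0 < ms) { /* busy-wait to burn user CPU */ }
  const { user, system } = process.cpuUsage(start); // both in microseconds
  return { userMs: user / 1000, systemMs: system / 1000 };
}

const snap = cpuSnapshot();
console.log(`user: ${snap.userMs.toFixed(1)} ms, system: ${snap.systemMs.toFixed(1)} ms`);
```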
A: I mean, I guess I'm even wondering if things like Prometheus metrics give you data as well. So it might be worth mentioning, in that first step of "am I using a lot of CPU", that you can figure that out through things like top, or Process Monitor if you're local; if you've got diagnostic report installed, you could generate some of those; you could take a look if you've got Prometheus metrics going to your dashboard. That might be another way you recognize that.
A: I don't know technically what would count as an APM or not. It basically provides you, you know, a standard set of metrics, or metrics that you add, but the idea is that, in Kubernetes for example, they're just streamed out for all the containers, and then there are tools to help...
A: ...you see the results across your containers and graph them and all that kind of stuff. So I mean, I'm not sure it's an APM, in that it doesn't instrument things. It's not going to instrument... maybe I shouldn't say that; there is guidance and so forth for how you get HTTP metrics, which has a bit of instrumentation, but I don't think it would do everything that APMs would do, where it's built in.
A: All right, so yeah, I mean, I think we should document how you should turn it on to get that behavior, which is, you know, if it's only accessible from localhost, well then, anybody who's on localhost can already do some pretty bad things, so that should be all right. And you're right that, if that's not the default, it would be good to consider whether it should be, too.
A: I think we should say that there too, just because, you know, people will look at the guidance and say, "oh, what's the impact of turning this on?" So if we can say, "hey, turn it on like this, so that only localhost can access it, and by the way, until you do anything else, the overhead is expected to be zero." Of course, I think once you turn on profiling, you will have an impact, right?
A: Exactly, that's what I was thinking. So if, you know, you plan ahead, where you think you're going to need to do this and it's going to be common, you could write your module that addresses and improves on what the inspector protocol will let you do, in terms of security and workflow, that kind of thing. Yes.
A: At the end it would be, "oh, okay, you're really stuck; here are just some instructions you could follow to generate it." We almost want to... I think, in my mind at least, it would be good to have a recommendation that says, you know, when you're putting together your production app, you should do this in advance, right? Mm-hmm. And that way you're ready to do it in as easy a way as possible, and then you sort of have the "oh, you didn't do that! Here's your next easiest thing! Oh, you can't do this!"
A: That makes total sense to me, and it's sort of starting from the very simplest things, like, you know, top or whatever, diagnostic report, to say: is CPU even a problem? Then probably the next easiest is to look at the JavaScript side with the V8 CPU profiler, and then, yeah, if that doesn't show anything, it's possibly on the native side; let's turn on the native-type tools. So yeah, I like that flow.
A: Yeah, no, I guess the workflow there is different, because you're going to have to start perf on your underlying platform, which is always easy, yeah. But I don't think we should get bogged down; figuring those out is one of my goals as well, but I think we should figure out the simple case first and...