From YouTube: Monitor:APM Weekly Meeting 2019-12-11
Description
Edit: This video was uploaded with a lower quality. See https://youtu.be/57MxvBZnyjw for a higher resolution version.
Weekly meeting for the Monitor:APM team including a discussion about the 12.7 milestone.
B
Okay, so hopefully the coming version will be lighter. We started with about thirty-ish issues; now we are down to 17. The focus of the release is still logs — so there are leftovers from the last milestone, and there will be some that are associated with our logging, which will supersede all the issues here.
B
The second issue is actually the ability to drill into the log explorer at only a specific time frame, so we need to make sure we are passing the time frame along. And by the way, those two issues are critical for us, at a minimum — we discussed this several meetings ago — so those two issues are critical to be in for that logs epic to be completed, and on the complete list to get started with the epic.
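The "passing the time frame" drill-down above can be sketched as a small URL builder that carries the selected window along as query parameters. This is illustrative only — the path and parameter names (`pod`, `start`, `end`) are assumptions, not the product's actual route:

```ruby
require 'uri'

# Build a logs-explorer link that preserves the selected time frame,
# so drilling down lands on the same window the user was viewing.
# Path and parameter names are hypothetical.
def logs_url(base, pod:, start_time:, end_time:)
  query = URI.encode_www_form(pod: pod, start: start_time, end: end_time)
  "#{base}?#{query}"
end
```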
B
So we want to visualize our metrics on the infrastructure, on the environment tab. This issue is about adding a drop-down for the metrics, and the first metric is the CPU. We've discussed the visualization and the color palette, so I assume we don't have any other discussion to clear before implementation.
B
So, a tooltip for the infrastructure dashboard — the dashboard that I showed you previously. When we hover over it, we want to make sure that, besides the name of the pod, we see the metric that we chose. If we're choosing CPU and we hover over this pod, we'll see the name and the value of the CPU metric.
C
Yeah, I've got a question on that. If you look at the pipeline — we normally don't switch over to a pipeline — but if you look at a pipeline, in the far right-hand column it's usually the department of redundancy department, where there's the name of the thing, and then the name of the thing up in the tooltip or whatever. Would it be non-obvious if we took "production" off of that tooltip right there and just put the pod name, since we're already in the production drop-down? Because some of these...
B
You can comment here and we can ask if we will get assigned to that — you can ask if you can do it together. Cool, thanks. Okay, so this one: actually, we still have some discussions around it and I would like your opinion. The idea here is, when you configure a custom metric, you need to go to a different page — Manage, I think — and add a custom metric; you define it, and then you go back to the dashboard to see how it looks. And I was thinking of...
B
Maybe we can add a link here to add metrics, so from here you'd go directly to that page. Now, in the comments on the issue we discussed it with Amelia; she mentioned maybe we should do it in the modal itself. So when you click on Edit Metric you'd get the actual modal, and here you'd update everything, hit Save, and you're done.
C
We probably shouldn't put it on all of them, though, because the added ones are only custom metrics — so when you list a custom metric, you'd probably want to have that button just on the custom metrics. But that would call out those things as custom metrics as well.
B
We should comment here. And the second thing that I'm asking: if we work on this in iterations, would it make sense to first just provide a link to the custom metric page, and only afterward complete the entire thing? So this will be like the final stage — the final stage will be: when you click on this, you will see a pop-up, you will put all your details there, and you save it.
E
I mean, I don't have a problem with just starting out with hitting Edit Metric and then it takes us over to that page. I'd left a comment in the issue: we had done that same approach for the Add Metric button — when we first implemented that, it just kicked you over to that page instead of leaving you on the dashboard. Which is not a problem; it's a great first iteration. Let's break it down, keep it simple.
E
I was just highlighting that we'll want to get out of that state pretty quickly, because we did have some customer feedback that the user experience was really confusing — it was difficult to figure out what happened, and why they were getting kicked over somewhere else when they were doing the interaction from the dashboard page. Which is fair, but like I said — the level of shame — let's iterate, get something that works; I think it still adds value.
E
So, you know, might as well go that way, and then — oh yeah, so there's my comment. But just as long as we know that the definition of done for this issue is that it's all there on the metrics dashboard, I think we should be fine — as long as we don't lose sight of this and then end up four or five milestones down the road and it's still kind of redirecting users. No — absolutely.
C
There's a little bit of inside baseball associated with this, because I do know that we look at the metric information that comes back. Is that right? Do we look at the metric information that comes back to get the metric to assign the alert to, or maybe it's defined on the dashboard? I'll take that back — I'm not sure. It's like there was a thing where we use some component of the data that came back to get that alert ID, or the Prometheus metric ID.
C
I want to make a comment that there might be some yak shaving associated with getting all the data into the thing that we need. And we do need to have the ability to say, "we're not getting any metrics back from this source." Do we really need to be banging on it all the time, every time we refresh this page? Because we request Prometheus to give us back data for everything, every time, and a lot of these...
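The "no metrics coming back from this source" check being discussed could look roughly like this. The response shape follows the Prometheus HTTP query API (`status`, `data.result`); the helper name is our own:

```ruby
require 'json'

# Hedged sketch: given a Prometheus /api/v1/query_range JSON response,
# decide whether the source returned any series at all, so the UI can
# show a no-data state instead of re-requesting everything on refresh.
def prometheus_has_data?(response_body)
  payload = JSON.parse(response_body)
  payload['status'] == 'success' && payload.dig('data', 'result').to_a.any?
end
```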
B
Yeah, I mean, the way I see it — again, I don't mind for us to start working on that: to say, okay, when there's a configuration or connectivity problem, let's not put up the modal that allows you to set up an alert on the chart. But I do see — I can argue that we need this in spite of this issue; let's just agree, just for keeping this issue scoped.
B
We can start working — maybe work on an MVC and say: okay, when we are just waiting for data and we don't see it on the chart, let's make sure users will be able to set up an alert; if there is a connectivity or configuration problem, for now let's not do that. But I think that deserves another revision when we implement this — maybe we add it as well — because in a normal monitoring solution...
B
...alerts are defined on the fly: you set up the metric, and while you're setting up the metric you can set up the alert. You don't wait for the data, and you don't care if there is a connectivity or configuration problem — you fix that afterward. You need to make sure the alerts are out there the second the data is in the system, you know.
C
I like the ability to be able to set those alerts up, because a lot of times you set up an alert and then you don't even see it on the page until the data starts coming in. So I think this satisfies the law of least astonishment — it's astonishing when you don't see the ability to do alerts just because there was no data.
H
If that's the case, I think maybe we should go with what you were proposing in your latest comment on the issue: allow setting up an alert whenever there's no data, for whatever reason, regardless of whether it's a connection or a configuration problem. And then I think we should do some solution validation around it — some testing with users, to see if our assumptions will be confirmed. But if that's the expectation — to be able to set up an alert on the fly, without waiting for the data to come in — I...
B
The only problem that we see is, as David mentioned, we are waiting for some data: when you define an alert there is a drop-down and you see the metric name. If you have a configuration problem or a connection problem at the beginning, you probably won't even get the metric name, which means you won't be able to define the alert. So some data — or at least some pieces of information — need to come back; it could come back from Prometheus, I'm not sure. But what I'm saying is, if we don't have this information, we cannot...
J
It's a technical-debt item: it's pretty much because of an issue that was addressed in this milestone, which is pretty much changing the requirement of a Vue component property from true to false. Based on tests, I just created this technical debt issue to iterate upon that; it should be very easy.
B
Essentially — once the clone of a dashboard gets implemented, apparently it will just take the common metrics, the YAML file, without the custom metrics. That's not really duplicating; it's not really doing what it needs to do. It's not the well-defined action: when we say we clone a dashboard, I think it means we need to clone all the metrics in the charts. So today only the front end will be implemented?
L
It would be a back-end issue. So right now the implementation is that, basically, the section looks for the template YAML file without checking the database, where the custom metrics are defined. So the next step is to take the section, check the database for custom metrics, and somehow inject those custom metrics into the YAML file which is being created as a result of this action. Cool, thank you for that.
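The back-end step described here — load the template YAML, then fold in the custom metrics stored in the database — could be sketched roughly as follows. The key names (`panel_groups`, `panels`) and the plain array standing in for the database query are illustrative assumptions, not the actual schema:

```ruby
require 'yaml'

# Sketch: clone a dashboard by parsing the template YAML and injecting
# custom metrics (here an in-memory array standing in for a DB lookup)
# as an extra panel group before the result is written out.
def clone_dashboard(template_yaml, custom_metrics)
  dashboard = YAML.safe_load(template_yaml)
  dashboard['panel_groups'] ||= []
  unless custom_metrics.empty?
    dashboard['panel_groups'] << {
      'group'  => 'Custom metrics',
      'panels' => custom_metrics.map { |m| { 'title' => m[:title], 'query' => m[:query] } }
    }
  end
  dashboard.to_yaml
end
```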
E
I think the plan currently is to start with the general no-data state — then we at least have something — and then we can iterate and add those other pieces in. So yeah, we're gonna iterate on it: start with just the general no-data state, and then we'll add each of the different states — timeout, connection failed, connection required — as we go. That doesn't necessarily mean it'll span multiple milestones, but it will have multiple iterations; maybe it spans a milestone or two. You know, I don't know, we'll see, depending on...
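The iteration plan above — one generic no-data state first, then the more specific states — could be sketched as a lookup that falls back to the generic state for anything not yet implemented. The state names come from the discussion; the messages are placeholders:

```ruby
# Generic no-data state first; specific states can be added per iteration.
EMPTY_STATE_MESSAGES = {
  no_data:             'No data to display.',
  timeout:             'The request timed out.',
  connection_failed:   'Could not connect to the metrics source.',
  connection_required: 'Connect a metrics source to see data.'
}.freeze

# Unknown or not-yet-implemented states fall back to the generic no-data copy.
def empty_state_message(state)
  EMPTY_STATE_MESSAGES.fetch(state, EMPTY_STATE_MESSAGES[:no_data])
end
```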
B
I'd like a summary if we have some sort of a decision, because I found it very hard — you know, when I'm working on an issue and we come to a consensus, it can take me a week or two to go over it. I mean, some of the issues that I went through with you today are issues that I covered like two weeks ago and never touched since. I also sometimes forget doing that.
A
You know, maybe ten of them didn't have that — I'm guessing; that's probably not accurate — so we should probably just go through and make sure that things are ready, today or tomorrow, just to make sure we don't have any other questions and just get them into the right state. So I just put that note in here as well. Yeah.
B
Thank you for that; that's a good comment. Yeah, let's make sure to have this, because when I'm planning the milestone, if I see a workflow label like "planning breakdown" or something like that, I will move it to the next iteration. So if you want to make sure that those issues are in the iteration — and we don't have any more discussion, at least nothing open — let's label them as ready for development. It will make my life easier, and also make your life easier, because you won't start working on the issue prematurely.
B
Thanks. Yes, so the next announcement is that, apparently, all features in Monitor are moving to Core. This is the epic — we got the approval from Sid, and yeah, this is the epic, and you can see Sid's comment. So here, this is Sid's comment where he approved it, which means that we will move all of them to Core. What does it mean? It means that I'll need to create an issue for each one of those.
B
For each one of those areas, probably — this is an epic, so I'll open issues and we will schedule them, you know, in an iteration. We won't do all of them in the same — in a single iteration; we will spread it across multiple iterations, obviously, and if we need to break it down even further, we'll do that.
A
I just had a question, since I haven't been through this: what is the process? What are the coding changes that are going to be needed? Do we just do checks to see what the environment is, or what the status is of the customer — whoever is using it — and then allow it or not? I don't know — someone who's been here can answer.
E
I don't know — it's been a while since I've seen it for Monitor specifically. I don't know if that's happening on the back end; I think most of it happens on the back end, although there may be some front end as well, if we're passing that through that way. But the actual tools that we have for doing that are in Ruby, so it'll definitely involve at least back end, if not back end and front end, to make it happen. But yeah.
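A minimal sketch of the kind of tier check being discussed, in Ruby since that's where the tooling lives. The feature table, tier ordering, and method name are all assumptions for illustration, not the actual implementation:

```ruby
# Hypothetical mapping: the minimum tier at which each feature is available.
FEATURE_MINIMUM_TIER = {
  custom_metrics: :core,
  alerting:       :core
}.freeze

# Hypothetical tier ordering, lowest to highest.
TIER_ORDER = %i[core starter premium ultimate].freeze

# A feature is available if the instance's plan is at or above the
# feature's minimum tier; unknown features or plans are denied.
def feature_available?(feature, plan)
  minimum  = FEATURE_MINIMUM_TIER[feature]
  plan_idx = TIER_ORDER.index(plan)
  return false if minimum.nil? || plan_idx.nil?
  plan_idx >= TIER_ORDER.index(minimum)
end
```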
F
Yes, I had one question. Now that all these features are gonna go to Core — which means we're gonna have more user interaction, more user volume, I guess — should we revisit all the telemetry stuff, so that we're capturing the right metrics and we have enough data? Because there's a great opportunity: we're gonna have increased user activity volume. So just go through all the actions — are we really tracking everything from an instrumentation perspective — and get those things straight before we release it to the wild, I guess. I don't know what you...
B
I think, personally — regardless of whether we have a large user volume or not — we need to capture all the telemetry that is possible, so I'm trying to be on top of that. Sometimes it's hard, sometimes I forget, but ideally every new feature that we implement should have a follow-up MR or an issue on capturing the relevant metrics around it. I think this is an important piece of information we need in order to make our product better, so I absolutely agree with it.
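As a purely illustrative stand-in for the instrumentation follow-up described above: each user-facing action gets a named event that is counted, so usage can be reviewed before the move to Core. The event names and class are invented examples, not the product's actual telemetry schema:

```ruby
# Toy in-memory usage counter: one named event per user-facing action.
class UsageCounter
  def initialize
    @counts = Hash.new(0)
  end

  # Record one occurrence of the given event.
  def track(event)
    @counts[event] += 1
  end

  # How many times the event has been seen (0 if never tracked).
  def count(event)
    @counts[event]
  end
end
```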
E
Yeah, that's a really exciting change, I think. Even just with more people invested in it, we might even see an uptick in community contributions — that would be an ideal situation as well. If people are invested in it and care about it, we may be able to leverage that too, which would be great for, you know, velocity and getting there faster. But...
G
That's done — that was actually merged a couple of hours ago. Oh good, yeah — so it was pretty good; we finally found a middle ground. It took a day or two to finalize, and it's now good to go. That was blocking another MR of mine, and so this is now unblocked and we can start really working on features. Yes.
G
Well, that's what I mean: in terms of code, adding the features, I think it's probably gonna be pretty straightforward. The only thing I would be worried about is refreshing automatically, because there are some architecture changes that we need to do, and I think David has been looking into that. But in terms of just adding the drop-downs and the filters — that should be pretty easy now that we have the new API.