From YouTube: Monitor:APM Weekly Meeting - 2020-07-29
Description
Weekly meeting for the Monitor:APM team
A
Okay, so we just went over the retrospective. The next thing I wanted to call out, or just ask for opinions about, is making our retrospective issues public. There's a little bit of conflicting information, and I'm curious what other people think. Basically, right now, when the issues are created, they're confidential to our team, and there's something in the handbook, which I put a link to here, about what to do after a retrospective.
B
There was a conversation a while back with different engineering managers, and different teams had different ways of doing things. My personal preference is how we've been doing it, where we just have our own internal issues and we try to propagate the main themes to the public retrospective.
B
I think that does a good job of representing the team, while also giving people space to share more candidly within the team rather than with the whole company or the whole world. It's also a little bit easier process-wise, because then we don't have to check with everyone, like, "are you okay with this being public?", or deal with people accidentally thinking it's not public when it is, and the potential emotions that might arise from misunderstanding how that would work.
A
So I don't need to change anything just for the sake of changing, but if anyone feels strongly one way or the other, or the opposite way, that we should make it public, then we can talk about that too. For now, maybe we'll just keep going like we've been doing; I think that is a good path. So thanks. Okay, Jose, yeah, the next one.
C
Yes, I wanted to talk about custom metrics dashboard templates. For this milestone, we are working on bringing custom metrics dashboard templates to users, either via the repository view or at the instance level. That means, if system administrators want to add a new metrics dashboard via Ruby, they can do that.
C
They can also add them via a specific folder in the repository, and then they can be selected via the Web IDE or the standard blob repository view. One thing that Nadia noticed during the discussion on a couple of merge requests is what kind of default templates we should ship this feature with, because currently I'm just shipping a standard single-stat one in the interest of time. But perhaps, for discoverability, we might be interested in shipping either multiple templates showcasing every single type of panel that we use, or just one large template that has every single panel type and is well documented, so people can understand what's going on. Yeah, I just wanted to get your input. We can still ship this with the single-stat one and keep adding more templates in future iterations, of course.
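For illustration, a single-stat dashboard template of the kind discussed here might look roughly like this. This is a sketch only: the file location, metric id, and query are assumptions for the example, not details from the meeting.

```yaml
# Hypothetical metrics dashboard template kept in a specific repository
# folder (e.g. .gitlab/dashboards/), selectable from the repository view.
# Structure: dashboard -> panel_groups -> panels -> metrics.
dashboard: 'Example single-stat dashboard'
panel_groups:
  - group: 'Overview'
    panels:
      - title: 'Memory usage'
        type: single-stat
        metrics:
          - id: example_memory_usage        # hypothetical metric id
            query: 'max(container_memory_usage_bytes) / 2^20'
            unit: MiB
            label: Memory
```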
D
So I believe we should ship multiple templates instead of a single one. It would be easier to focus on a given use case, especially since we are targeting this, I guess, at newcomers to our monitoring section. It'd be easier for them to focus on a single feature, like: okay, here's the single-stat panel, or here's the area chart, etc.
D
And since we also have the duplicate-dashboard feature, once they create their own custom-tailored dashboard, they could copy it and use it as a template. So that's my point of view on this matter.
D
It is not a recommended or documented solution to do so. Technically they are able to, but it involves adjusting the GitLab installation files, which we do not recommend, and where we don't necessarily bind ourselves to maintaining any kind of backward compatibility.
D
So there is only the other solution that you laid out: the GitLab instance administrators could enable a special project. It's not a folder within each project; it's one special project or namespace within the GitLab instance, which hosts a number of different kinds of templates, including already existing ones like the Dockerfiles or the GitLab CI templates. And this is not a core feature; it is a Premium feature as far as I know. I just wanted to clarify that, so it's known.
E
So if we would like to enhance that, then I suggest we first rely on real telemetry: see if more users are using a specific chart, add that chart, and continue as we go. Obviously there is also a bit of a chicken-and-egg problem: if you don't have examples, maybe fewer people will use it. But at least for the first one or two templates.
C
Sounds good. I also wanted to talk about the context that we ship these templates with, because the default common metrics YAML file that we use to build the default dashboard we ship with is heavily tied to a Kubernetes context, which means, if you want to have your metrics displayed...
C
You need to have a GKE cluster, or a Kubernetes-enabled cluster with the labels that we're using. Otherwise, even if the queries are correct, you're not going to get any data. So in the interest of discoverability, I was thinking of perhaps changing these to a Prometheus context, where we would use the default labels from the node exporter, and use CPU, memory, and network throughput. That could actually make it easier for people, because they would just need a standard Prometheus server.
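As a sketch of what a node-exporter-based context could look like, the queries below use the standard metric names that node_exporter exposes on any plain Prometheus server, with no Kubernetes labels required. The panel layout and titles are illustrative, not the actual template.

```yaml
# Illustrative panels using default node exporter metrics
# (node_cpu_seconds_total, node_memory_*, node_network_*),
# which work against a standard Prometheus server.
panels:
  - title: 'CPU utilization'
    metrics:
      - query_range: '1 - avg(rate(node_cpu_seconds_total{mode="idle"}[5m]))'
        unit: '%'
  - title: 'Memory available'
    metrics:
      - query_range: 'node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes'
        unit: '%'
  - title: 'Network throughput (in)'
    metrics:
      - query_range: 'rate(node_network_receive_bytes_total[5m])'
        unit: 'B/s'
```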
E
I think it's a great idea. We are tied into this concept of Kubernetes monitoring, which is good, but in reality a lot of our users are either using hybrid setups or not using Kubernetes at all. Our monitoring solution will still work for them, but it's going to be challenging: the default dashboard is not working, and you need to create environments and other stuff like that. So if we're able to have something that will just work, it doesn't matter...
E
Whether you have Kubernetes or not, if it's not tied to that limitation, it will be awesome. And I would say, if we can think about maybe adding another dashboard: we have the default dashboard now, and we are working on adding a pod health dashboard. So if we can maybe take this, augment it a bit, and say, hey, this is a dashboard that will always work out of the box even if you don't have a Kubernetes instance, that would also be awesome.
C
I think we can actually add a prefix and say, like, prometheus_cpu_average, and that will just be in a Prometheus context, and then we can swap the prefix to kubernetes and say, hey, it's the same thing, but for a Kubernetes context. Yeah, it's a good idea. Clement had a question; I don't know if you want to voice it, or...
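The proposed prefix convention might look something like this. The metric ids are hypothetical, just to show the naming idea of one logical metric per context:

```yaml
# Hypothetical metric ids illustrating the proposed prefix convention:
# the same logical metric, defined once per monitoring context.
metrics:
  - id: prometheus_cpu_average   # plain Prometheus / node exporter context
  - id: kubernetes_cpu_average   # same metric, Kubernetes-labeled context
```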
C
Awesome. Just to quickly summarize the question: Clement asks if we needed some feedback from the SRE team. I asked Cindy from the SRE team, who is in charge of all the metrics that we have here at GitLab, and she showed me that they don't use the Kubernetes exporters for their own monitoring; they actually use the standard node exporters. So...
C
Awesome. So I'll update the issue, and I will mention that for our templates we want to keep a Prometheus context, and we will work on also having Kubernetes templates for that as well, but for our first iteration we're going to stick to the Prometheus ones. Thank you. I think you have the next point.
E
Sorry, I was sharing and looking for the right button. Yeah, I just want to call out an epic that I'm working on: bringing Metrics to Complete. As you know, one of our solutions is Metrics, and we want to mature it from Viable to Complete. On the work for that: I know there is a date, which will probably need to move and be pushed out, because we didn't take into consideration the realignment that we had in the team.
E
So there is a set date for it to be Complete on our maturity page; disregard that date and don't get pressured by it, because I don't think it makes sense anymore: the work that we need to do in order to move Metrics to Complete is pretty big. It's based on some of the discussions that we had in the past around our roadmap, and I wanted to break it down.
E
We have it in our handbook, and now I want to break it down into real issues. So I bucketed them by several themes, which are also epics: agent configuration, metrics instrumentation, visualize data, and alerts. I have to say that maybe alerts will become a category of its own.
E
But for now I said, let's add alerts there, because they're a part of the Metrics solution today; maybe, you know, we will cut them from here and put them in a different epic.
E
And basically I took a lot of the work that we have done to date and tried to bucket it by epic and sub-epic. So the work that we do around the default metrics is part of visualizing the data; alerts; metrics instrumentation, where we want to enable our users to instrument the applications they are deploying using GitLab; and the configuration work that we just discussed briefly on some issues, about how we can set up multiple Prometheus instances.
E
I'm not 100% sure this is a finalized list, because there are a lot of issues that I keep remembering we need to add. But think about those buckets: when you create issues, let's try to map them and see if they fit into one of those epics. I think it will help. First, it will help us organize the work.
E
It will also help show the status of what we are working on: you know, if we're working on data, what does that mean, and what part of the roadmap is it? And of course, if you have feedback, or things you think I forgot that we need to add, feel free to mention them. I'm sure we have hundreds of issues.
F
Comment: Yeah, I just wanted to point out, because it's something that's coming up in a couple of different areas, that in order to move maturity levels we want to make sure there's enough time allotted to complete the category maturity scorecard. It is a little time-consuming, and that's why I want to bring it up: when you're creating your plan to do this, we need to consider how to fit it in from a timing perspective.
E
Yeah, we were actually one of the first to do this exercise, I think, with logging. So yeah, it can take... I mean, it really depends. Clement, to answer your question: it really depends on what the jobs to be done are. For instance, for logging we had to set up this entire environment. We simulated a problem that showed up in the logs, and then we presented the user with a scenario saying: hey, there is a problem, now go find it in the logs.
E
That's your task, and the user had to find their way around, and we'd see if they were able to discover the problem. The main work was to create the environment and to fake the data; that was most of the work, and obviously recruiting the people. So I would say it really depends. Based on the work that we did on logging, I think we can improve; I think we can make it faster. Logging was a bit more challenging, I think, than this.
A
All right, thanks. We're out of time here. I know there's a couple more things on the agenda, but maybe we can cover those next week, if that's all right. People can read through them; I think the product goals in particular are very worthwhile to talk about and interesting to review for everyone. So read through those, and then we can discuss them next week. All right, thank you everyone, and have a good rest of your day. Thanks.