From YouTube: Verify:Testing Group Think Big #16
Description
Think Small today about open metrics and the custom metrics reports available in GitLab.
Think Big portion: https://www.youtube.com/watch?v=bWqoHtS1Mco
A
So this is the Think Small for the Verify:Testing team. It's April 22nd, 2021. Last week, in the video that I'll link below, we started with a Think Big around OpenMetrics. Now that that standard is actually published and is used by our custom metrics reports feature, we're interested in exploring new use cases: the other ways that customers and users could use it.
A
Just to sum up last week: we talked about an ideal outcome for users, which would be to create custom metrics as part of their builds or pipelines, like job timing, package size, or total pipeline time. Those are some examples of things they could track and then see how those trend over time at the project level, and probably roll that up into the group level.
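For reference, metrics like the ones mentioned above would be written in the OpenMetrics/Prometheus text exposition format. A minimal sketch of a job script producing such a file might look like this; the metric names, help text, and values here are hypothetical, chosen only to illustrate the format:

```python
# Minimal sketch: write gauges in the OpenMetrics-style text format.
# The metric names and values are illustrative, not a shipped default.

def write_metrics(path, metrics):
    """Write {name: (help_text, value)} as simple gauge entries."""
    lines = []
    for name, (help_text, value) in metrics.items():
        lines.append(f"# HELP {name} {help_text}")
        lines.append(f"# TYPE {name} gauge")
        lines.append(f"{name} {value}")
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

write_metrics("metrics.txt", {
    "package_size_bytes": ("Size of the built package.", 1048576),
    "pipeline_duration_seconds": ("Total pipeline time.", 312.5),
})
```

A job would emit a file like this and publish it as an artifact, which is what lets the values be charted over time.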
A
One of the examples that Kevin from the Release Monitor team came to us with was a prospect he had talked to who wanted to gamify pipeline time amongst their teams and have a leaderboard of it, which is super interesting. So, grouping those things together, either by group or some other logical unit, is a whole other can of worms that we've talked about for various other bits that we want to group together outside our normal grouping logic.
A
So that is our kind of high-level ideal situation: customers creating and capturing custom metrics utilizing OpenMetrics. Did I miss anything in our recap, or from the notes that I reread this morning?
A
All right, well, let's jump in. We talked about kind of the big vision: you have an awesome graph that shows you how everything you've defined for custom metrics is tracking over time, plus a bunch of other cool, smart stuff. Drew mentioned a lot of extra analytics that maybe we can make available, like here's the 95th percentile, the 99th percentile, and here are the outliers to those where you should pay attention. That's our long-term vision. Working towards that, what could we do in our next milestone if we put together an issue today for 14.0?
C
No
there's
there's,
I
I
like
this
idea
put
a
number
somewhere.
This
is
this
is
great.
Well
because
what
something
about
this,
what
what
numbers
can
we?
What
number
would
be
easy
for
us
to
automatically
put
there
right
to
like
essentially
like
seed
the
data
structure
right?
So
so
people
aren't
starting
from
nothing
right.
Maybe
they
don't
know
what
the
open
metrics
format
is
off
the
top
of
their
head.
They
don't
have
to
google
it
like
we
just
throw
in
like
any
old,
dumb
metric.
C
If we put any metric in there, no customer will have to come up with their OpenMetrics reporting from whole cloth. It's always easier for me to go into something existing and add one more thing than it is to go invent the whole thing.
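For context, the custom metrics reports feature picks the metrics file up through the `artifacts:reports:metrics` keyword in `.gitlab-ci.yml`. A seeded default along the lines discussed here might look something like the following fragment; the job name and the echoed metric are illustrative, not a shipped default:

```yaml
# Hypothetical seed job: publish one trivial gauge so users start from a
# working example instead of inventing their metrics file from scratch.
seed-metrics:
  script:
    - echo "example_metric 1" > metrics.txt
  artifacts:
    reports:
      metrics: metrics.txt
```

The idea is that a user extends this with one more line per metric they care about, rather than writing the whole structure themselves.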
A
So if we just pulled something out of CI, for instance: you have to be running a pipeline to utilize this, because it's going to be a job that tracks it, right? Total job time might be available, or total pipeline time; that's always going to be something you have if you run a pipeline. You might not have tests, you might not have deploys. Yes, you should have commits... maybe, probably, maybe not. I'm trying to think about what the most stripped-down metric is.
C
There's a time-travel problem in there: we can't write the amount of time the job has taken while the job is still running. Yeah.
D
That doesn't really give us much, does it, though? Is there a way we can, on pipeline completion, edit an artifact or something? I don't know if that's very... I don't know.
A
There's
probably
something
else:
a
number
of
jobs
scheduled
to
run,
or
something
like
that
that
we
could
include.
That
is
a
little
more
there's
less
time.
C
And I'm also just thinking that seeding the number, even an especially useless one, might drive engagement.
A
Even if it's not useful, if it runs really quickly and it's an example of how you could track this thing, it encourages them to look into how they might use this to solve their own problems, especially if we then link them to a library or examples of other things that we track at GitLab, like, hey, we're tracking all of our gem sizes, we're tracking whatever else we actually track with custom metrics, because we use it for a lot of stuff. That will prompt them to figure it out.
D
Are we at all concerned for customers that are... I mean, it wouldn't really be a big artifact, because it's a text file, and we could probably count the number of bytes it would take up. But if they're running, like, tons of pipelines and they're still concerned about their storage for artifacts or something... I don't know, I'm just concerned that there are going to be one or a few customers that are like, hey, why are you automatically creating an artifact? We don't want this.
A
And we could even do it as an informal template: not actually spin up a project, but here's an example of some syntax that you could use, insert it, and go from there, maybe beyond what we already have in documentation today.
D
What were you gonna say?

C
I'm just thinking it seems like more of a... not "include this template for the complete solution," but "include this template to get started with OpenMetrics": you include it, but you'll be expected to expand on it, because it's a completely open-ended reporting system.
A
How is this trending over time... So, moving on to, I guess, the riskiest assumption we're making here: are we assuming things that, if they're not true, would blow up, so that we would never get to our ideal outcome for a customer?
C
Yeah, I think we talked a little bit last week about how OpenMetrics is still a little undefined; by and large, Prometheus follows it. The assumption is that people follow the kind of standardization that Prom has put together, that that would be the format, and that it enables people to potentially output those artifacts, or grab those artifacts and put them into other places as well, for visualization.
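As a sketch of that interoperability, pulling values back out of the artifact for use elsewhere could look like the following. This is a deliberately minimal parser that only handles bare `name value` lines (no labels, no timestamps); real tooling would use a proper Prometheus/OpenMetrics client library, and the metric names in the sample are hypothetical:

```python
def parse_metrics(text):
    """Parse simple 'name value' lines from an OpenMetrics-style text
    file, skipping # HELP / # TYPE comment lines. Labels not handled."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.partition(" ")
        metrics[name] = float(value)
    return metrics

sample = """\
# HELP pipeline_duration_seconds Total pipeline time.
# TYPE pipeline_duration_seconds gauge
pipeline_duration_seconds 312.5
jobs_scheduled_total 7
"""
print(parse_metrics(sample))
```

Because the format is plain text and standardized, a script like this could feed the same artifact into another dashboard or database.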
A
Yeah, I think you're right on with that. That's a risky assumption, that people want to use that. I think we've proved, or we have evidence that implies, that customers want to track additional things that we're not giving them as part of their builds.
A
I think a lot of the really interesting problems to solve are around build metrics themselves: pipelines, which jobs take the longest. I hadn't really thought before about the problem that you can't get metrics on a pipeline until the pipeline is done running, so you can't do it within the pipeline.
A
That's interesting to expose and think about, but I think there is appetite there to track other stuff as part of these pipelines, which is interesting. Any other assumptions around the use cases that we talked about, either the Think Small or the Think Big?
A
So, thinking about both our Think Small and our Think Big, our ideal outcomes here, and the things that we could potentially build in our next milestone: what's the hardest problem here to solve?
C
I think you're right about the open-endedness; the chart is probably the most obvious place where this shows up. We don't know what people's solution is going to look like; we don't have an end goal where we can say, oh, we can automate this much of the process. We have no idea where people want to go with it.
C
Yeah, and internally on GitLab.com we should probably do some monitoring, some informal, quiet surveying of metrics artifacts, to see what people are actually using them for. That people want a time-series chart is already an assumption about what kind of data is going to be in there.
A
Cool. I think, along with displaying that data, there's the storage of that data. That's something that we're doing more and more often lately as a team in some of our feature sets, but it's always something to keep in mind: how it grows, how we deprecate or don't deprecate, how we sunset data at the end, what our retention window is, to keep those tables small, potentially.
A
Awesome. Two minutes left; any other things we want to dig into here on OpenMetrics? Anybody could just open up the format documentation, read through it, pore over it, record that, and put it on Unfiltered.