A: Hello, everyone. This is Bob from the Scalability team and Eric from the Verified Testing team, and we're going to be talking about how to work with the group dashboards that we've just built. Eric added some metrics regarding the artifact parsers that their team is using to gather data on the test suite and so on, and now we want to add these metrics to the testing dashboard.
A: We can remove these panels: since the testing team is a relatively young team, there's not a lot happening on the Git fleet or the API fleet, and everything is all in Sidekiq and the web fleet. To modify the dashboard, there's a handy link here at the bottom that links to it in the runbooks repository. So let's click that.
A
And
right
now,
as
you
can
see,
it's
quite
limited,
but
that's
the
file
to
get
to
get
started
with.
I've
already
got
this
cloned
right
here.
So
let
me
open
that
file
there.
A: So, to work with this: in the runbooks repository, this all lives in the dashboards folder. There's an extensive README that helps you set up the tooling to start contributing to the dashboards, the things that you'll need to install. Mainly that's going to be Jsonnet, which is like a JSON generator language. That's all documented here.
A: You also need the Grafana API token that's stored inside 1Password. I'm not going to show it here, but once you've got that all set up, there's a bunch of scripts that help you contribute to the dashboard. So we'll start by loading the Grafana API token into our environment (trust me, it's in there; I'm not going to show it), and then we can do a test dashboard, passing the path of the dashboard that we're going to be working with, and that's located at...
A: This one. And that's going to spit out a URL to a snapshot of the dashboard, what it currently looks like. So if we open that, we get a copy of the dashboard we just saw. This snapshot is only going to live for a limited time (CI cleans it up for us), so generate as many as you like. It's also handy for a reviewer later if you link it from the merge request description, because that way they can see what's going on. So let's make a change here.
A: And it's called... So what we can also do now is remove the panels that are empty. I don't know exactly how that's done, but we can see here that the helpers live in this file. So let's look at that.
A: There we go. Next up, let's have a look at the metrics that are already there. And Eric, are you somewhat comfortable with the panels, and showing us the metrics you've...
B: ...added? Or should I... I'm not yet; I'm still getting used to it and learning around querying, so I'm not that confident yet.
A: Okay, so I'll take it, but you'll have to help me with the metric names and the parser names and stuff like that, because I don't know them. But you've added a histogram, that is this one.
A: Yeah, the rest are much lower. Okay, so...
A: So a histogram generates a few metrics. Let's look at it through the autocomplete: if we take ci_import_parser_duration, you see here that we have the seconds sum, the seconds count and the seconds bucket.
A: The seconds bucket is going to include the le label, and that's all of the buckets that you've defined on the histogram in the code. The duration seconds count goes up every time that metric gets recorded. And the parser duration seconds sum is going to be the sum of all durations.
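[The three series described here can be listed as follows; the metric name ci_import_parser_duration_seconds is an assumption based on what is said in the recording and may differ from the real name.]

```promql
ci_import_parser_duration_seconds_sum    # sum of all observed durations
ci_import_parser_duration_seconds_count  # number of observations recorded
ci_import_parser_duration_seconds_bucket # cumulative counts per "le" bucket
```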
A: Okay, yeah. So the thing that we're looking at here now, the rate, is going to show us how many parsers ran per second over the last hour.
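[A rate query along the lines of what is shown on screen might look like this; the metric and label names are assumptions based on the recording.]

```promql
# Parser runs per second, averaged over the last hour
sum by (parser) (
  rate(ci_import_parser_duration_seconds_count[1h])
)
```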
A
Makes
sense,
but
what
we're
going
to
be
more
into
well,
I
think
what
we're
going
to
be
more
interested
in
is
the
duration
of
those
parsers.
So
that's
why
you
added
the
histogram.
So
if
we
take
the.
A: Now we need the sum by parser and le. That's the bucket I was just talking about; that's the label I was just talking about, and it is actually the bucket.
A: I think we might have forgotten something. I know this is the 50th percentile, so yeah, the Cobertura parser is apparently the slowest.
A
What
is
generally
also
interesting
to
look
at
is
the
95th
percentile
and
I'm
bringing
this
up,
because
I
already
saw
that
this.
I
think
we
need
to
have
a
rate
in.
A: Yeah, because we can see the Cobertura parser here hits the ceiling; I think that's the highest bucket we put in there.
B: But it doesn't show anything above the last bucket; it doesn't include, like, the plus-infinity bucket.
A
Transitioning
from
env
to
environment,
but
specifying
the
label
sometimes
speeds
things
up
in
tunnels.
So
if
you,
if
you
actually,
I
would
recommend
to
only
specify
one
but
right
now
like
I
specify
both
because
it
might
make
things
go
faster
for
this
metric
because
it
doesn't
have
super
high
cardinality.
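[As a sketch of the point about label matchers: narrowing by a label up front can let the query engine prune series early. The environment value below is a hypothetical placeholder, not taken from the call.]

```promql
# Restricting by label before aggregating;
# "production" is an illustrative value only
sum by (parser) (
  rate(ci_import_parser_duration_seconds_count{environment="production"}[1h])
)
```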
C
Like
this,
what's
wrong
and
closed.
A
Then
we
see
that
there's,
let's
maybe
leave
out
the
rate
and
then
we
can
see
how
many
parsers
ran
and
ended
up
in
the
infinity
bucket.
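[Because histogram buckets are cumulative, the plus-infinity bucket counts every observation, so a query like the following (names assumed from the recording) gives the total number of parser runs per parser.]

```promql
# Total runs per parser: le="+Inf" includes all observations
sum by (parser) (
  ci_import_parser_duration_seconds_bucket{le="+Inf"}
)
```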
A
So,
as
we
can
see,
these
are
all
unknown,
so
that
would
like
I'm
looking
at
the
spread
and
it's
quite
good.
We
just
need
some
more
high.
A: I'm going to leave that up to you, but as I mentioned on the merge request, the cardinality grows quite a lot if you add buckets to the histogram, because it's the number of labels times the number of buckets, and we're mostly going to be interested in the parsers that run too slow.
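[The cardinality point can be checked directly: a histogram produces roughly one series per label combination per bucket, plus the sum and count series. A query like this (metric name assumed from the recording) counts the bucket series.]

```promql
# Number of bucket time series exposed by this histogram;
# grows multiplicatively when buckets or label values are added
count(ci_import_parser_duration_seconds_bucket)
```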
A
So
I
would
perhaps
keep
the
five
bucket
and
then
add
a
few
above
that,
but,
as
we
could
see
on
the
graph
before
the
the
difference
is
quite
hot,
like
there's
a
big
difference
between
the
parsers,
so
perhaps
we
would
need
to
yeah
specify
different
buckets
for
different
parsers,
but
let's
start
with
what
we
have
and
here
we're
looking
at
the
wrong
one
here
I
meant
this
is
quite
a
big
difference
between
the
parsers
okay.
So
now
we
want
to
show
this
on
the
on
the
testing
panel
on
the
testing
dashboard.
A: Everything you want to add to your dashboard, you basically chain to this dashboard.
A: So we're going to add a new grid, and we're going to add a little title.
A: Hold on a second. Let's create an array of the metrics, of the parsers that Testing is interested in. Which are those?
B: Eric... which parser? That's all over tour.
B
I'm
thinking
it
may
help.
Yes,
I
guess.
A: And there we go; now we only have the parsers that we're interested in. I see we're already past the hour, so I would suggest maybe I'll finish up adding the histogram as well, then assign it to you for a first review, and then I'll chat with some of my team members to take the second review. Is there anything else that you have questions about, or that you want me to add to the dashboard while I'm at it?
A: The duration is what I'm going to add next, but I see that we've already been in here for 40 minutes, so I would maybe do it offline, because right now it's more like just watching me type queries. So, yeah.