From YouTube: Support Metrics Analysis Workgroup - 2020-10-13
A: Happy Tuesday, folks, and happy belated Thanksgiving in Canada, as of yesterday. Metrics working group today; we're getting near the end. I updated our page last week, and we're almost done with our exit criteria.
A: The work, of course, of getting the metrics into the right place is not yet done, but we're getting there. First thing on the agenda is to take a look at the persistent link items, including our metrics. Elia, would you take us through those?
B: Will do. Right, so for SSAT: we're in week 42, but it's just the beginning of the week. We had only one survey in Premium basic and two in Silver, and one of them was a poor one; that's why we're at 75 percent, but it's out of three surveys, so it's still yet to be seen. Anyway, we're keeping above the 90 percent; FRTs seem to be very stable, and notice we're also hitting the target again this week.
B: Week 42 is just beginning, so we still have some time to go over that. For L&R, I believe we have very good results for FRT.
A: Thank you. Yeah, just for posterity in the recording: we had our pizza party on Friday to celebrate the L&R results; that was really fun. So if you watch this in the next 90 days, come to the Slack support pizza party channel. Let's see, what's number two? Number two is the hypothesis action items above.
A
I
think
the
only
ones
that
are
left
are
mine.
Number.
Six:
hey.
Did
we
decide
to
close
that
one
out,
frt
hawks,
I
thought
we'd
a
didn't,
write.
Okay,
so
we
didn't
close
it
out,
but
we're
not
prioritizing
and
then
number
nine
sas
tickets
have
gotten
harder.
A: In Looker's defense, Looker was a different product, so even though it did the same thing, at least that one... that's when they changed the names. I'm going to be saying, and we'll all be saying, things that have since changed in GitLab for a long time, I suspect. Okay, so I think the bulk of what I want to talk about today is getting hypotheses into a state where they're a repeatable analysis.
A: I mean, so right now... let's say today our SSAT is below where we want it to be. We're going to form a group of managers, and they're going to take a look at it. Actually, that's a bad example, because we haven't done any SSAT-specific ones. So say FRT was looking bad in a few weeks: we form a group of managers and we say, okay, go figure out why FRT is bad, and they're like, okay, I have some ideas; these are my hypotheses.
C: Couldn't we use some of the Explore queries that were maybe set up? Because I think for a lot of the hypotheses we set up new queries and looked at different data, trying to find the right data points, and maybe that's a part that could be reused.
B: Yeah, I mean, I've checked the hypotheses. Some of them do have that, like the PTO one: we have the spreadsheet, and the hypothesis about ticket volume has the links for the reports. But yeah, there are a few that are based on spreadsheets, and we can do a better job there to define what that means.
D: Sure. My gut says a runbook of sorts, right? Because we're now tracking some things week to week and looking at trends, which we hadn't been doing; a lot of what this team has done has been to take three months of data and go back and try to validate a hypothesis. So I think there's a sort of runbook to say: validate that this isn't the case, this isn't the case...
D: ...this isn't the case. Because my supposition is that we'll run into new problems that will need new data. So you kind of run down this list of things we've already covered, things that potentially could rise up in the three-week trend period or whatever, and if it's caught, yep, that looks like that's the case, then great. If they go through the list and say none of these seem to fit, then there has to be this work of, you know: what's the new data we need?
B: So once we have all the reports, we can just make sure that they are associated with the original support metrics dashboard and link those pages directly, so it would be easily found.
A: I mean, or an entire new dashboard with tabs for each process, I mean.
B: It's possible that some of these already exist in specific tabs.
A: And then I guess the vision would be to link to a separate page where the runbook is spelled out a little bit more. And I guess we don't quite have a dashboard yet for that, but... and then specific actions.
A
It's
a
bit
confused.
I
need
to
get
this
a
little
bit
more
in
order,
but
so
if
regional
handover
is
causing
tickets
to
be
missed
and
we
link
to
how
you
figure
that
out
what
you
should
do,
what
you
should
monitor
ptos,
causing
problems-
not
much
you
can
do-
make
sure
you're
planning,
but
that's
kind
of
how
what
I'm
envisioning
is
having
a
sort
of
these
are
potential
reasons.
Why,
if
this
is
what
it
is,
do
this.
D: Yeah, I agree with that vision. I think the one thing we want to consider is the dashboards themselves.
D: A lot of times the dashboards are self-evident, and what the reports within the dashboards are measuring is self-evident, but we may need a description to say: here's what you're looking for in this report. To kind of zero in on, like, handovers: look at these time periods in this report as you're viewing it. And that might change, because it's going to be in your own time zone or whatever, but...
A: Subtly nodding heads; I'll accept it. All right, I'd like to continue our conversation from last week about shaping SSAT. I don't think it's in a good enough state yet that we can start a work group, and I'd like to actually see what the other managers do with what we have so far, specifically around SSAT; but we haven't really set them up with very much in terms of hypotheses for why SSAT might be bad, or how to analyze that.
A: So we didn't really leave off anywhere, but is there anything else that we could do to set up the manager group who's going to take a look at SSAT, for success? Are there other things that we can sort of pre-think about for them to think about?
B
I
think
that
a
nuanced
approach,
so,
first
of
all,
the
new
labels
from
rebecca
are
great,
I
think
the
analysis
of
the
labels
themselves
and
the
nuance
would
be
to
maybe
deep
dive
into
the
reports
of
those
specific
two
weeks
to
trying
to
find
a
common
trend.
B
So
presuming
we
have
a
two
like.
We
agree:
the
two-week
downward
trend
in
a
specific
category,
to
check
what
happened
in
those
two
weeks.
We
did
find
last
week
that
dot
com
and
account
were
not
labeled
correctly
in
the
feedback,
so
we
asked
jason
to
fix
that
so
starting
last
week
we
should
have
the
correct
ticket
forum
and
we
should
be
able
to
filter
out
the
specific
ticket
forum
which
feedback
labels
we
have
and
then
just
go
over.
The
comments
themselves.
D
Sense,
yeah.
I
think
I
think
that
the
challenge
one
of
the
challenges
with
that
is
it
can
be
very
point
in
time
without
being
able
to
solve
going
forward.
Okay,
and
by
that
I
mean
for
com,
we
could
have
three
outages
in
two
weeks
and
that
be
the
cause
of
a
lot
of
dissatisfaction
with
support
right
from
the
people
that
actually
respond
to
the
survey,
but
there
it
so
analyzing
that
coming
up
with
that
is
one
thing.
But
what
steps
would
we
take
to
be
any
different
right?
Is
there
a
different
communication?
C: I was wondering, just looking at the feedback. I've been tracking feedback for L&R for a while, as part of the stuff that I bring to the business fulfillment meeting, where sales and bizops and product and other teams are present. And what I've noticed, just in the trend, is that the feedback isn't surprising: most of the time it's about things that I already know are pain points for customers. Or maybe, in a given case, we could have worded things a little bit better and de-escalated the situation, so there are maybe things we could have done to avoid it; but a lot of the time it's just customers telling us what we already know.
C
Okay,
we
know
these
things
are
problems
for
customers,
but
at
the
end
of
the
day
they
have
to
be
prioritized
and
we
sort
of
have
to
work
with
them.
So
I've
I've
sort
of
struggled
with
what
to
do
with
with
the
feedback,
and
I
have
all
this
information,
but
I
have
to
say
from
tracking
all
of
that
I
haven't
found.
I
haven't
really
figured
out
what
to
what
to
do
with
it.
C
The
best
I
can
do
is
say
this
is
what
the
data
is
showing,
because
we
already
know
where
the
pain
points
are
and
people
I
can
make
people
aware
and
say:
look.
This
is
transparency.
This
is
all
we
have
and
then
it's
up
to
other
teams
and
and
us
as
well,
but
it's
all
in
a
process
to
sort
of
figure
out
how
these
are
going
to
be
prioritized,
and
that
takes
a
little
bit
of
time
and
it
does
get
better,
but
the
feedback
is
still
there
and
we
still
get
people
who
will.
C
Let
us
know
that
this
is.
This
is
a
big
pain
point
for
them.
So
I
don't
have
the
answer,
but
I
I
just
also
wanted
to
say
I've
been
thinking
about
this
and
I
I've
seen
some
managers
really
encourage
responses
on
that
feedback,
so
I've
been
paying
to
sort
of
reach
out
to
the
customer.
Again
and
say:
oh
you
know,
we,
we
see,
you've
submitted
feedback
to
us
and
the
thing
I've
realized
about
that
as
well.
Is
that
sometimes
there
is
sort
of
just
bad
news.
C
Yeah,
I'm
just
sort
of
laying
some
questions
out
there.
It's
things
that
I've
been
sort
of
thinking
about
for
a
while,
but
I
haven't
really
come
to
any
sort
of
answers
by
looking
at
the
data
so
far.
D
Yeah,
so
this
that's
actually
really
good,
because
I
think
one
thing
to
think
about
is
you're
reviewing
that
data.
If
there
are
things
we
already
know
and
they're
going
to
be
elongated
before
they're
solved,
then
setting
expectations
ahead
of
time
with
the
customers
right
and
saying,
here's,
what
we're
working
on
as
a
priority,
whether
it's
a
blog
post
or
response
to
tickets
or
whatever,
right
and
setting
the
expectation
at
right
out
front
as
the
ticket
gets
submitted.
D
And
you
know
whether
it's
a
first
reply
or
whatever
there
might
be
an
opportunity
to
stymie
that
dissatisfaction
by
setting
their
expectations
or
up
front
is
what
you've
run
into.
Is
a
bug
we're
still
working
on
it,
but
we'll
work
to
get
a
resolution
or
whatever
right.
And
so,
if
there
are
things
that
kind
of
be
associated
to
that
kind
of
upfront
expectation
setting,
because
we
know
it's
going
to
take
a
long
time
to
solve.
C: Yeah, absolutely. I think those are really great, to come back and say: by the way, we're working on it. And I think, for the most part, what I've seen is that we do sort of do that the moment we see someone's running into a bug; it's usually part of the conversation.
C
Some
some
people
still
go
well,
this
is,
you
know,
still
not
happy
with
it.
They
should.
C
I
should
be
able
to
do
this
and
that
I
can
completely
understand
there
are
things
where
it's
just
sort
of
a
rule
like
sorry
that
they
cannot
do
this
in
our
system
or
they
have
to
pay
for
this,
and
and
that's
where
the
conversations
get
trickier,
because
we're
not
we're
not
the
ones
sort
of
making
the
rules
and
setting
the
pricing
and
deciding
what
what
we
pay
for
and
what
one
can
and
can't
do.
C
Yes,
we
can
advocate
for
what
kind
of
features
people
want
and
what
kind
of
self-serve
things
they
want
in
the
customers
portal,
but
there
is
limited
control
and
product
has
done
a
really
great
job
of
looking
at.
What
we
feel
is
important
in
support,
but
product
also
has
other
stakeholders
that
they
have
to
look
after.
So
there's
there's
definitely
a.
C
I
think
we
can
say
well,
these
ones
are
product,
might
look
at
it
or
yeah
product
might
look
at
it,
but
we
don't
know
we
don't
have
sort
of
an
in
data
conclusion
where
we
can
say
this
is
going
to
happen.
We
could
point
customers
to
issues
and
say
this
is
being
discussed,
follow
there,
so
maybe
we
can
get
into
the
habit
of
being
more.
C
Maybe
advertising
our
product
work
a
little
bit
more
and
saying
this
is
coming
up.
Look
out
for
that.
So
that's
that's!
Maybe
something
that
we
can
look
at
other
types
of
tickets
are
maybe
processes
if
we
took
too
long
if
another
team
took
too
long
to
come
back
and
there's
a
handover,
so
there's
definitely
room
for
improvement.
C: That's maybe a question that, from a manager perspective, it'd be good to hear about: do we focus on the customers who do take the time to submit that feedback, or do we look at our data to drive our decisions?
D: I think some of the things you mentioned kind of point to: are we measuring SSAT anymore, or are we measuring CSAT? That is, satisfaction with support. If these things are out of support's control, should we be considering them as part of SSAT, versus, you know, this being a customer-satisfaction element, where there are multiple pieces to that satisfaction? So...
D
A
ticket-
yes,
it's
support,
that's
managing
it,
but
the
satisfaction
isn't
specific
to
support
it's
specific
to
the
customer.
Satisfaction
with
gitlab
as
a
company
right,
so
you
know
we've
kind
of
bundled
it
all,
because
that's
the
survey
that
we're
doing
it's
within
supports
realm
of
control.
That
type
of
thing.
So
we've
identified
it
with
us
rather
than
getting
that
granular
and
saying
picking
out
the
pieces
that
are,
you
know,
issue
related
or
sales
contact
related,
or
you
know
the
fact
that
you're
charging
me
two
months
related
that
type
of
thing
right.
D
So
yeah,
it's
it's
it's
philosophic,
but
it
it
does
kind
of
raise.
The
point
I
mean
do
we?
What
are
we
measuring
the
supports
the
satisfaction
with
the
support
they
have
gotten
versus
their
dissatisfaction
with
other
elements
of
interacting
with
gitlab.
C: I think that distinction is maybe more important for L&R than other queues, because there's so much cross-collaboration. There is definitely feedback where the rating is bad and then the customer says: support was great, but I didn't get an answer from sales, or I'm not happy with this part of the product, or whatever. So if we're interested in making that distinction, I think it would benefit L&R just to say: well, how are we doing in L&R support, versus where the issues with the collaboration come in?
D
Applies
as
well
to
to
the
sas
customers
as
well,
when
they're
impacted
by
outages
right.
So
if
they're
satisfied,
they're
satisfied
with
supports
continuing
updating
them-
and
you
know
that
piece
but
they're
dissatisfied
that
the
system's
been
down
for
an
hour:
okay,
yeah,
okay,
we
get
it.
That's
not
that's,
not
a
pleasant
experience,
but
you
know
what
we're
trying
to
do
with
support
is:
keep
you
updated
type
of
thing,
so
yeah.
It
applies
specific
to
lr
because
of
multiple
groups
and
and
sas
outages,
and
that
sort
of
thing.
B: What changed in this review period over the previous ones? That might shed some light. And the other question, and that's to what you mentioned also, Donique: do we want to follow up with customers? I remember I previously suggested an issue; I can't remember how it ended up, but should we recommend a follow-up?
B: Maybe we want to tackle only what we feel might potentially be the problem, to get more clarity from customer feedback. It will require, again, more overhead, but I want to raise this to the group: what do you think about that?
A: According to the workflow for SSAT review, managers should always be following up with some customers. So yeah, there are cases where we choose not to; but are you suggesting that we sort of turn that dial up in heightened periods and do it even on ones where we might not otherwise?
B
Yes,
exactly
you
worded
it
correctly
again,
I
want
to
hear
what
you
think,
but
I
think
that,
in
order
to
get
more
clarity
than
those
words,
the
only
way
to
go
forward
is
to
solicit.
Maybe
a
conversation
suggest
a
zone
call
or
just
ask
for
more
clarifications.
D
I
I'd
agree,
I
think
it's
worth
the
investment,
so
it
keeps
us
from
guessing
right
and
you
would
hope
that
if
you
want
to
improve
and
your
customers
want
you
to
improve
that
they
give
you
the
feedback
in
a
direct
fashion
as
well
so
yeah,
I
like
the
idea
about
dialing
it
up.
If
the
trend
is
down,
there
may
be
an
element
of
that.
If
you
look
at
age
of
tickets,
you
know
suddenly
we
just
closed
out
a
bunch
of
tickets.
A: Thank you, Anmar; I think that's really helpful. We are getting close to the end. I want to make sure that we get to Elia's point number five, really.
A: Okay, if not, then I'm going to go ahead and share once more. Our exit criteria: define conditions under which the metrics group will be re-formed; we have done that, 100 percent. Create a dashboard that exposes the metrics and triggers actions, including re-forming this group; we've done that. And then document common hypotheses and expose the data to support them; we're nearly done with that. So outstanding, there is making the analysis a little bit more repeatable, and including notes about what to look for in dashboards.
B
To
be
conscious
of
our
progress
and
the
fact
that,
right
now,
what
we
have
left
is
just
to
summarize
everything
and
and
presented
to
the
group.
Should
we
maybe
schedule
or
cancel
the
meeting
next
week
and
wait
for
two
weeks
to
accomplish
our
results.
A
Same
sign
to
me,
okay,
I
do
intend
to
get
the
managers
started
this
week
on
set
review,
so
we
may
have
a
few
more
voices
as
we
kind
of
get
things
in
there,
but
this
is
good.
I
think
we're
almost
done.
A
Okay,
so
we
will
cancel
next
week
I'll
work
on
that
and
we'll
see
you
all
in
a
couple
of
weeks,
thanks
for
your
time,
thanks
lau
thanks,
donate.