From YouTube: Support - Metrics Analysis Workgroup - 2020-09-29
A
It's the 29th of September — hello, everyone. This is Support Metrics Analysis. We were just discussing, before the recording started, SSAT and how our number of total replies has gone down, and talking about analyzing anything that's come out of those negative reviews. Do you have anything more on that topic?
C
I would mention the hypothesis — probably fairly close, I'd guess — that the 2FA and free-user extraction work that's been done is having an effect on the surveys we receive as well. So our responses are down, and with responses down, each negative one has a bigger impact. And removing those free and 2FA tickets was discussed as a potential impact as well.
B
Something that doesn't make sense to me — though I'm figuring it out now — is that the responses for license renewals also went down, and that queue is like 98 percent not free; that's paying customers talking about their subscriptions. So I'm wondering how free users are determined. Is it the org link with Salesforce?
D
Yes — it's only if they specify the free-customer tag on themselves.
D
Sorry — of course, of course — sharing my screen. So FRT looks to be increasing, a shared effort between all of us on L&R, .com, and self-managed. You can see it for self-managed as well; it's been steadily going up. .com is also looking great. Like I said, the support crew has been really helpful, and I know in other regions there are other efforts, same with L&R. So we're still doing very well on effort for NRT.
D
We do have a kind of stabilization around 90 for SSAT, and in total for the month of September we've had ninety percent so far. That's close to the end of the month, so that's probably where we're going to stand. I'd just point out that the number of replies we received was 444, which is 15 percent of our ticket volume. Typically we get a significantly higher response rate; but that being said, like we've discussed, removing the free-customer and 2FA tickets from getting those surveys did lower our percentage.
D
The increasing ratio of poor replies compared to rated replies overall is probably the reason why we're sliding a little bit this month.
A
Great, thank you. On the hypotheses above, I had one outstanding, number six; other than that, I think we're all closed. Number six was FRT — engineers spending more time on needier tickets, and that causing a difference in performance.
A
I put a note in there, and I'm going to leave it to you all, but I think this one, while interesting, is maybe not worth spending a lot of time on, given that we've kind of isolated the problem — the overall difference in performance — to coming largely from the SaaS queues.
A
The needs-sorting problem there is just different, and Jason has spent a lot of time this quarter reducing that. So I think it's worth taking a look at someday, but I don't know if it's worth spending a lot of time on in this group. How say you?
C
Yeah, I agree. I think we give Jason's efforts a month or two to take hold and see if it's impacting.
A
Oh look — number nine. I opened it up and I'll present it right now. That was one that came out of the work that Ellie and I did. Did I share the right screen? What are you seeing?
D
The issue with — okay.
A
Normally there's a little green bar around my screen, and I get scared when it's not there. So: the hypothesis is that SaaS tickets have gotten harder. Ilia and I observed that the volume has been relatively steady over the past 12 months, but the median TTR — if you compare, self-managed is slightly down over the year, maybe unchanged, you could argue, whereas the SaaS queue is definitely trending up. Requester wait time in self-managed is arguably pretty flat, maybe slightly down; in SaaS it's gone way up. And so we asked the question about support...
A
For engineers working the SaaS queues: have tickets gotten harder? Cynthia put together a nice write-up you can read — it's linked here — but I pulled out a few points.
A
One was Kubernetes, and I was asking the question: what's the growth in the number of integrated clusters? The idea being, if we have more clusters connected to GitLab, then there are likely more tickets related to Kubernetes.
A
There's an upward trend — this is, I think, the noisiest one, though, because we had separate repositories, we combined them, we moved things around, and we also increased the number of people handling these, because there are some L&R issues in here now as well. So it's a little bit hard to draw any kind of conclusion from that.
C
I'm curious about the .NET and packages piece. Do we think that's the quality of that capability, or just the complexity of trying to figure it out?
A
I think it's the complexity. What happens, I think, in a lot of these tickets is that it's all integrated into "How do I make my CI work?", and that dives into the complexities of .NET — how NuGet builds packages, how they get published. So it's not just the package registry from a quality point of view.
C
Well, I'm going to anecdotally bring it up with Mack — I have a meeting scheduled with him today. These areas that we've potentially identified but are still investigating are worth bringing up to him as well, so that he can potentially correlate them to what they're tracking as issues from a QA perspective, and maybe help us get a bit of attention — even if it's training attention, not fix-the-product attention.
A
Yeah, I think there's also — and we observed this early on when we released Maven support — really quick adoption on .com, where the feature gets released, folks notice it and start using it, and the docs aren't quite there, so we're always playing catch-up when stuff like this gets released. It gets to the point where we can do "good enough" for a while, and then somebody has to just say: okay, that's it —
A
— I'm tapping out, I'm going to be the expert, I'm going to handle all of these tickets. And we've seen that: Caleb has done a really great job of becoming the team expert on packages stuff, but it does mean that really solid replies are going to be limited to the AMER region. A lot of this is conjecture — if anybody's watching this recording, I'm not trying to say you're doing a bad job; I really don't know.
A
I haven't looked at a lot of these tickets, so this is a lot of hearsay, but the general theme is that we get a single expert in the world and they end up handling a lot of the stuff, which can cause delays, and it also means they're not contributing in other areas. So I think there's some sense that we need to spread the knowledge more evenly throughout the team.
C
I think that makes a lot of sense when you talk about the releases — the speed of releases, the preparation for releases — and how we take that after-the-fact activity and move it forward as part of the release activity, for example, and at a global level. So Caleb, as an example, gets the benefit of it: he figures it out...
C
...he puts some instruction modules together, and they start to get distributed across different teams when those teams have an opportunity to actually take them — and by then we're months behind the release, not in front of the release. So, cool — thanks.
D
Yeah, I think something worth mentioning is that another indicator that we might have something additional here is the number of engineers. We checked the number of active engineers on the queue: around August 2018 it was six; around August 2020 it was 14. So that's just an additional compounding factor in why we suspect there's something beyond load — something to do with the types of tickets being handled.
D
I guess what I'm suggesting is that it's likely the difficulty of the tickets, but intuitively I feel there might be something else we're missing that's still worth investigating. I don't have any hypothesis about it yet.
A
Yeah — SaaS is interesting because I think there's an opportunity. The speed of release right now feels like it's hurting us a little bit, but it could also be an advantage if we're encountering those tickets quickly and then feeding that information back into the team. I think the self-managed folks don't see tickets — well, they...
A
...see some within the month of the release, but I think there's probably a ramp-up, which would be worth taking a look at down the road. My guess is it's probably about three to four months out when we start seeing a lot of tickets about a release. So if we can build that expertise rapidly in the SaaS team and export it to self-managed, we're building a learning machine that's going to make all of our replies better.
D
Something we may want to consider — I don't know if it relates to this group — is tackling the problem types on .com. I was just thinking: I know it might not be related to this group, but maybe we can have a free-form field for a problem type.
D
Maybe it's not the best for analytics collection, but at a quick glance you could see what types of issues are being handled, and if we're missing a new problem type, we could spot it in the free-form field. Also, what we discussed in previous weeks about a ticket-difficulty kind of field — that might give a bit more visibility into this too.
A
One of my far-off dreams is: if we could open up tickets from within GitLab, we could capture the URL where the user is having a problem, and that could help us correlate both to PM and to problem type. If they're on the merge request page and saying "I'm getting a 500," then we can say: folks who handle the merge request page, this is something you might want to take a look at. But I don't know — we don't need to talk about that now.
A
The next thing I wanted to talk about is the exit criteria. We've done a good job of tackling hypotheses; we haven't talked so much about the conditions under which this group re-forms, or the targets that will trigger actions on the part of the support team — including re-forming this group.
A
I think the second one will probably be the easier one to talk about. So when do we trigger actions? At what point do we — we talked about two percent. Okay, sorry, I should make notes. I guess the question is — and we talked about this a little bit — at what granularity?
C
I'm not sure I'd be that concerned about NRT — there may be a longer target on NRT, or a more gracious gap that we'd accommodate. But my view is: if we've gone one week at 93 percent FRT and the next week is at 92, then we'd initiate.
C
If we go 93 the first week and 94 the second week, we wouldn't need to initiate, because that's just bordering close to the target, or within the range of acceptability. It also means that for the ones that are below that range, we continue to push on solutions until we move the needle. So that's my thought — maybe the time frame needs to be extended or shortened, but that's how I would view it.
C
I think it's the manager syncs — we have the KPI review and the manager syncs, and we should be making the action happen in those syncs when these triggers are reached. That makes sense, so we're not creating another audit team or whatever else; it's up to the managers to make sure that that audit, if you will, is happening on a weekly basis, and then to trigger a workgroup.
C
And we refine the focus, right? If we get to 93 on NRT for everything, then our focus goes to SSAT for the pieces that are out of that range — it narrows the focus. And whatever it is — if it's three percent, I'm fine with three percent; whatever you all think is the right trigger. It's just that the further you get...
C
...the more work may be needed to get back on track. That could mean the further percentage you allow for the grace, or the longer time you take to validate the trend — either way, if you make them longer, it could be harder to get back on track.
A
Okay — this makes sense to me, but what we need is a really easy way to determine the delta across those two weeks.
C
Good question — I don't know about the green/red. We have the weekly breakout in the data, and it's not a percent of some other value; it's absolute — two percent from 95, or three percent from 95, kind of thing. And frankly, I'd be happy if we were at 92 CSAT — or sorry, SSAT — on a consistent basis, because 95 is really hard to achieve in the kind of world we're in. But again, that's just my thoughts.
D
As an initial iteration, I can add a lower line on the graph. Let's say we agree on 92 percent: there would be a green line at 95 and, say, a red line at 92. But I can also try to set up something like a big button that says we're over or we're under, and it would show red or green or something like that.
B
For L&R — maybe just some context — there's a business fulfillment meeting every week with Sales and Product and IT, so a bunch of stakeholders coming together, and they used to have a collective target of CSAT 95 for L&R as a measure of whether the teams that do work impacting L&R are serving the customers, and as a measure of successful Product and Sales and IT, because they're linked to the portal. They have now switched — or are switching — to a subscriptions-to-tickets ratio.
B
So, seeing how many subscriptions we have and how many tickets we're getting in. It's switching, but they're still looking at the CSAT. So from the L&R perspective, every week there's a focus on what the CSAT is, and when it drops, there's the question of what happened.
B
From L&R, every week we ask that question — we look at things every single week, and we have to figure out what happened and why the CSAT dropped or increased. Because from their perspective, it's not just Support; it's a collective effort from all of those teams together. So that's just something to keep in mind on our side.
C
So the question that raises, then, is: do we exclude L&R from our triggers and have Dominique bring support-related things that come out of that business meeting to this meeting, if a workgroup exists — or do we duplicate the view of the L&R triggers in this review? Either makes sense; it's certainly important enough to handle.
B
I wouldn't want L&R to lose out on focus from managers and the appropriate action that might be needed when the support metrics get to a level that's maybe worrisome, or when action is needed. Then I think L&R definitely needs a manager to be able to think about it and make decisions — because, yeah, I certainly don't feel comfortable making those myself.
B
Even though I'm looking at it every week, I would really love for the focus to remain and for L&R to be treated like the other queues, so that decisions are made that benefit not only L&R but also the others, and vice versa.
C
Yeah, I agree — I don't necessarily want to separate it out. I just also don't want to duplicate work, or have multiple threads doing the same thing in different teams. So we need to make sure: hey, if that's already being done, then we can check it off — okay, we don't need to analyze it, we'll just keep track of it — and not go off and start another thread of "I need BizOps to go...
C
...do these things" or "I need Sales to do these things" when they're already being taken care of in that other business group. So I think we keep it: we'll analyze it, keep the trend identified from a manager's perspective, and then we can sync back and ask, okay, are these already being addressed in a different forum?
A
So if we're looking primarily at SSAT and FRT, what actions might we propose to take to shape these? Say we're in a heightened state — we're below our 95, we're below 93 on SSAT — so what are the actions?
A
To help shape that, it seems like one thing would be that the SSAT-reviewing manager should be actively reporting on trends, rather than us asking, "Hey, did you notice any trends?"
C
I agree. I do see activity on it — and maybe I'm just missing something somewhere — but what I don't see is an analysis of the trend: a summary.
A
If they're not detecting any trends, then is the action to try to increase the number of SSAT surveys that get submitted? I was imagining: I'm sitting down, I'm looking at all these things, and I have no idea — it's just all across the board, there's no trend, people are just generally annoyed. So as I'm reviewing this SSAT, what am I going to tell the management group, and what am I going to do to get this up?
C
Yeah, I think it's an interesting idea, because it's one of those philosophical things: do you ask more people to make sure they fill out a CSAT survey, and how does that make them feel? I know Lee was looking at it from a targeted-customer perspective at one point — I think it was with Thiago. They were looking at data for enterprise-level customers and going TAM-specific for that customer, so that we could get more data from a satisfaction perspective.
C
Varying the survey to some extent helps too, so that it's different — as customers see the same thing over and over again, they're less likely to click, but if they see something new — "this is new, let me check into this" — that starts to generate a little bit more interest as well.
D
I want to point out a philosophical approach to SSAT and effort declining: it's possible that if we look at the SSAT and we understand what the causes were last month, we might just say, okay, we know why it happened last month and we don't expect it to happen next month. That can also be a potential action — just understanding what the causes were, not necessarily acting upon them.
D
And trying to understand whether we can narrow down where the effort is, specifically — we tried to do that with our points hypothesis, but it's possible there might again be something new popping up: a new product causing people to be afraid to pick up the ticket, or some extra PTO. Potentially, all of our hypotheses can be revisited whenever a situation comes up, to see if they might be relevant to future events.
A
We touched on re-forming this group in that we said we're not closing this group so long as metrics are down. But say we get back to a point where everything is wonderful — 95 across the board — and then we see some dips. When do we re-form? Is it after a large percentage drop? Is it sustained drops over some period?
D
I mean, we formed now at around 80 percent — I thought that was just an example of where we decided it had been a continuation for over two months. Does that make sense, or is it too late?
C
If we've got these triggers at some percent underneath for two weeks, then we re-form. If we can't adjust quickly enough, then we re-form — maybe in the next week or two, to give an opportunity for whatever change or idea to take effect. Again, it comes down to what that means from a criteria perspective, but we judge a regroup and meeting sync based on the severity of the challenge, right?
C
So if we're at ninety percent for three weeks and we can't seem to make a change, but it's only FRT — not SSAT; SSAT's great — then we can continue to do things async. But if both FRT and SSAT are going down, and we haven't been able to impact things through these trigger identifications and validated hypotheses, then we regroup.
C
Impact — impact and change, and how we roll it into the manager syncs. The first idea is to ensure managers are aware of where this group is going — and we can rotate people in and out, or however you want to do it — but they're responsible for, I hate to say policing, but fundamentally that's what it is. Then maybe, when the manager syncs start to take over some of those roles, triggering through the review of the data, actions can be taken out of those manager syncs and we avoid having this group regroup.
C
Sorry, go ahead. I think that's it — if we don't see progress, right? Like you said, we're building these hypotheses based on the last four or five months or whatever, and if, for those things we've identified — say it's staffing, it's attention to the hawk period, whatever it happens to be — we take those actions and there's an influence on the data, then that's fine.
C
And I think that's the whole point. For me, the general purpose is to be a high-performing team; the other purpose is to not create a lot of work because we've ignored the challenges for several weeks and now have to fight our way back uphill. So that's what this is trying to capture: catch it early, take action earlier, and avoid regrouping.
A
All right — anything else on this? I think the action item for me is: I'm going to put this together in an MR, and we can review that as a group asynchronously and then maybe finalize it in our meeting next week.