From YouTube: 2022-07-21 Enablement 15.3 NEXT Prioritization
A: Thank you. Yeah, so welcome, everyone. This is the Enablement cross-functional prioritization review for release 15.3, and we have done our homework prior to this meeting. So let's go to the questions to answer and go through each list to see if there are any questions. The first question: are the dashboards accurate, and if not, what needs to be done to correct them? Number 1a is for current milestones: please clean up issues so they are defined with a proper type.
A: So, my comments. I have a few comments. First of all, I want to highlight that the Distribution team is excluded from this exercise for this release, because they are focusing on the final stretch of FIPS, so I told them they don't need to do this. So please understand that the Distribution data is already singled out from the spreadsheet.
B: A question here from Jen: is this due to the time it would take to do the exercise, or other reasons?
A: Here, the expression is in the chart window. Oh, thank you!
A: That's what Wayne prepared, so I just added a tag.
B: And now, to make sure the data was accurate enough to do the review: Eric's recommendation, which I think is wise, was that the count of issues in the single source of truth (the issues themselves) should be within five percent of what the dashboard shows, or the reviews weren't worth doing until we get the data correct. So for Enablement, the question is: do the dashboards show within five percent of the same number of issues for Enablement as the single source of truth?
A: And also, for the discussion of the distribution of the types of work, please look at the charts here, because Sisense is not up to date.
A: Okay, so Distribution is excluded, and of the 53 issues, every issue was labeled with a type label except one. That one is a planning issue, so there was a discussion in the slide about how to process the planning issues. My next bullet is a few things I want to mention that led to the accurate data, or to the high percentage of undefined. So, the things I listed here... sorry, you messed up my order there, right? Do you want to verbalize this?
B: Sure, yeah. Just to be clear: when we said the dashboards were inaccurate, it's that the dashboards don't represent the issues.
B: Undefined is also really important, but we wanted to make sure that, in terms of accuracy, we're looking at whether the dashboards are accurate, not how many issues are undefined. But still, really good analysis here; just a clarification.
A: Yeah, the accuracy issue is because of the data refresh delay, that's all, so it's not up to date. The spreadsheet has up-to-date data, and, of course, the discussion about the planning issues is happening in Slack. So the question is whether we want to add a planning type for these issues. No conclusions yet, so we can continue the discussion in Slack, or do we want to discuss it here? Yeah.
A: Thank you. Thank you, Wayne. Yeah, Distribution was not there, and there is also the Sisense delay, so those are the major sources of the discrepancies.
D: Yeah, I look at these every week, and I think the search I use has significantly different data than what the charts show, so I had some confusion about why that's happening.
A: Yeah, I think the search you use relies on the missed-SLO label I mentioned, and my gut feeling is that this label is not applied consistently. From the table in Sisense we have 159 past-due bugs: no S1s, only eight S2s, so that's only five percent of the overall past due. So I think we are in a good position. Speaking of past-due bugs...
E: So there was another conversation further down. I included a screenshot there, but there is a pretty good number of overdue bugs. That 159, I think, doesn't include S4s; there's another hundred S4s.
D: Yeah, I think it'd be good to make sure that's fixed, because it's much easier to deal with than the dashboards, and the list is up to date: you can label things, you can click into them. So yeah, I think that's really good to try and get updated. John, if you're taking action there, thanks. Yeah, I can do it too.
A: Okay, we don't have any infradev issues, and we have seven past-due security issues.
A: The security issues will be reviewed later, so I will share more details when we get to the security issues item. And Christy, do you want to verbalize your comment about the...
F: Yes, maybe I'm looking at that one wrong. Oh, that was severity. Okay, wait... okay, got it. We're actually down to one. Sorry, we've had lots of problems with the data being accurate around unseveritized SUS-impacting issues.
F: Let me delete this comment, because I was just looking in the wrong place. I did see seven yesterday, and there were nine in the dashboard, but now it looks like we're down to one in the dashboard, so maybe we're okay in Enablement.
A: Yeah, this search really returns seven open issues here, but I guess the question, maybe for Wayne or for you, to the leaders, is whether this is a concern, or whether there is some guidance there; there is some nuance here.
F: I agree that the list isn't as helpful as it is. I think what I'm still seeing, Wayne, and what's just really confusing to me, is that when I look at the issue list versus the dashboard, they're not in sync. So even when we look at the issue list, it still says there are seven unseveritized.
F: But when I look at the chart here, it says it's only one, and then yesterday it was seven and nine. So I don't know what's happening; I'm assuming the issue list is the source of truth. I don't know. I don't want to derail the whole conversation around this; I just keep trying to call this out where I'm seeing the discrepancies. That's all.
A: Yeah, but overall Enablement is not very heavy on unsized issues; that was a later comment from George. So the Geo and Global Search teams have some final front-end features there; the rest of the teams are very low on the front end.
A: If there are no more questions around this, I will move on to the second bullet: is the maintenance from development getting prioritized? Josh?
C: Like you said, yes. And I was gonna say, help me out a little bit here to understand. Yeah, so it's kind of interesting that some teams are really high and other teams are, I wouldn't say really low, but, you know, 12 and a half percent feels like, wow, gosh, that's really low. But to your point, maybe that's the answer: where we're seeing something that maybe is more maintenance-y is what we're working on, and...
D: Yeah, I can validate that, because in doing the kickoff: we've been working on the self-service framework for the past nine months or so, and yes, by virtue of moving things to the self-service framework we are adding features, because in some cases it is more full-featured than the previous code. But it's...
D: What we... like, I think we've removed something like 10,000 lines of code, and we've consolidated from 15 different ways to replicate data types down to one. I think it's as much maintenance as it is feature work, so I'll double-check and make sure that's actually why it's 12.5 percent, but that's my guess here.
C: Memory is at almost 80 percent. Do we view that as a problem, or how should we view that? To me, that's the other one that kind of jumped out.
D: I think that's primarily their mission: they're trying to find areas of our code base that are inefficient, that are slow, and trying to fix them. We will be pivoting them more towards performance, but I think even the performance work will still largely be building the frameworks so teams can better understand their performance, and then also potentially fixing some of these. So it's going to be, you know, not user-facing feature work for much of their output.
A: Yeah, Enablement overall is maintenance-heavy, and I think the Geo team this release, and probably this quarter, will be an outlier from the norm, because 50 percent of the remaining capacity is going to work on Geo for Dedicated. So that is new feature work.
D: One quick voiceover: I work with Distribution. Distribution has been working heavily on FIPS and on Operation Vacation, the two main items they've been working to deliver. As those wrap up, hopefully this milestone, knock on wood, one of the major next themes is to go work on improving our pipeline efficiencies and other efficiency projects for the overall group. So I would expect that to be ticking up in the coming months; it's already above 30 percent, but I would expect it to trend higher.
A: Yeah, the improved efficiency will be maintenance, and then the other major part will be FedRAMP, but it's still TBD what that work is.
D: Yeah, I also look at these every week, and the search I was using really had, I think, eight or ten missed-SLO bugs. But I think that's likely due to the label not being applied, so yeah. Sean, you should chime in further here.
A: Yeah, Lily just added this table, which is repeating the information, so 159 in this table are past the due date. Dania, do you want to verbalize your point here?
E: Yeah, so I included a link to the bug dashboard that Kiwi uses to make the suggestions for bugs each milestone. This is specifically scoped to all the Enablement groups, so it should be specific just to this section. We are seeing the bug backlog growing, and a good number of backlog bugs that are past SLO. Mainly the bug backlog is being driven by S3s, and the S4 count is growing.
A: Looks like the past-due number mismatches the table that Lily prepared in the NEXT review board. So would you mind syncing up with Lily to get the data synced? Because in this one the S2 past due is 22, but in that table the past due is only eight, I think, as I took a look last night.
E: Yeah, I'll touch base with Lily. This past-due count is based purely on the date; I think the one we use defaults to 60 days, so if the bug is older than 60 days, it's counted here. I'm not sure if Lily's is maybe based on when the label was added; there might be some mismatch there. I'll touch base with her and figure out where we're different.
C: You don't need to investigate this right now, but I just want to ask: do you know if the number of incoming bugs is in line with what we're thinking, or do you have any feel for that based on what you've done?
E: Yes, that's very different based on each section and each group. It's pretty reasonable for Enablement. I would say that if we just raised the number of bugs we're addressing a little bit; we're addressing the right number, and the number is pretty low compared to others.
C: I mean, this is more a general question for you, and we can work it out outside of this conversation, but conceptually, what I'm trying to think about is how to get ahead of the game, right? So if it turns out that the group has what you feel is an incoming rate that's too high, or an escape rate that's too high, then we should probably be working to address that first.
C: Again, yeah, over time, effectively. This doesn't have to be now; this could be two milestones from now that we start to have this conversation. But conceptually, I think that's how we get ahead eventually.
D: Just to comment down here: I agree we need to tackle the S2s. It's a surprise to me, because, based on earlier comments, I do try and stay on top of these, and having 22, or whatever the number is, open and beyond due... I agree, we need to solve them. I think the one thing on S3s and S4s in particular for Enablement is that we have a huge surface area in our product, and I'm not saying we shouldn't try and keep up or burn down. I think we should just make sure we approach them intentionally and that we're fixing them in the areas where we want to keep investing, because, you know, there are things like the background object storage that we want to replace, and things like that. So I think we just want to make sure we keep an eye on that, or maybe we even use this opportunity to deprecate and officially remove support for some things too, and be like: look, we're not going to touch it.
C: We talked about this yesterday, and, just so we're consistent in the message: one of the things is that we want to look at these charts, but burning them down is not necessarily the absolute requirement here; there are business decisions to be made here.
C: That may not make everybody in this room satisfied, and I get that, but those are the conversations we want to be having associated with them. Now, if the answer is that we're going to have a lot of extra bugs and make the problem even worse, I think that's where you would definitely see escalation. But if it's "hold the line" for some period of time because of immediate business needs, those are considerations that we have to look at. Cool.
C: Take a look at that aspect as well. Thanks.
A: Thank you, Tanya. Let's move on; we're holding at seven minutes remaining, so we'll move faster. So, proportion of issues: is the proportion of issue types changing from milestone to milestone? The answer is: not significantly. Any questions?
A: Okay, then: the predictability of issues milestone to milestone. Josh's question is how you tell if it's the wrong number of issues.
A: Yeah, and also, if you hover the mouse cursor over the chart, you will see the total in the tooltip. And my question is: actually, I'm not sure what this metric tells us, because there are many variables here that impact the total number of issues from release to release, like capacity variation and the way priorities shift during the release.
A: So I have another chart on my board to watch. You can look at this chart to see how the total issues vary from release to release for Enablement, the green bar there. So yeah, it's just a question: I'm not sure what this metric tells us and what action items can be taken from it.
B: It always needs to be within the context of the quad, right? Who knows the section and the groups better than those outside? What it looks at is: is the predictability what you would expect, based on everything you know? I think that's the real question, or at least a better question than what is written here.
C: So it's kind of like a sanity check, Chun. Another way to say it is: if your issues proportionally say "here's what we're gonna go after", and then your MR rate doesn't show that same proportionality, then there are lots of questions associated with it. As an example: did we overload the team, and because of that they focused on one particular type of work, right? So, like...
C: Aspirationally, some teams are executing extremely well.
C: Some are struggling, right? All those things are kind of associated with it, so I think those are all things. Another good example: it could be that we're doing a lot of feature work, but behind it, that means we have to do a ton of maintenance work to even get a feature done, and that would be an example of, well, maybe there are some efficiencies we could go fix, right, associated with it. So it's kind of that question, and that's kind of the purpose of it.
A: Gotcha, yeah. But this is a chart, so the narrow MR count, of course, is lower than the issue numbers, because this is only narrow MRs; the total MRs are higher. And the green bar is the total issues release over release, so you see that it's not predictable for Enablement specifically.
D: I think 20 percent features is not super surprising to me; we've discussed how maintenance-heavy our groups are, and I think, as...
A: Yeah, and I also think that overall it's expected, because we are maintenance-heavy in general. Also, my suggestion for this question is to look back, not forward, because at this point the next milestone usually isn't fully planned yet. So it's premature to look at the next one, but looking back over some number of milestones will better tell the trend.
D: I wouldn't call Search's backlog healthy. We've been working on it; I know it comes up in SUS scores, and I know there's a lot of work to do there. But what I would say is that we've been working on a lot of maintenance to keep the lights on, and historically the team's been very small. We are growing the team in the second half of this year, so hopefully we can get some more throughput and be able to drive some of these improvements more rapidly.
G: No, I agree. Global Search is definitely the area here in Enablement where we have the highest number of high-severity SUS-impacting issues, and, comparing to other groups in other departments, six severity-2 issues is also quite a high number to have. So if we could burn this down quicker, that would really be very much appreciated and would have a big impact.
A: I think it's probably, again, the missed-SLO label not being applied consistently.
A: I looked at our past-due security backlog. We actually have seven past due, and overall we have a total of 14 S1, S2 and S3 security issues, so seven of them are past due. We are actively working on two of them, one is blocked, two just became past due this week, and the remaining two were delayed due to the priority shift during this release.
B: So maybe Kyle... I don't know who maintains the automation that applies the label, but, Tanya, would Kyle be the best person to look into that? Perhaps, yeah.
D: Yeah, I think this in particular is really critical, because, as far as I know, all the teams use that label in their planning cycles and on their boards; I think that's what they use. So yeah, I think we should get this fixed pronto, in particular for security: we have SLAs and contracts and things like that.
B: So, I know we're out of time; I'll ping Kyle. I think there are more good questions to continue discussing. Perhaps do a follow-up, John, to continue the discussion.
A: That works, and if we cannot fully address it asynchronously, we can schedule another session.
B: All right. Christy and I have our third in a row now; we're on to Sam Goldstein's team, and we're late. So, good stuff. Thanks, everyone. Thanks.