First of all, I just wanted to talk a little bit about the priorities issue that we've been using, as well as discuss how we are scheduling things. I know the way I've been working is a little bit different from how things have been done before, so I wanted to make sure I had a chance to discuss what that looks like and how we're scheduling milestones moving forward.
So, first of all, let me show you the screen for our priorities issue. In here we have a list of priorities. This is just the new feature work, so it does not include any bugs or maintenance that get done in a milestone, and those all need to get done as well. But this list is focused just on the new feature work, and we're actually targeting for the new feature work to take up around 60% of the team's total capacity during the milestone, with bugs taking around 10% and maintenance work taking around 30%.
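As a rough sketch of how those targets translate into planned work (the total capacity figure here is purely illustrative, not a real team number), the split works out like this:

```python
# Illustrative capacity split for one milestone.
# TOTAL_CAPACITY is a hypothetical figure in engineer-days;
# the percentages are the targets mentioned above.
TOTAL_CAPACITY = 100  # engineer-days, made up for the example

targets = {
    "new features": 60,  # percent
    "bugs": 10,
    "maintenance": 30,
}

allocation = {
    kind: TOTAL_CAPACITY * pct // 100
    for kind, pct in targets.items()
}
print(allocation)
# {'new features': 60, 'bugs': 10, 'maintenance': 30}
```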
That's pretty consistent with what we've been doing up to this point, so I don't really see this as a big change. But when we do the milestone planning, the idea is that the engineering manager will put items into the milestone, roughly trying to hit those target percentages. Then, as a product manager, I go through and prioritize the bugs with a priority of one through four, and the engineering manager chooses which of those to actually put in the milestone. So as a product manager, I'm not doing any of the milestone scheduling myself.
As far as new feature work, we now have the back-end DRI and the front-end DRI, and I don't think we've talked about that very much yet. The idea of the DRI is that that person almost acts like a mini engineering manager for that feature. The DRI is in charge of making sure that the work keeps moving; they're the person to go to to answer any questions about it, make decisions, or communicate expected timing for release back to me. As part of that, they typically do the refinement of it.
So they'll create the implementation issues and do the refinement, but they don't have to do all the work to actually implement the feature. The issues that get created can be picked up by anybody on the team: they get moved into the milestone, and then anyone on the team, once they're available, can pick them up. So it's not like each of you has to work only on these specific areas and nowhere else; we actually want to promote cross-sharing in the group.
We want knowledge transfer in the group so that everybody's aware of what's being worked on. I just wanted to provide an update there in case there is any confusion. If you have any questions on that, please feel free to reach out to me, and the same goes for concerns or feedback: if you feel like this isn't a good way to do it, I'm very open to hearing that as well.
All of this is a little bit of an experiment as we try out this new way of working here on the team. The other thing that I wanted to do today is a little bit of a deep dive into these first three items, which all relate to each other in a lot of different ways.
I wanted to talk about this specifically and make sure that everyone on the team has at least a high-level understanding of what we're trying to accomplish from a product management perspective. I'm not going to go into any of the deep engineering details; I know there are a lot of discussions on that. But at a really high level, we're actually trying to address a few different problems. The first problem is that our license analyzer, our license scanner, is not well maintained upstream.
It requires a lot of maintenance work for us to keep it even just updated and working, and it's taking a lot of the team's time, so we would like to eventually get rid of it and replace it with this new architecture.
The second thing that we're working to address is that we would like to move our dependencies into the database, because we have a whole lot of work around improvements for the dependency list that we just can't do while we read from artifact files: things like showing the dependencies at the group level, or allowing users to search or group the dependencies by different attributes. All of that is just not scalable when you're parsing out individual artifact files.
then
the
last
thing
that
we
want
to
address
is
ideally
you
know.
A
The
name
of
this
epic
is
continuous
vulnerability.
Scans,
ideally,
we'd
be
able
to
give
users
updates
anytime
that
our
advisory
database
changes
rather
than
requiring
them
to
rerun
the
pipeline
job.
So
to
get
around
that
a
lot
of
users
today
have
a
scheduled
pipeline
job
where
they
run
that
pipeline
again
and
again
and
again
just
checking
to
see
if
anything's
changed
on
our
advisory
database.
So those are the three big problems that we're trying to solve, at a very high level. The approach that we're proposing to solve them is to have the container and dependency scanners output just a software bill of materials.
They won't actually do any scanning, so "scanning" is kind of a misleading term here: they're really just scanning for dependencies, not doing any vulnerability scanning at this stage. They would just be creating that CycloneDX-formatted SBOM, which would get fed into the database, where it gets stored and shown here in the dependency list.
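As a hedged sketch of what that ingestion step might look like (the field names follow the CycloneDX JSON format, but this minimal document and the `extract_components` helper are illustrative, not actual GitLab code):

```python
import json

# A minimal CycloneDX-style SBOM, as the scanners would emit it.
# Real SBOMs carry many more fields; "components" is the part
# the dependency list cares about.
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {"type": "library", "name": "rails", "version": "7.0.4",
     "purl": "pkg:gem/rails@7.0.4"},
    {"type": "library", "name": "lodash", "version": "4.17.21",
     "purl": "pkg:npm/lodash@4.17.21"}
  ]
}
"""

def extract_components(raw: str) -> list:
    """Parse an SBOM and return the (name, version) records to store."""
    doc = json.loads(raw)
    return [
        {"name": c["name"], "version": c["version"]}
        for c in doc.get("components", [])
    ]

print(extract_components(sbom_json))
```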
Additionally, we would want to take our GitLab advisory database and store that in the GitLab database as well, so that we can then do the match right here inside of Rails: we do that comparison, identify which dependencies match which advisories, and create vulnerabilities off of those. Again, this shifts us from a model where we do that scanning every time there's a new pipeline run to a model where we run that scanning on change; we would run it anytime a new SBOM comes in.
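That match step could be sketched like this, assuming simplified in-memory shapes for both the stored dependencies and the advisories (the real implementation would live in Rails and match version *ranges*; the exact-version matching and the advisory ID here are illustrative assumptions):

```python
# Hypothetical, simplified shapes: dependencies as stored from the
# SBOM, and advisories listing affected versions exactly. Real
# advisory matching uses version ranges; exact matching just keeps
# the sketch short.
dependencies = [
    {"name": "lodash", "version": "4.17.20"},
    {"name": "rails", "version": "7.0.4"},
]

advisories = [
    {"id": "EXAMPLE-ADV-1", "package": "lodash",
     "affected_versions": {"4.17.19", "4.17.20"}},
]

def match_advisories(deps, advs):
    """Re-run the comparison on change: new SBOM or updated advisories."""
    findings = []
    for dep in deps:
        for adv in advs:
            if (dep["name"] == adv["package"]
                    and dep["version"] in adv["affected_versions"]):
                findings.append(
                    {"dependency": dep["name"], "advisory": adv["id"]}
                )
    return findings

print(match_advisories(dependencies, advisories))
```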
A new SBOM coming in would be one such change; anytime there's a change to the advisory database, we would likewise update those vulnerabilities synchronously, almost in near real time. As part of this, we need some sort of asynchronous process to update the advisory database from its upstream sources. And then the idea is that we can do the exact same thing for license data here as well; there's no need to have a separate scanner.
We can just reuse that same CycloneDX SBOM that gets stored in the database and use it for license matching as well. If we create a new place in the database to store license information, associating licenses with specific packages and versions, we can then run that match synchronously and update it live anytime there's a change to either of these two, just like before.
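The license side could be sketched the same way: a lookup table associating (package, version) pairs with licenses, queried with the components from the stored SBOM. The table shape and the SPDX license identifiers below are illustrative assumptions, not the actual schema:

```python
# Hypothetical license table: (package, version) -> SPDX license ids.
license_table = {
    ("rails", "7.0.4"): ["MIT"],
    ("lodash", "4.17.21"): ["MIT"],
}

sbom_components = [
    {"name": "rails", "version": "7.0.4"},
    {"name": "left-pad", "version": "1.3.0"},
]

def match_licenses(components, table):
    """Run on change to either the stored SBOM or the license table."""
    return {
        c["name"]: table.get((c["name"], c["version"]), ["unknown"])
        for c in components
    }

print(match_licenses(sbom_components, license_table))
```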
Just like before, we need some sort of asynchronous way to update this license information in the database so that it's kept up to date, and that would trigger the license match job, which would then provide the new license data. So this is a really big undertaking, but it has some huge benefits for customers: making sure that the data is constantly being updated, so they have continuous scanning; having the data in the database, so that we can do group-level dependency lists and searching and grouping; and lastly, freeing up our own engineering time by reducing our maintenance burden, specifically with the license compliance piece of the work. Again, if you have any questions on any of this, or concerns or feedback, please feel free to reach out to me in any way that's comfortable for you, and thanks for watching.