From YouTube: Protect:Container Security group discussion 2022-06-14
A: Yeah, let's do it. All right, welcome everyone to our weekly meeting for container security. It feels like it's more like monthly, or even every other month sometimes, the frequency that we've been recording lately. But welcome anyway. Let's see, I don't quite have the agenda pulled up yet.
A: I have it. There are no follow-up issues, demos, or issues for plan breakdown; there's only, under other discussions: Thiago, Sam, and Neil to discuss cross-functional prioritization and planning. All right, Neil is on the working group, I believe, so he can inform us all of that goodness.
C: This affects us more heavily than other groups, but it'll be important for other groups too, especially since we're so closely aligned with container security. Sorry, I was just thrown off because my URL was all messed up; I just fixed that. So the initiative here is that teams will start to review data around work-type distribution, with a guideline that teams are looking toward.
C: 60% of the work is feature work, 30% is maintenance-type work, and then the remaining 10% would be bugs. That's kind of a healthy distribution of work, with the understanding that not every team is going to be able to get close to those numbers initially, or ever, in some circumstances. So that's part of it; we can kind of start there and look at the dashboard page.
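The 60/30/10 guideline above amounts to a simple percentage check. As a rough sketch of that arithmetic, here is a short Python snippet; the counts are invented illustration data (loosely echoing the "even split plus 17% bugs" mentioned later), not real metrics from any dashboard:

```python
# Hypothetical sketch: compare a team's MR work-type counts against the
# 60/30/10 feature/maintenance/bug guideline discussed in the meeting.
# The counts below are made-up illustration data, not real numbers.

GUIDELINE = {"feature": 60.0, "maintenance": 30.0, "bug": 10.0}

def distribution(counts):
    """Return each work type's share of the total as a percentage."""
    total = sum(counts.values())
    return {t: 100.0 * n / total for t, n in counts.items()}

def deviations(counts):
    """Percentage-point gap between the actual share and the guideline."""
    actual = distribution(counts)
    return {t: round(actual.get(t, 0.0) - target, 1)
            for t, target in GUIDELINE.items()}

counts = {"feature": 41, "maintenance": 42, "bug": 17}  # invented example
for work_type, gap in deviations(counts).items():
    print(f"{work_type}: {gap:+.1f} points vs guideline")
```

With those invented counts, feature work comes out 19 points under the guideline and maintenance 12 points over, which is the kind of signal the review conversation is meant to surface.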
C: The other part of this is the actual planning effort: having organized backlogs that have feature work, maintenance work, and bug work ready to be pulled in, so that our PM — Sam, in this case — can have an effective menu of things to pull from and, you know, maybe get to those ratios.
C: So let's look at the metrics first. I think that would be cool — yeah, curious to see those. So I'm sharing my screen now; they're on everybody's handbook page. Every group has gone through the effort; I think I did the container security one, and definitely did Threat Insights. It was a quick snippet — a one-liner that we're adding to our handbook pages — which gives us two new dashboards.
C: One is this backlog view, which we're not going to look at right now, but it tracks open bugs and incident issues, so you can kind of see those trends.
C: The focus for this prioritization effort would be looking at our merge request data. This is a work in progress, in my opinion, because it's a little bit hard to read, but we have a couple of main emphases. One is the goal that close to zero — the real goal is less than five percent — of our MRs are not typed.
C: So the type label is maintenance, feature, or bug, and then there are subtypes as well, but it's just making sure that every MR that gets pushed through — in container security's case, you know — has a type. It's expected that some things will slip through and just won't have the right categorization; that's fine. That's why less than five percent is our goal for now. So the first thing to look at is how we're doing — we're less than five percent, so that's fine!
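The "less than five percent untyped" check is just a filter over merged MRs. A minimal sketch, assuming GitLab-style scoped type labels (`type::feature`, etc.) and invented MR records — not the actual dashboard query:

```python
# Hypothetical sketch of the "< 5% untyped MRs" check from the meeting.
# Label names follow the scoped-label convention mentioned (type::feature,
# type::maintenance, type::bug); the MR records themselves are invented.

TYPE_LABELS = {"type::feature", "type::maintenance", "type::bug"}

def untyped_share(mrs):
    """Fraction of MRs that carry none of the type labels."""
    if not mrs:
        return 0.0
    untyped = [mr for mr in mrs if not TYPE_LABELS & set(mr["labels"])]
    return len(untyped) / len(mrs)

mrs = [
    {"iid": 101, "labels": ["type::feature"]},
    {"iid": 102, "labels": ["type::maintenance", "backend"]},
    {"iid": 103, "labels": []},  # slipped through without a type
]
share = untyped_share(mrs)
print(f"untyped: {share:.0%} (goal: < 5%)")
```

In practice the same filter is what makes it easy to list the untyped MRs, add the correct label, and ping the author, as described below.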
C: We can totally continue on if we want to find out more about this. There are these two tables. This first one will show us — so if I hover over this, this is only for the current month, since June 1st, so it's only two weeks' worth of data; if you want more history, you can look here — but this will show us those MRs that don't have a type label. So we can definitely go through and categorize these, I think, as, you know, a value of iterating.
C: We probably shouldn't go back too far and adjust these, but we can absolutely go back and correct ones like this June 3rd one, and the last couple in May, by all means. We probably should leave these April ones alone, but for the more recent ones we can jump in there and figure out what the correct type label is. What I've been doing is just adding the correct type label and then pinging the author on the team and just saying hey.
C: So then, secondly, thinking about the distribution — thinking about how this reflects: does this match the current goals of the team? We have a pretty much even split between maintenance and features, and 17% bugs, and it's just kind of a conversation around how that feels. If it feels off, why might that be?
C: If it feels good, then there's not much more to talk about, actually; that's the direction. Right now this handbook page is driving me crazy — I actually got it on the first try, because my history isn't capturing this for some reason. But so, to assist in all this, we have a lot of new handbook updates, and so kind of what I described — actually not the data review, but the prioritization — is described here, and there's a whole other page that talks about the data review.
C: Shoot — it probably shows that we don't have the page links all correct right now, if it's hard to find these. There's a sibling page of this that describes the data review and kind of the cadence that we'll be taking. So it's expected that groups will be doing this on a monthly basis.
C: You know, post-milestone, we'll review and, at a high level, do that assessment that I mentioned. Then there's a more in-depth review every quarter, where the team, including their leadership group, will be reviewing this, and there's also going to be a VP-level review as well. It's just to make sure teams have the right amount of support, and everybody understands the reasons behind where the team is tracking.
A: We've been doing something very similar to this for a long time now. You know, maybe it hasn't been as data-driven as this, but at least we've been trying to track like 10 to 15 percent of our total time spent on bugs, with the rest allocated to maintenance and feature work. So it looks like we're actually doing pretty well in that regard, or at least tracking pretty close, especially recently.
C: Yeah, so feature work is anything that impacts the user experience: any UI changes fall into feature work, any type of workflow change, anything that would directly impact how our users are using or consuming the information. So even a text change would be a feature. Maintenance work is behind the scenes.
C: You know — code cleanup, refactoring. Fixing a behavior that doesn't directly impact the user experience might be classified as maintenance, whereas if it's noted as a bug, it's impacting the user experience, and it has a priority and severity, you know, that would be a bug.
C: There is a page — yeah, go ahead, and I'll find that other reference... there we go. So this engineering metrics work-type classification section describes the high-level categories — bug, feature, maintenance — and then there are subcategories within.
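The classification rules just described can be sketched as a small decision function. This is only an illustration of the stated rules; the field names (`impacts_user_experience`, `noted_as_bug`, `severity`) are invented, not a real GitLab schema:

```python
# Sketch of the work-type classification rules as described in the meeting:
# user-facing changes are feature work, behind-the-scenes changes are
# maintenance, and a noted bug that impacts the user experience (with a
# severity set) is a bug. Field names here are hypothetical.

def classify(mr):
    if mr.get("noted_as_bug") and mr.get("impacts_user_experience") \
            and mr.get("severity") is not None:
        return "bug"
    if mr.get("impacts_user_experience"):
        # Any UI, workflow, or even text change the user sees is a feature.
        return "feature"
    # Code cleanup, refactoring, fixes the user never notices.
    return "maintenance"

print(classify({"impacts_user_experience": True}))                 # feature
print(classify({"noted_as_bug": True,
                "impacts_user_experience": True, "severity": 2}))  # bug
print(classify({}))                                                # maintenance
```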
A: Got it, yeah. The way you're — can you hear me? Okay, good. Okay, we can hear you. — The way you're describing it almost sounds like front end and back end, but I'm wondering, because a lot of times we'll do a feature, release an MVC, and then do a bunch of follow-on improvements to that feature, and those really are improvements.
A: I'm wondering — and we could answer this, I guess, by going back and looking at the individual MRs — but I wonder if some of those things that really are feature work, where they're just follow-ups to the MVC, probably should be categorized as feature ("feature enhancement" would be the subtype there), and whether some of those we bucket under maintenance instead. I don't know; I kind of wonder if we're categorizing things accurately.
B: I think that's quite possible, Sam, particularly because we weren't — I wasn't, anyway — paying too much attention to what I was selecting. Like, oh, it looks right, go for it. But now that we're tracking, and we're making an effort to stay on that 60/30/10 ratio, it becomes important to get that right.
A: Yeah, and the other thing to keep in mind with these metrics, too: as I understand it, this is just looking at counts of MRs, and it doesn't look at the size of the MR, correct? Which, you know — again, I don't have hard data behind this — but I would expect that some things, like some bugs and maintenance tasks, are more likely to be really small. In fact, even just today I opened a maintenance MR to update the text on one of our pages.
A: So, you know, I think there's an angle of that too: we have a lot of really small doc updates, and those might be over-inflating that maintenance number. But in terms of actual time spent, or actual work done, I feel like we actually tend to spend more time on the new feature work. So I don't know — I'm pretty comfortable with where this is at.
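Sam's point about counts versus size can be made concrete: the same set of MRs gives a very different distribution when each MR counts as one versus when it is weighted by lines changed (an invented proxy for effort here; all numbers are illustrative):

```python
# Sketch of the count-vs-size point: tiny maintenance MRs inflate a
# count-based distribution, while a size-weighted view (lines changed,
# used here as a stand-in for effort) tells a different story.
# All MR records and numbers below are invented.

def share_by(mrs, weight):
    """Per-type share of the total, using an arbitrary weight function."""
    totals = {}
    for mr in mrs:
        totals[mr["type"]] = totals.get(mr["type"], 0) + weight(mr)
    grand = sum(totals.values())
    return {t: round(100.0 * v / grand, 1) for t, v in totals.items()}

mrs = [
    {"type": "maintenance", "lines_changed": 2},   # tiny doc/text update
    {"type": "maintenance", "lines_changed": 5},
    {"type": "feature", "lines_changed": 400},     # big feature MR
    {"type": "bug", "lines_changed": 40},
]

print(share_by(mrs, lambda mr: 1))                    # by count
print(share_by(mrs, lambda mr: mr["lines_changed"]))  # by size
```

By count, maintenance looks like half the work; by lines changed, it is under two percent, which is exactly the kind of gap the team is being asked to explain rather than chase.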
C: You know, that's definitely the goal of this conversation, this evaluation: how does this feel? What are the reasons the data might be different than we expected? And if we can justify that, that's amazing. I don't think anyone at this stage, especially, is looking for teams to just shift their process; it's better just to have an understanding. Your point about the sizing came up — we had an ask-me-anything, an AMA session.
C: This was a few weeks ago now, but right after the working group started formalizing some of the direction, we had some AMAs scheduled. A question I raised as food for thought was: the MR counts are very much dependent on the scale of that work. I love your perspective on the maintenance — you're right, a lot of maintenance items will be a lot quicker than a big feature.
C: I was curious how teams might start to react — whether we might see a team starting to break their work down smaller, so that we end up with more MRs. It's kind of another indirect contributing factor to the MR rate, we might say.
B: Hopefully we won't be trying to game this ratio. Wherever we land, if we understand it — and Sam brought really good points there — we might just make a call and say: yep, that looks right, we're comfortable with it, let's keep rolling.
C: I need to find the other reference for the metrics review, because there's a really good handbook page that describes the cadence of those sessions that are taking place. I was talking to somebody earlier, and they mentioned that their group is exploring reviewing these metrics as part of their team retrospective, where other groups were more resistant to that because they wanted to keep it more high level — something for the EM, the PM, the quality managers.
C: I think — and I feel like I did a disservice, because this might all be linked from the tasks issue that I originally linked up into the agenda. I'm not sure it is... oh yeah, it's right there, cool. So this page, the cross-functional dashboard reviews, talks about the different sessions like I mentioned. Each group is expected to do a monthly review, so this would be, you know, sometime post-milestone.
C: Looking at the metrics — like I mentioned, that pie chart was a monthly view; it's not looking at the milestone, and it doesn't overflow either. So, like I mentioned, it's only two weeks of data right now, so the reliability is suspect at times during the month, but the team is looking at that. The participation, as I mentioned, by default would be the leads of the team — you know, those counterparts that are helping with the planning and prioritization efforts.
C: What are we looking at? It's the five percent assessment and the other ratios. Then, additionally, there are two quarterly reviews. One would be at a sub-department level, looking at all the groups within that sub-department, and then there's another quarterly session — I think this one's going to be interesting, how it actually comes out — but this would be, you know, Christopher, Eric, trying to remember her name — Mack — and then David, who would all be part of that.
C: So that'll be a much broader session, and I think there's a cross-functional dashboard for that too that does a really big roll-up. We were looking at a lot of specifics — oh, I'm not going to log in right now, since that's not loading — but the view here is much more high level than the page we're looking at.
C: The goal here is that we have DRIs aligned with the different types that we mentioned. So the product manager is looking at prioritization for feature work, the engineering manager is looking at the maintenance bucket, and then quality is looking at our bug types, so we just have kind of different contexts that we're thinking about, which would be pretty cool — I'll get there in a second. And then there's definitely going to be a lot of collaboration, right; it's not just discrete, with no overlap going on.
C: This puts the product person in a really good place to effectively take those prioritized sets of buckets and put them into a plan for the milestone, and so that's described here.
C: And I don't know, Sam, if you've been involved — I know that this topic was raised in the product group meeting, I think two weeks ago, and there was definitely a lot of conversation around it. I forget his name again — Justin — strongly represented the feedback back to the working group, and we've been talking a lot about it. In some ways we're trying to slow down some of these changes regarding the DRIs, and in other ways we're trying to put more direction in place. I think a big goal here is some consistency.
A: The way container security currently works, I really see myself as the product manager responsible for priority, but I feel like it's not really necessary for me to get involved in actually planning out each milestone.
A: I've always deferred that to you and to Thiago — to actually plan out the milestone and determine which items to put in. So my deliverable has always been that prioritized list for new feature work, and then I also put priority labels on the bugs, but then I rely on the rest of the engineers to choose what they actually work on. Because at the end of the day, if I'm planning it out, I'm just going to dump in the things that are top of my priority list, and that may or may not actually make sense from an engineering perspective.
A: You know, you may have constraints regarding skill sets or availability or blockers — there can be a wide variety of other things that factor into that — and from my position as the product manager, I'm just really not in the best place to be making those judgment calls. So I just always assume that you're working top-down from that priority list, but I leave the actual milestone planning itself up to you and Thiago, and for that, you know, I think we've mostly just relied on our issue boards.
B: Whoever's doing that planning is probably the best person to be responsible for tracking that ratio, right? So if I'm looking at the priorities list and pulling things in — cool, I've stacked 15.2 now — then I need to balance that out and go: you know what, there's a lot of maintenance here, I should take a little bit off that part there. I wanted to see how everybody feels about that. That's how I feel: whoever's doing that should be in charge of "hey, am I skewing this?"
C: Yeah, so I think the cadence of prioritization is just ongoing. This prioritization of these buckets isn't something we just go after like we're doing refinement; it's just kind of a part of our process, so we always have these primed backlogs to pick from. And then, as far as a cadence to plan the milestone, it's going to be much easier if we were to go into that stacked session, you know, whoever's doing that.
C: Yeah, so let me show you the dashboard we've been experimenting with. We don't want to be prescriptive — plus we're just experimenting with this too, and I don't want to push down anything hard and fast to the teams — but this dashboard is really straightforward.
C: What would be fantastic is if these cards had the milestone, so you could see if they were already scheduled work. That would be amazing. We have an issue out to our Plan team as a feature request to aid in that.
C: But so the idea here is that, independently — so, Thiago and I looking at the maintenance column — we could additionally add, like, a front-end or back-end filter and slim this down even further, and that gives us the ability to visually, like you said, kind of move things around. I think container security is interesting: if I go over to Threat Insights, there are a lot of issues — I'm pretty overwhelmed, I'm not sure where to start — but this actually seems pretty manageable.
C: So, you know, it's just a matter of rearranging these and saying: I'd like to do this first, I'd like to do this next, that type of thing. And we're all kind of doing that independently for features as well. In terms of planning, the milestone is where it gets interesting, and that's where that indicator would help.
C: So, while you're thinking about how you might prioritize within this little mini list, there's actually a neat trick: our issues list has a manual filter — the "order by: manual" option — which actually allows you to drag and drop within the issues list. Something I didn't know existed; somebody was like, hey, you can do this. And so this actually conforms to the same exact ordering. So right now I have it — why is maintenance up? Because I went to the milestone view.
C
There
were
26
issues.
We
should
see
26
over
here,
so
it's
the
same
set
and
if
I
rearrange
these
two,
I
apologize.
If
I
screwed
something
up
by
doing
that
and
I
refresh
they
should
mimic
the
same
data
set,
don't
know
how
that
works.
Please
don't
ask
me,
but
it's
really
cool,
and
I
think
this
can
be
a
lot
more
flexible
if
you're
dealing
with
large
sets
of
issues
that
you'd
like
to
rearrange
and
then
this
might
be
a
better
consolidated
team
view.
B: I don't know. So what do we have next for this — just jump in and start doing this for 15.2?
C: Yeah, the data review — that was a great first session. So, as far as that issue tracking, you know, step one is the team should start reviewing data, and the assessment that Sam made was fantastic. And then, looking at this next milestone, which is starting pretty soon, we'll try to implement some of this. I think my goal is to visit the maintenance column for front-end issues, probably with Alexander, and just make sure that they feel ordered enough again.
C: I think the problem's much simpler in container security than over in Threat Insights — there's a lot more going on there, so I'm not looking forward to that one. And also I'm curious, you know, since we have two engineering managers, what that relationship looks like as we coalesce; we have essentially four buckets.
C: So, Thiago, you and I can kind of sync on that. I guess one thing I didn't mention is the bug column: quality is working on some automation that will auto-schedule sev1 and sev2 bugs.
C: I don't know exactly what that looks like, because we're not going to be able to achieve every single bug, right, even if it's scheduled. But that's also part of the communication here. We have a value within the company that I wasn't actually aware of, although I followed it — plan ambitiously; it's okay to over-plan and under-achieve, and that's not a bad thing. I think that'll definitely take effect for bugs: we're just not going to be able to hit every bug that's prepared or scheduled.