A
Welcome to our container security group meeting. I've got the first agenda item: I just wanted to welcome Karen Quintus to the team. She's going to be joining us with 50% of her time as a product management intern, or apprentice, through the internship program that we have here. She's going to be shadowing me, so she's invited to optionally join all of my meetings, and she's going to help with some of the work that I have going on. So I just wanted to extend a warm welcome to Karen and Alexander.
A
Awesome, yeah, welcome to both! It's exciting to have not one but two people joining us this iteration, just helping out on the team, so we're excited to have both of you. So for the meat of today, I wanted to talk a little bit about the research spike on policy configuration inheritance, since that's been raised.
A
You know, we've had a lot of comments here in the thread, and I felt like it might help to discuss it in this meeting. So, just a minute, I'm going to share my screen here and we'll talk through it. So just to kind of set the stage, we've got this spike on how GitLab configuration inheritance works, and just to explain where we're at here: this is under the epic for DAST project-level scan execution policies.
A
This is our MVC for releasing our initial pass at, you know, some scanner-type policies, and you'll notice the actual requirements for this MVC are relatively minimal. We see that, you know, one, we're going to be moving the policies section out, so we're moving it to a new place in the navigation: instead of being here under Security & Compliance, Threat Monitoring, in a tab for Policies, we're planning to make that a top-level nav item. So you just go to Security & Compliance, Policies, and the reason for that is, with this change...
A
Additionally, there are just some slight UI changes: there's going to be a new column called Type that specifies either container runtime or scan execution policies. And then, lastly, users will be able to view, create, edit, and delete scan-execution-type policies, as described in this prototype. So if we come over to the prototype, there's a lot of stuff in this prototype that extends way beyond the scope of these requirements.
A
We're just worrying about DAST, and within that there are a few options in here as well. We can narrow the scope even further for our first release, just to these schedule-type rules, so we don't even need to worry about governing, making sure that DAST runs when the pipeline is run. We certainly don't need to worry about when a commit is merged. For now we can focus just on scheduling these to run at a regular interval. So the idea is you come in here to the project.
A
If it's daily, you can pick what time, limited to the hour right now; you can't get any more specific than that. Or you can pick weekly, and you can pick a day of the week and then you can pick a time. So, you know, right now the functionality here, at least UI-wise, is fairly basic.
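The cadence described above (daily at an hour, or weekly on a weekday at an hour, with nothing finer than the hour) can be sketched as a small scheduling helper. This is only an illustrative sketch, not GitLab's actual policy schema; the function name and parameters are assumptions for the example.

```python
from datetime import datetime, timedelta

# Hypothetical sketch: resolve a policy's cadence to its next run time.
# Minute precision is deliberately absent, because the UI described above
# only lets you pick the hour.

def next_run(now: datetime, cadence: str, hour: int, weekday: int = 6) -> datetime:
    """Return the next scheduled run strictly after `now`.

    cadence: "daily" or "weekly"; weekday uses datetime.weekday()
    numbering (Monday == 0, Sunday == 6).
    """
    candidate = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if cadence == "daily":
        if candidate <= now:
            candidate += timedelta(days=1)  # today's slot already passed
        return candidate
    # weekly: advance to the requested weekday, then skip a week if passed
    candidate += timedelta(days=(weekday - candidate.weekday()) % 7)
    if candidate <= now:
        candidate += timedelta(days=7)
    return candidate
```

For the example used later in the meeting, scanning weekly on Sunday at 5 AM, a Friday-noon "now" would resolve to the following Sunday at 05:00.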
A
So here I want to scan the master branch weekly on Sunday at 5 AM, and then I'm going to require DAST scans to run, and then you pick a scan. So right now the DAST team is working to create a scan object that encapsulates both their scanner profile and their site profile. You have to have both of those in order to run a DAST scan, and they're basically creating a saved configuration that combines both of those and gives you all of the settings.
A
So ideally this will open a modal here that just pops over and lets you create a new DAST profile right here on the fly, and lets you save those changes. This is not fully prototyped out, so to see what this looks like we actually need to go reference the DAST team's mocks for this that Anabol's been working on. But basically you can create a new scan right here inline and pick that, or you can pick one that's already been pre-configured, again, once you save this.
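The "scan object" idea described above, pairing a scanner profile with a site profile into one saved configuration, could be sketched like this. The field names here are illustrative assumptions; the real GitLab objects have more fields and different names.

```python
from dataclasses import dataclass

# Illustrative sketch only: a DAST scan needs both a scanner profile and a
# site profile, so the saved "scan" configuration pairs them, letting a
# policy reference a single name and still get all of the settings.

@dataclass(frozen=True)
class ScannerProfile:
    name: str
    spider_timeout_minutes: int  # hypothetical setting

@dataclass(frozen=True)
class SiteProfile:
    name: str
    target_url: str

@dataclass(frozen=True)
class DastScanConfig:
    name: str
    scanner_profile: ScannerProfile
    site_profile: SiteProfile

    def settings(self) -> dict:
        """Flatten both profiles into the full set of scan settings."""
        return {
            "scan": self.name,
            "scanner_profile": self.scanner_profile.name,
            "spider_timeout_minutes": self.scanner_profile.spider_timeout_minutes,
            "site_profile": self.site_profile.name,
            "target_url": self.site_profile.target_url,
        }
```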
A
If I just go back to the main policies page, you'll see there should be a new column here for either container runtime or scan execution, where scan execution would be one of those DAST policies. So again, things that are not in scope for this MVC: we're not doing this at a group or workspace level, so we don't have to worry about inheritance or anything like that.
A
We're not doing scan results policies, so we don't have to worry about failing the pipeline.
A
All of these policies should just be additive, so, you know, if you have four policies, then you'd be running four scans. There's no conflict resolution required here, because the action is just to run a scan, and so you don't have to deconflict that; you can always just run more scans.
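The additive model above is simple enough to state in a few lines. This is a minimal sketch under the assumption of a hypothetical policy shape, not GitLab's actual data model: every policy contributes exactly one scan, so no merging or conflict resolution ever happens.

```python
# Minimal sketch of the additive evaluation model: one scan per policy,
# so four policies simply mean four scans. The dict keys are assumptions
# for illustration, not the real policy schema.

def scans_to_run(policies: list[dict]) -> list[dict]:
    """Return one scan job per policy; never merge, dedupe, or deconflict."""
    return [{"policy": p["name"], "scan": p["scan"]} for p in policies]
```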
A
So there'll be, you know, one scan for each policy that's run, and that's really the extent of this MVC. I just wanted to make that a little bit clearer, because we've got a lot of aspirations: we want this to cover all of the different scanners; we want to have, you know, pipeline scans and push scans and scan results; we want to let you take various actions, like allowing and failing the pipeline, requiring approval, sending out emails or Slack notifications.
A
As we go to start this work, we don't want to paint ourselves into a corner, so to speak. You know, we want to make sure that the architecture can support the longer-term vision. And so I think there may have been some confusion, because also attached to this epic we have another research spike that we did about two months ago, where I outlined kind of the long-term requirements, not for the MVC but for really the end state, where, you know, we need a two-step approval process, full audit logging.
A
I think I've unsuccessfully attempted to communicate the answer to that question. Thiago did a great job summarizing it, and what he wrote is exactly the behavior that we want here. So just to read that real quick: the project will apply as many policies as it inherits or as are defined for it; scan execution policies will trigger as many scans as are found in their defined policies; and then scan results policies will all be evaluated. So we'll just evaluate all the policies.
A
So I hope that helps. I'm more than happy to do another follow-on synchronous discussion if needed. I know there are not a lot of people on the call synchronously today, but I just wanted to take the opportunity to clarify some things that may have been a little bit difficult to decipher across all of those different epics and issues and the direction page.
A
So if there aren't any other questions on that: Alexander, I think you've got our last agenda item for today.
B
Yeah, I need to go back and actually re-watch the last meeting, because we talked a lot about where we are with the alerts project and rolling it out: what does testing look like for it, and what are we doing with the feature flag? And none of that was sort of documented in the docs below. So I wanted to revisit that and just make sure we were all on the same page. I feel like in that discussion we talked about a lot of things.
A
Just to answer your question as best as I remember: you know, how are we managing that feature flag? And I could be wrong on this, so don't hold me to it, but I think our plan is, you know, first it's going to be rolled out to staging, so the GitLab Kubernetes agent and the Kubernetes agent server (KAS) are going to be turned on in staging soon.
A
I don't know exactly when, but my best understanding of the timeline is by the end of the month, roughly, so sometime around January 31st or February 1st, kind of thing. So once that's done, we can test in staging when we're ready. What we'll want to do is leave the feature flag in place but just turn it on by default, both for .com and self-managed at the same time. That way, you know, we still have it. Removing the feature flag is a little bit more of a process anyway, and if we have it on by default, then we can claim it as shipped. You know, so that kind of checks the box, it moves it forward, but leaving it in place is a little bit of a safeguard for us, because it makes it really easy to turn it back off if something, you know, heaven forbid, goes horribly wrong. So we'll probably just want to turn it on by default and then have another discussion later about removing the feature flag entirely.
B
Got it, okay, that sounds good to me. The only problem I see is: if we turn it on for production before KAS gets to production, then we have the alerts tab there, but no one can use it.
A
So we're just planning for that in the UI, right? Like, we'll have that little alert banner that disables the button, so it'll be there; it'll be sort of like a teaser, right? Like, oh yeah, you can't use it yet, but, you know, you have to have KAS enabled. So I think it will be fine as long as we have that front-end issue in first that, you know, prevents them from using it until KAS is enabled. Got it, okay.
B
And then, yeah, and then testing it: we've covered that. Hopefully KAS gets in at the end of the month and we can test in staging. It'd be nice to test this in a self-managed instance and not rely on staging. I'm sure, maybe there's a... and I've talked to Lindsay about that, and she's mentioned there's a team that she's bringing in, the estet team.
A
I mean, it would require some work to set up, but if someone has the time, or wants to spend the time, you can always go into AWS and just deploy GitLab on your own there, which would be a self-managed instance that just happens to be running in AWS. So I don't think that we're, like, technically blocked; it's just a matter of time and resources, and, you know, someone's going to have to set up a self-managed instance. I mean, the instances that the developers run locally, those are effectively self-managed instances, if they're running them locally.
B
Yeah, and you know, as you mentioned, it works locally, and Zamir gave us that wonderful walkthrough last meeting. So if it's working there, then essentially it's good to go. Yep, yep.
A
Okay, cool, awesome. Well, thanks for your time today, and we'll meet again next week. All right, great walkthrough, thanks.