From YouTube: Protect:Container Security Show and Tell 2022-07-05
A
No, I'll talk for a minute. Just as a reminder, this is something that we're trying out here: we're basically mirroring what's done on the Composition Analysis team. They have a weekly meeting where they have a show and tell, and the purpose is really just to share what you're working on. It doesn't have to be a demo or anything visual; you can just talk a little bit about some of the things you're doing. It's mostly to facilitate collaboration and knowledge sharing, and to help build the team a little bit. If you're not quite ready, Alexander, that's totally fine.
C
Yeah, it'll be fine. Okay, so I'll share my screen.
C
Okay, so the biggest thing I've been working on is group-level policies, having a UI for that, which is very exciting. So here I am in prod. There's a policy page where we don't have anything; or rather, this is a project policy page that doesn't have anything. But if we go up to the group level, you'll see a new Policies sidebar option, which is very exciting, and then, if you click on there, you get the same page and you're able to do stuff.
C
If we go to a demo with more data: here we have CMP in staging, and we have this new Source column. It says whether the policy comes from the project or the group. If you click on that, it takes you to that group-level policy page, and it says "Source: this group." It shows up in the sidebar too, with "Source: this group" only. If you go back to the project and click on there, it says this project-level policy is inherited.
C
This is where it's inherited from, and again you have a link to click through to the group-level policy page. You'll notice that for these group-level policies the edit button is disabled; we don't want people changing them from this level. Again, we have a third link to go back to the group level. I don't know if that's too many links, but it's fine as an MVP. And then, at the group level, you can add a policy if you click on that.
C
That gives you this information up here, that this policy will affect many projects in this group, so: "All projects in this group." Maybe it should say "projects and subgroups in this group"; maybe someone could take a note of that change for me. And then, what else... yeah, otherwise editing this is just business as usual: you modify something, click "Configure with a merge request," and it takes you to an MR that's been created. So that's very exciting. There it is; the changes.
C
Yes, I saw that thread in Slack. Someone was like, "How are we going to do this?" and you were like, "We have a thing for it," which is great. The next item on my agenda has been the validation for the policy editor. I have an MR up for that right now, and it is going to alert users if they're typing something incorrectly in the YAML mode for the scan execution policy; it's going to let them know: hey, that's not correct.
C
So in this example, I've typed "scan execution policy" and I've put an "a" at the end, and it gets this little squiggly line underneath, saying: hey, that needs to be "scan execution policy." So this is a way that we can help our users type things in and make sure things are typed in correctly. Also, in this one we don't have a value for name, it's commented out, and we get a little squiggly here.
C
It says incorrect type, that it needs to be a string, so users aren't accidentally switching up types for all these different values. And so that is up for review right now; I'm working through it. I have an MVP, but it needs a little refinement before it gets merged.
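The key and type checks being demoed could be sketched roughly like this (a hypothetical, simplified stand-in for the editor's schema-driven validation; the property names are illustrative, not GitLab's real schema):

```python
# Hypothetical sketch of the checks described above: flag unknown keys
# (a typo like a stray "a" at the end of a property name) and values
# of the wrong type (e.g. a name that isn't a string).
EXPECTED = {"name": str, "description": str, "enabled": bool}

def validate_policy(policy: dict) -> list[str]:
    """Return a list of human-readable validation errors."""
    errors = []
    for key, value in policy.items():
        if key not in EXPECTED:
            errors.append(f"unknown property: {key!r}")
        elif not isinstance(value, EXPECTED[key]):
            errors.append(f"{key!r} must be a {EXPECTED[key].__name__}")
    return errors

print(validate_policy({"namea": "Policy 1"}))  # unknown property
print(validate_policy({"name": None}))         # incorrect type: needs a string
```

In the real editor each error would drive a squiggly underline at the offending YAML node rather than a printed message.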
C
I actually have a question about this feature: there's validation in the policy editor here. Sam, do you know if we would need, or want, validation on the file if someone's editing the policy.yaml directly?
C
Oh, can we do that? I think we might be able to.
C
The schema that we get from the back end for this validation: is it actually for the policy.yaml file? Or no, I'll show you this. It's for multiple policies, whereas the policy editor is a single-policy editor. Well, technically you could add multiple policies, I think; maybe, actually, don't quote me on that. So I've had to massage the schema to fit this use case, I think, by default.
D
Yeah, I have a question, actually. Thanks for demoing that. I was going to ask about the schema, whether you implemented something else for the validation, but you kind of answered that already: you're kind of doing something in the front end now instead of using the schema.
D
Let me bring up the issue. If it's too technical for today, that's fine! It's just that I got curious when I saw that.
C
Yeah, there's a great link somewhere; maybe I'll just send it to you later. Basically, I got the schema, and... where's the back-end issue?
C
It's just that the schema looks slightly different for multiple policies than it does for a single policy. And where is my... yes, so this is the difference in format for the policy. In the policy.yaml file, it's scan_execution_policy and then an array of policies, whereas for the single-policy editor, the scan execution policy shows up as a type property, and then it's not an array; it doesn't have an array underneath it.
C
It just has all these values. So I basically had to move the schema around a little bit, so that these aren't in an array underneath this thing and there's a type property instead. But it's not too much, and the massaging I have to do isn't changing the schema; it's basically moving things around. So if the schema changes, like in what a property accepts as a value, or what that value is, we'll get that immediately for this validation as well.
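The reshaping described here, lifting the per-item schema out of the array and adding a type property, could look something like this (a hypothetical sketch; the key names are illustrative, not GitLab's actual schema):

```python
# Sketch of "massaging" a multi-policy JSON schema into a single-policy
# one: the stored policy.yaml schema holds an array of policies under a
# key like "scan_execution_policy", while the single-policy editor
# wants the bare item schema plus a "type" property.
def to_single_policy_schema(multi_schema: dict, policy_type: str) -> dict:
    item = multi_schema["properties"][policy_type]["items"]
    properties = dict(item["properties"])        # copy; don't mutate the source
    properties["type"] = {"const": policy_type}  # the editor carries the type
    return {"type": "object", "properties": properties}

multi = {
    "properties": {
        "scan_execution_policy": {
            "type": "array",
            "items": {"properties": {"name": {"type": "string"}}},
        }
    }
}
single = to_single_policy_schema(multi, "scan_execution_policy")
# `single` now describes one policy object (name + type), not an array.
```

Because this only moves the back-end schema around rather than redefining it, any upstream change to what a property accepts flows through automatically, as noted above.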
D
C
That's a great question. I hadn't thought about that, and I probably won't do it for the V1, because getting the schema implemented is going in an MR itself, but I'll look into that.
D
The only concern I would have is this: let's say that the policies are at the group level. We can have different projects, and then a policy might have some rules for some projects that don't make sense for other projects; let's say branch names, and that user, and things like this. So then blocking the user would be tricky; I wouldn't be able to suggest how it would go forward then.
C
For the V1 of rule mode for scan execution policies, I don't even think we are retrieving all the branches for child projects or subgroups, if we were to allow rule mode at the group level. And that validation wouldn't even be part of the schema; the schema is not going to have branch names, a lot of branch names.
C
It's going to say, you know, whether this can be an array. So I don't think we have to worry about your concern in terms of the schema, but we do have to worry about that for rule mode for the group-level policies. Sam and I have had some back and forth about that in one of the issues.
A
C
Thank you. The third thing I wanted to show off is... maybe it's only in GDK, actually. Actually, it might be turned on in this project.
C
Let me check to see if I have something, if I can demo this on staging quickly. It's basically, as I've alluded to, rule mode on scan execution policies. No, I suppose it wouldn't be turned on for the group level; maybe it's turned on for one of these project levels. And I have one MR up, or one MR merged, for adding the feature flag and adding some... oh, there it is. Yeah, look at that. Here's what I have thus far.
C
We've got the name, the description, and the policy status, as well as the preview and the buttons here. This first MR added the feature flag, and it also made all of these fields and components sort of generic, or inherited, for every policy. So before, these fields had to be manually implemented for every policy type.
C
Both scan execution and scan result. But I arranged the code differently, so now, for every subsequent policy type we create, we immediately get name, description, and policy status for free. And then, of course, most of the difference between these policies is the rules and the actions, which have to come next for this MR. But...
A
Yeah, it looks great. I noticed, I think, that the formatting of the YAML preview is a bit off. I think I might have even seen that weird behavior on the scan result policies too; I'm not 100% sure. Oops. Yeah, I guess we can always test it out real quick, but... oh yeah, there it is. Yeah, so it might be a shared formatting bug there.
C
But yeah, so that's just in its beginning stages, and I'm working on that next.
A
So, Brian's not here; I can walk through a couple of his items because I'm pretty familiar with them. I don't know how much all of you have been following the continuous vulnerability scans epic, I don't know, with details. But right now, when dependency scanning runs, as well as container scanning, we output two artifacts, right: we output one artifact for vulnerabilities and we output another artifact for dependencies, and those dependencies get fed into the dependency list page.
A
Let me pull this up so I can show you. It's here under Security and Compliance, Dependency List. I don't know if any of you have taken a look at this at all, but basically it lists out all of the dependencies that are used in the project, and, again, both dependency scanning and container scanning feed into this. So you see our container scanning results are here; it shows the location is an image.
A
If you come down, the dependency scanning results come in, and the location is a file and the packager is something like Python or pip. But essentially, you know, all of these come in here. Right now this UI is being created from the artifact file, so none of this information is in our database; it's actually being parsed out and read from that artifact file individually when the page is loaded. That's great for an MVC, but as we look to scale this, that's just not really scalable.
A
If we ever want to have a group-level dependency list, or support searching or sorting or filtering beyond what we have today, it just doesn't really work. And in addition, there are some other challenges that we have with both container
A
Scanning, because right now the vulnerability database gets updated separately from when we run the scan. So it's possible that someone runs a scan, a year goes by, and, of course, now we have all these new vulnerabilities in our advisories database, but the data that's shown from the scans is outdated. And then, also, we have a separate license scanning job being run that looks for licenses. So, you know, what we have is great for an MVC, but going forward, long term...
A
It's not super efficient, and it's not really the architecture that we want to support our needs. So we're revamping almost everything here. We're doing a major overhaul where the container scanning and dependency scanners will no longer actually do any scanning, in the sense that they're not identifying any vulnerabilities.
A
Instead, they will just be generating a list of components, the dependencies that exist in the container image or in the project, but they won't actually be doing any lookups against the advisory database. So that's where we're headed long term, anyway. And then we're going to have this new service that will ingest that data into our GitLab Postgres database, and we're actually also going to store our advisory database in the GitLab database, and that will allow the scanning to be done on the server side, in Rails.
A
So that way, the list of vulnerabilities is always up to date, even if it's been a whole year since you last ran your scans. And the idea is that we can actually reuse that same information for licenses as well. So it's a really big project. It actually, in theory, won't change
A
The UI at all; the UI will continue to work exactly the same way it does now. But all of the back end for this is getting a major overhaul, and that's the working group that Brian's been on. Before I go any further into the notes that he put here in the doc, are there any questions on that? That's kind of a lot to cover.
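The server-side matching described in this turn, once SBOM components and advisories both live in the GitLab database, reduces to a lookup against the latest advisory data. A toy illustration (not the actual Rails implementation; the data shapes and CVE pairing here are made up for demonstration):

```python
# Toy model of the proposed flow: ingested SBOM components are looked
# up against the advisory table at read time, so results stay current
# even if the scan that produced the SBOM ran a year ago.
advisories = {
    ("log4j-core", "2.14.1"): ["CVE-2021-44228"],
}

def match_vulnerabilities(components: list[tuple[str, str]]) -> dict:
    """Map each (name, version) component to its known advisories."""
    findings = {}
    for name, version in components:
        hits = advisories.get((name, version), [])
        if hits:
            findings[(name, version)] = hits
    return findings

sbom = [("log4j-core", "2.14.1"), ("requests", "2.28.0")]
print(match_vulnerabilities(sbom))  # only the vulnerable component matches
```

Updating the `advisories` side asynchronously is what keeps the findings fresh without re-running any pipeline scan.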
A
So, to break this out into work, we've basically separated this out into a few different epics. One is to have the container scanning scanner generate an SBOM, basically just a list of dependencies; and then we need the dependency scanner to produce a list of dependencies, so those are each their own epic.
A
We have an epic for ingesting the SBOM and storing it in the database, and then we have separate epics for bringing in the advisory database, doing the match, and then serving up all of the APIs out to the front end. So it's really quite a big project, but we're collaborating on it between both our group and the Composition Analysis group. Brian's been working on this first piece, which is the SBOM ingestion, and that's really the piece of all of this that we're highlighting today: this "Ingest SBOM reports" epic. And so, to start that off, they had a big spike to estimate the resource usage that would be accumulated and the size of the database tables, and, you know, to see if this was something that was going to scale, if they could keep the database small enough to make this work. They created this big database diagram table that you might be interested in looking at.
In any case, this spike is closed, and they've made some really good progress on actually creating these tables in the database. And then, for the next steps, it looks like they're going to work on creating the report parser and, bringing back his notes, the report ingestion service; those are the next two that they're working on through this.
A
I think Alan actually picked up one of the issues in here, to have the container scanning analyzer start outputting an SBOM of dependencies to be ingested by this, so that piece is already in progress as well. For the initial work, we're actually just going to output that in addition to the other artifacts that we already support. That way, it's not a breaking change, and we can just add support for that and then remove the other artifacts later on.
D
Long term, are we planning to also update the approval rules based on that new information?
A
You know, we'll have a license information database, which will basically be the equivalent of the advisory database, and this will get updated asynchronously; and then we'll be able to just do a match here to generate the list of licenses for the project, or for the pipeline, or for the branch. And so it's the same thing, right, both for our security approvals and for license compliance: we want that to be the same.
A
In
fact,
I
see
license
compliance
really
coming
together
with
security
approvals.
Eventually,
we've
got
an
epic
I
think
it's
on
your
backlog
too,
but
it's
a
little
ways
down
there,
but
removing
license
check
and
the
license
compliance
page
and
replacing
that
with
these
license
approval
policies.
So,
ideally
we'll
move
that
entire
security
policy
editor
as
well.
A
You know what they're looking for. So, again, we're kind of trying to make license check go away, in the same way we got rid of vulnerability check. So there's a lot in the vision for this. It's a really big project and we're just starting in, but I'm really excited, because it's going to reduce our tech debt.
A
It's going to make this continuous; it's going to allow users to search for dependencies across all of their projects in GitLab; and it's going to pave the way for us to do that continuous scanning of all of the images in the GitLab registry. This is kind of a prerequisite for that epic anyway, which we've got on our backlog. So, a lot to look forward to here.
D
Yeah, and it looks like, moving away from the pipeline, I think it reduces lots of the confusion that some users sometimes end up with.
A
And yes, searching for dependencies across all of the projects: that is a very, very common feature request. It's probably the biggest thing on the composition analysis side that we need to add right now. You know, especially after things like Log4j, companies are coming and saying: how can I know which projects in my organization are using Log4j? And right now the answer is: well, you can go into each project, or you can set up something with our APIs. And it's not a great answer, right?
B
I've been working on the background migration for fixing foreign keys of vulnerability findings. It's nothing I can show, of course. The concern around the 1,070,000 rows, and probably the problem with them, was that they were not accessible via API. Currently, I'm working with scan result policies for the first time, and I'm making them apply to merge requests created in the past, as they currently only apply to merge requests created in the future.