A
Welcome to our container security group discussion for the week. We have one follow-up item. I think we might actually defer this one, but we could talk about it a little bit here. I guess so. From last week, Thiago mentioned that we might need a third spike. We talked about doing two spikes, and then Thiago wanted to bring up a third for scanning containers in production. I don't think there's a rush on this one; we could probably pick that one up next.
C
I guess this is just for demo. I haven't really written any code, but I have been reading a little bit about FIPS and trying to understand how we support it. In the linked issue, I provided a couple of screenshots showing how I was able to enable FIPS, as well as how I tested out apk on Alpine (sorry, not Debian) and a RHEL-based distro, just to see if we can support it. Happy to discuss further or go through the different security levels I just did.
C
I did want to clarify one thing: security level 4 is meant to be hardened for scenarios where you can't control the environment that the software is running in, for example where environmental changes or other things could affect the software. I didn't think that was within the scope, so I just highlighted it as outside of our scope.
C
Is that correct? Sorry, that was meant to be a question, not a statement. Can I assume that environmental factors, the level 4 FIPS modules, are outside of the scope of this work?
A
I think so. I don't know that I have a hard answer for you right now; I probably need to look into that more, but I think that's a safe assumption.
C
All right, so I'm going to share my screen. On the left-hand side is the actual FIPS standard.
C
I was able to get that up and running, so you can see here on my host I'm running the latest 5.8.14 Linux kernel on Fedora 32, and I have FIPS enabled on the host itself. So I reproduced the error that the customer was seeing, and the issue was this "failed to initialize NSS library" message. In this case, the Docker image that we're using is an Alpine-based distro, which by default uses musl as its libc implementation instead of GNU libc.
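For reference, the host check described above reduces to reading a single kernel flag. This is a minimal sketch, assuming the standard Linux flag file `/proc/sys/crypto/fips_enabled`; the optional path argument exists only so it can be exercised without a FIPS host.

```shell
# Report whether the kernel is running in FIPS mode. The kernel
# exposes this as a one-character flag file; a missing or unreadable
# file is treated as "disabled".
fips_mode() {
    flag="${1:-/proc/sys/crypto/fips_enabled}"
    if [ -r "$flag" ] && [ "$(cat "$flag")" = "1" ]; then
        echo "enabled"
    else
        echo "disabled"
    fi
}

fips_mode   # on a FIPS-enabled host this prints "enabled"
```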
C
And
so
I
don't
know
how
well
the
crypto
modules,
if
it's
open
ssl
or
what
crypto
modules
they're
using
in
alpine,
but
it
obviously
doesn't
pass
the
fips
standards
or
api
calls
in
terms
of
what
it's
reaching
out
to
in
the
kernel
so
reproduce.
C
The
error
I
did
take
so
adam
was
kind
enough
to
put
together
a
patch
that
changed
the
base
distribution
of
the
base
image
from
an
alpine
based
image
to
a
rel-based
image,
so
red
hat
enterprise,
linux
based
image
and
was
able
to
run
the
same
thing
in
this
case.
It's
just
shelling
out
to
rpm
to
see
if
we'd
get
rpm
to
actually
trigger
the
the
error,
and
it
did
not
cause
an
issue.
C
So
rpm
is
reaching
out
to
the
network
specifically
for
tls
to
be
able
to
download
new
packages
and
install
them
and
with
a
different
base
image.
It
looks
like
it's
actually
using
the
appropriate
system.
Interface
calls
that
are
that
are
fips
friendly,
so
just
digging
a
little
bit
further.
I
looked
at
the
version
of
the
claire
scanner,
so
there's
a
few
different
components
to
think
about
here,
because
I
didn't
really
understand
this
architecture.
C
There's
a
lot
of
things
here
so
in
the
analyzer
there
was
a
tool
called
claire,
there's
a
tool
called
clar
there's
a
vulnerability
database
which
it
looks
like
to
me
just
looks
like
a
set
of
vulnerabilities
for
known
packages,
either
sourced
with
just
version
numbers
a
name
and
then
there's
our
sort
of
glue
that
sticks
all
these
pieces
together.
C
This
is
the
gitlab
clar
analyzer,
so
I
call
it
gitlab
clark,
but
this
is
the
container
scanning
analyzer
and
the
way
it
works
is
it's
downloading,
as
part
of
the
docker
base
image
that
we're
building
it
downloads,
an
old
version
of
claire
that
version
of
claire,
like
the
latest
version
of
claire,
thinks
version
four.
It
doesn't
seem
to
have
this:
they
look
like
they've
rewritten,
a
lot
of
different
parts,
but
that
version
of
claire
was
actually
had
extensions
for
both
apk
dpkg
debian
package.
C
For
me
and
rpm,
so
apk
is
like
the
alpine
default
package
manager.
Debian
pkg
is
the
default
debian
package
manager,
but
it
will
also
run
on
unreal
based
distributions
and
rpm.
Is
the
package
manager
for,
as
I
mentioned,
red
hat
enterprise,
linux
or
fedora,
centos,
et
cetera,
and
so
just
reading
the
code
from
a
cursory
level?
It
looks
like
for
apk
and
dpk.
C
Sorry,
I'm
going
to
have
trouble
with
this
dpkg.
It's
reading
a
set
of
static
files
and
the
files
that
it's
reading
is
var
lib
dpkg
status,
which
is
like
a
listing
of
all
the
software.
That's
installed
on
the
system
using
the
debian
package
manager
and
then
for
alpine
distros.
It's
using
lib
apk
db
installed
so.
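The static-file approach described above can be sketched in a few lines of shell. This is an illustration, not Clair's actual code (Clair parses these files in Go); the sample status file below is fabricated, and the field names follow the dpkg status file format.

```shell
# List installed Debian packages by reading dpkg's status database
# directly (/var/lib/dpkg/status), the same kind of static scan the
# analyzer does; no shelling out to dpkg itself. The Alpine
# counterpart would read /lib/apk/db/installed.
list_dpkg_packages() {
    status="${1:-/var/lib/dpkg/status}"
    awk '/^Package: /{pkg=$2} /^Version: /{print pkg "=" $2}' "$status"
}

# Demo against a tiny fabricated status file:
cat > /tmp/status.sample <<'EOF'
Package: libc6
Version: 2.31-13

Package: openssl
Version: 1.1.1n-0
EOF
list_dpkg_packages /tmp/status.sample   # libc6=2.31-13, openssl=1.1.1n-0
```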
C
It's not shelling out to any other programs. The only occurrence where I could see it shelling out to a command-line program was in the usage of rpm, and rpm was triggering this FIPS error because, as I mentioned, the TLS module that it uses to do the transport over the network, to download the latest index as well as package files, does require some crypto, and the FIPS standard is really focused on hardening your crypto modules.
C
So
short
story
is,
I
think,
adam's
patch
will
work.
I
suggested
using
adam
patch
adam's
patch
is
like
a
movement
forward,
so
at
least
we
can
support
rel-based
distros
because
we
do
like
a
static
scan
of
like
little
files
for
apk
and
debian.
I
think
the
rel
based
distro
will
work
for
those
scenarios
and
then
we'll
cover
all
three.
We
don't
have
to
have
a
separate
docker
image
for
each
distro,
and
then
I
also
suggested
maybe
like
adding
some
ways
to
do
some
form
of
automated
testing
for
fips
enabled
hosts.
C
I
see
2.1.6
is
out,
as
I
mentioned
in
the
current
main
branch
or
the
default
branch
for
the
project-
they've
rewritten
it
quite
a
bit.
So
those
extensions
that
I
saw
where
we
were
reading
the
apk,
dpkg
and
rpm
package
files
has
been
rewritten
and
changed.
So
it's
a
little
bit
of
riskier
change
to
move
to
the
latest
version
and
then
final
one
is.
C
It
looks
like
when
we're
building
the
docker
image
we're
actually
pulling
these
binaries
from
an
old
location,
coreos
claire
everything
seems
to
be
redirecting
to
quay,
slash
claire
and
when
I
just
did
a
quick
google
for
these
terms,
claire
and
clara,
there's
a
there's
so
many
different
versions
or
forks
of
this
repo,
it's
hard
to
understand,
which
was
the
definitive
source
so
from
a
high
level.
This
is
what
I
see,
and
I
think
that
adam's
patch
is
a
good
move
forward.
B
And Mo, I have a question: you mentioned that Alpine used a different library, one that's not glibc. Would you mind saying the name of that lib again? Because, M-U-S-L?
C
I
believe
it's
pronounced
muscle.
Lipsy
m-u-s-l
is
the
mucil
muscle
lipsy
implementation.
So
they
are.
You
know
the
idea
behind
alpines
is
supposed
to
be
security
first
distro,
so
they're
using
a
more
modern,
lib
c.
However,
with
a
more
modern
lid
c,
although
it's
implementing
the
same
system
called
lipsy
interface,
it's
not
guaranteed
to
have
the
same
bugs
as
the
canoe
lipsy
and
therefore
any
software
out
there.
C
That's
you
know,
worked
around
or
dealt
with
those
bugs
or
nuances
in
libsy
are
not
going
to
work
the
same
way
when
you
work
with
muscle
lipsy,
so
I
think
using
alpine
as
a
default
base
image
today
for
the
type
of
customers
that
we
support
is
a
poor
choice.
I
fully
support
going
with
something
like
centos
as
a
default
base
image
and
if
we
could
strip
it
down
to
just
the
essentials,
I
think
we're
going
to
make
we're
going
to
be
much
safer.
C
Alpine
is
great
because
it's
small,
but
it's
new
and
it's
not
as
we
can
see
that
crypto
modules
aren't
fip
certified
or
even
fips.
We
didn't
do
anything
outside
of
just
installing
rpm
on
alpine,
which
in
itself
is
kind
of
funny
to
me,
but
yeah.
So
I
I
like
the
idea
of
centos
as
the
base.
C
Clark
is
not
using
clark,
yes,
get
lab,
clar
is
using
car.
Its
usage
was
hard
for
me
to
find,
and
it's
still
not
clear
to
me
exactly
what
it
does.
So
I
I
need
to
look
at
that
further.
What
I
saw
was,
it
was
downloading.
Oh
sorry,
go
ahead.
D
So
the
clark
is
clear,
but
I
think
they
bundled
the
database
service
inside.
So
you
know.
C
Yes, so I explained that here. What we do, in the GitLab CI YAML file, is use a services declaration to launch the vulnerability database, which is coming from a separate mirror that we have of it. So it's not doing it within the analyzer itself; it's doing it through GitLab CI's functionality to mount it as a service that's available to the analyzer. Then the analyzer itself uses both klar and Clair: it shells out to both of these programs, does a bunch of stuff, and then generates a report.
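The services setup described here can be sketched as a CI fragment. This is a hedged illustration of the pattern, not the project's actual job definition: the image names, the service alias, and the connection-string variable are placeholders, not the real values.

```yaml
container_scanning:
  # The vulnerability database runs as a GitLab CI "service": a sidecar
  # container started alongside the job and reachable over the network
  # by its alias, rather than being launched by the analyzer itself.
  services:
    - name: registry.example.com/mirror/clair-vulnerabilities-db:latest
      alias: clair-vulnerabilities-db
  image: registry.example.com/analyzers/klar:latest
  variables:
    # Placeholder: however the analyzer is told where to find the DB.
    CLAIR_DB_HOST: clair-vulnerabilities-db
  script:
    # The analyzer shells out to clair/klar internally and writes a report.
    - /analyzer run
```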
C
So
I
can
pinpoint
like
here's,
where
we're
actually
downloading
clar
and
here's
where
we're
actually
downloading
claire,
and
so
we
actually
execute
these
two
programs
from
this.
The
first
time
I
looked
at
this
code
was
yesterday
by
the
way
so
analyze.
Where
was
it
analyze?
Module
analyze,
analyze
right
here
and
then
we're
shelling
out
to
the
clar
binary
here?
Okay,
there's
the
path,
sorry,
this
is
maybe
getting
a
little
more
too
in-depth,
so
we're
actually
executing
it.
We're
passing
some
environment
variables
and
settings
to
it.
We're
not
actually
like
clar
itself
and
claire.
C
Both
look
like
go
code
to
me.
So
in
that
sense
I
was
expecting
us
to
actually
pull
it
in
as
a
module
and
just
make
api
calls
to
the
class,
the
actual
like
go
code.
What
we're
doing
is
we're
downloading
the
binaries
and
then
we're
just
we're
basically
running
the
equivalent
of
a
shell
script.
That's
executing
those
binaries
from
the
command
line,
taking
the
output
and
then
trying
to
put
that
together.
So
it's
not
clear
to
be
why
we
chose
to
use
go
for
this,
but
I'm
sure
that
smarter
people
can
explain
that.
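The glue pattern described here (run a downloaded binary, pass settings through environment variables, capture its output) reduces to something like the following wrapper. This is a sketch, not the analyzer's code: the `CLAIR_ADDR` variable name is illustrative, and the scanner path is a parameter so any executable can stand in for the real klar binary.

```shell
# Execute a scanner binary against an image, passing configuration
# through the environment and capturing its report on stdout;
# roughly the shell-script equivalent of what the analyzer does
# with the binaries it downloads.
run_scanner() {
    scanner="$1"
    image="$2"
    CLAIR_ADDR="${CLAIR_ADDR:-http://localhost:6060}" "$scanner" "$image"
}
```

A stub scanner script that just echoes its arguments is enough to exercise the wrapper without a real Clair server.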
A
Cool. And then, for the last one, I do want to go through the planning breakdown for our project-level DAST scheduled scan policies. I think we're in a good place with the designs; Kyle's done some good work on them.
E
Sure, let me just share my screen.
E
So
just
the
context
on
that
is
this
is
building
on
our.
This
is
the
next
mvc
just
kind
of
building
on
our
existing
policy
creator,
which
is
just
related
to
network
policies.
So
today
we
see
the
policy
section
here
in
the
threat,
monitoring
area
and
yeah.
The
next
step
would
be
to
schedule
a
dash
scan
and
that's
considered
a
policy.
E
So
if
it's
a
one-time
scan
like
like,
we
see
in
on-demand
sense,
that's
not
a
policy,
but
if
it's
a
recurring
event
that
would
be
constituted
as
a
policy,
so
maybe
in
the
future
it
could
also
be
scheduling.
E
You
know
a
license
scan
once
a
week
or
dependency
scan
once
every
two
weeks
or
something
like
that
yeah.
So
today
we
see
policies
and
threat
monitoring,
but
one
of
the
key
changes
will
actually
be
the
information
architecture,
so
bringing
a
dedicated
policy
section
here,
just
under
the
security
and
compliance
section,
and
with
that
this
design
section
is
a
little
slow.
E
It
would
actually
change
the
ui
to
look
like
this.
So
some
of
the
notable
changes
is
there's
now
a
filter,
just
by
type,
so
the
two
different
type,
the
policy
types
that
we
would
have
is
container
runtime
and
scan
schedule.
Of
course,
that
could
grow
to
other
type
of
scans.
But
for
now
it's
just
the
dash
on
demand.
One.
E
There's
some
small
front-end
tweak
here
with
the
subtext
showing
the
environment
for
the
container
runtime
ones.
That's
just
because
in
the
previous
policy
section
it
showed,
the
environment
is
one
of
the
columns,
but
we'll
have
to
find
that
these
columns
will
have
to
kind
of
scale,
as
we
add
different
policy
types
to
them.
E
It's a little bit different from the one we have in the working prototype, where we're almost there. Actually, I'll just show what we have today: we have just an information drawer here that shows a little bit of data about the policy, and you can edit it further from there. But with this new MVC, the DAST portion of it gives some information here about things like what the policy type is, what the policy is,
E
The
scan
type,
the
scan
name,
the
policy
status
it's
enabled
by
default,
and
then
the
history
of
it.
So
if,
for
example,
and
this
one
it's
a
weekly
scan,
so
the
user
can
go
and
check
out
the
different
results,
edit
policy
actually
goes
to
edit.
The
policy
that
we
have
here
and
then
edit
scan
is
a
new
concept
that
we're
going
to
be
introducing
as
a
result
of
this.
E
What
I
mean
by
that
is,
if
you
take
a
look
at
creating
an
on-demand
scan,
you
actually
have
to
create
a
scanner
profile
in
a
site
profile,
but
we're
going
to
have
the
dash
team
is
going
to
create
a
saved
scan
so
basically
it'll,
merge
these
two
together
and
under
that
umbrella
will
be
a
scanner
profile
in
a
site
profile
under
the
name
of
gas
name,
whatever
the
name
may
be
so
that'll
simplify
our
ux
when
creating
a
policy,
but
it
also
did
create
a
a
bit
of
a
dependency.
E
What
I
mean
by
that
is,
let
me
just
jump
to
this,
so
we
already
took
a
look
at
this,
but
the
actual
creation
of
it
follows
the
same
familiar
pattern:
the
sort
of
conditional
pattern
that
we
had
in
network
policies
and
I'll
just
assume
here.
This
is
not
part
of
the
ui,
but
this
is
just
the
parts
that
are
not
grayed
out
are
part
of
the
nbc,
and
so
if
the
user
creates
a
rule
to
schedule
branch
default
branch,
maybe
in
the
future
they
could
select
a
specific
branch.
E
But
for
this
one
it
just
defaults
to
the
the
main
branch
to
be
scanned
weekly
or
daily,
and
then
they
enter
that
criteria
and
later,
like
the
action
for
this
policy,
is
for
this
nbc
is
to
run
desk
but
later
down
the
road.
It
could
be
running
these
other
ones,
so
kind
of
just
using
this
conditional
just
to
make
sure
that
we're
we're
following
the
logic
right,
but
in
the
ui
it
would
look
like
this.
E
The
user
could
add
a
name,
and
then
that
would
show
up
in
the
policy
section
a
description
it's
enabled
by
default,
and
then
this
is
the
nbc
schedule.
The
default
branch
should
be
scanned
daily
or
weekly,
based
on
this
for
the
certain
criteria.
So
if
it's
daily
it's
time,
if
it's
weekly
it's
a
day
and
then
required
to
scans
to
run
with
the
save
scan,
but
as
follow-up
nbc's
that
this
portion
here
with
das
would
be,
the
user
would
actually
select,
maybe
one
of
the
different
scan
types.
E
So
that's
why
just
kind
of
mapping
it
out
like
this
is
relevant,
just
like
I
said
so,
we're
kind
of
dialing.
In
our
logic
there
there
is
a
scenario
here
and
I'm
following
with
the
dash
team,
because
we
don't
have
the
save
scan
concept
yet
we
may
have
to
well.
This
is
kind
of
the
decision
that
we'll
have
to
make
is
like.
E
Do
we
wait
for
that
safe
scan
concept?
I
think
it
would
simplify
it
to
wait
for
it
or
do
we.
The
alternative
is
to
select,
is
to
ask
the
user
to
designate
the
the
profile
and
the
scan
type,
but
that's
exactly.
A
Yeah,
so
I'm
not
sure
how
soon
the
death
team
is
going
to
have
those
saved
scans
done.
I
think
it
will
probably
be
pretty
soon,
but
I
need
to
follow
up
with
derek
on
the
exact
timing
of
that.
So
I
think
just
depending
on
the
timing,
either
we'll
have
them
pick
a
save,
scan
or
we'll
we'll
have
you
pick
both
the
scan
profile
and
the
site
profile
as
two
separate
drop
downs
there
in
the
ui.
C
I love this very much; this is amazing. So we're building a policy engine, where we can write the rules that we want to actually trigger an action, and the actions are a separate step that we can report on. You mentioned something about information architecture, and I just want to go back to the left-hand nav. That nav is starting to grow a little bit, and I think you mentioned we're going to revisit some of the information architecture on that. But when I see that nav, I see security...
C
No,
sorry,
not
this!
This
is,
on
the
left
hand,
side.
If
you
go
back
to
the
previous
slide,
yes
there,
so
that
that's
growing
quite
a
bit.
We've
got
security
dashboard
on
demand,
scans
dependency,
and
when
I
look
at
that,
I
see
okay,
so
policies
and
configuration
I'm
a
little
bit
confused
as
to
where
to
start
like
configuration
to
me
means
really
enabling
these
jobs
policies
is
something
like
a
pre-action
that
I
could
actually
even
do
before.
C
I
don't
know
if
we
can
rethink,
maybe
just
how
to
better
group
that,
but
that
list
is
growing
and
I
I
don't
think
it's
a
v1
thing,
but
something
to
think
about,
so
that
when
I
click
on
that,
I
know
where
to
go,
because
my
natural
inclinations
go
to
dashboard
and
then
from
dashboard
to
drive
out
workflows
from
there.
So
I'm
wondering
if
we
can
condense
the
list
or
regroup
things
or
just
think
about
a
better
way
to
architect
that
or
yeah.
A
The good news is that, as we continue our work on policies, eventually, I believe, what we're doing here will remove the need to have the license compliance tab.
C
Licensed
compliance
are
almost
overlapped
with
each
other,
so
right.
E
Yeah-
and
there
is
another
issue
too,
to
like
merge-
I
have
an
issue
open.
That's
like
merging
dependency
list
and
license
compliance
into
a
component
section.
That's
what
this
one
is.
So
I
think
because
just
generally
I
mean
this
is
a
kind
of
an
issue
at
get
lab
on
the
ux
mine
is
like
how
do
we
kind
of
we're
having
all
these
features,
but
we
kind
of
need
to
make
it
a
little
bit
it's
starting
to
get
a
little
unruly
with
all
the
different
options,
so
I
could
imagine
this
gets
confusing.
C
Will
help
it's
comfortable
when
you
know
the
pieces
and
you
know
where
to
go
but
from
like
a
starting
point.
The
first
use
experience,
I
think,
is
a
little
confusing
and
that's,
I
think,
that's
what
I'm
focused
on
is
like
that
first
use
because
we're
really
trying
to
get
adoption
of
these
things.
So
we
need
that
first
use
experience
to
be
really
simpler.
I
I
think,
I'm
not
sure.
B
And then, with policies going out of threat monitoring, we are just going to have the dashboard and the alerts under threat monitoring.
E
Okay, that's one, yeah. Do we just show the most recent one, then? In that issue, I asked about whether we just auto-archive after a while, and I think that's where we landed: it could be messy, but do we just show, for example, if it's a weekly scan, just the last one, October 18th, or where are the other ones then?
C
For the MVC... I'm not sure what to call it. Is that a drawer?
E
Yeah, you can edit the policy, but how would they view the actual results of the scan? Where would they perform that? That's why I was asking.
A
Yeah,
so
that
depends
on
where
we
end
up
running
it
and
where
we
end
up
storing
it,
which
I
think
is
still
to
be
determined.
If
it's
done
as
part
of
a
scheduled
pipeline,
then
you
could
go
into
the
scheduled
pipeline
and
view
the
results
there.
Of
course
that's
not
super
intuitive,
because
it's
not
here
in
this
ui,
where
you
set
it
up.
A
I
think
we
just
need
to
finish
out
that
research
site
first
and
then
we'll
have
more
information
to
guide
that
scan
history.
Section.
E
Sorry, sorry; maybe, instead of this full history, just the link to that pipeline, or at least just an anchor to that pipeline. At least, yeah.
C
I
think
scans-
oh,
I
think
so.
I'm
saying
we're
not
sure
I
can
say
off
the
top
of
my
head.
We
at
least
have
the
pipeline
for
the
default
branch
and
I'm
pretty
sure
we
can
link
to
the
default
pipeline
for
the
default
branch.
But,
as
sam
mentioned,
I
think
it's
better
that
we
wait
till.
We
finish
the
research
before
we
provide
any
answers
to
that
question.
Kyle.
F
I wanted to ask if we're at the place where we really do want to start the planning breakdown work for this. I know Alexander isn't here right now (he's on the only other production bug), but is that the goal after this discussion, or is that research spike blocking the breakdown?
C
Seems
like
that
list
page
is
something
we
could
start
breaking
down
and
working
on
that
we're
just
waiting
on
a
few
details
for
like
the
show
and
edit
sections.
So
I
would
say
we
could
probably
break
down
the
list
page
but
chan,
samir.
What
am
I
not
thinking
about,
or
what
am
I
not
seeing
that
I
should
see.
A
So, the first breakdown. Unfortunately, I have a hard stop here; I've got to sign off, and I'm the meeting host, so it's going to end the meeting. But thanks, all, for your time today. Lindsay, maybe we can continue that discussion then, if that's okay. Sorry, I have a hard stop here.

No problem, thanks for keeping us honest. Bye.