From YouTube: Protect:Container Security group discussion 2022-03-22
A
Welcome to our group meeting for container security. Alexander has several items to demo there. We can probably just skip through them for the sake of the recording, but a lot of good front-end improvements have been coming out over the last little while, so great job with those, Alan. It looks like you've got our first topic here.
A
I would just add to my comments in the issue there that I'm hoping we can make this reliable enough that we don't need our users to have an audit or report mechanism. I'm hoping we can get it reliable enough that they can just count on it running, and then we can take over the auditing for them instead of having them all do the auditing for their own areas.
A
So I guess that would be my biggest concern or point there: do we feel like that's something that's attainable?
C
Yeah, let's try... stay... co-host, there you go. I was going to say that that is the ideal situation, Sam. I don't think it's going to happen; I'm not optimistic about it. I think there's always going to be a way for it to break, accidentally most likely... or sorry, not accidentally, that's the more unlikely case. We can probably fix the accidents, but intentionally, you know, the CI config is so flexible, there are so many things that can go wrong. I don't think we can cover them all, though we'll try at some point.
C
You realize we're just playing whack-a-mole in an arms race, and I think we're going to need the audit anyway. That's just gut feel, and from experience dealing with these sorts of things. Hopefully I'm wrong.
A
Yeah, that makes sense, I think. In any case, let's start with giving ourselves some back-end auditing and reporting here, so that we can at least have an idea of how often this is happening. I would rather wait until we realize that we're in that whack-a-mole kind of situation, where we're not going to address everything, and then give users some auditing, versus doing that before we have some hard data to show that it's absolutely needed.
C
Are we keeping count? I think we've had two or three occasions so far. Mache, do you remember?
C
The latest one was the stage, right. We realized that if somebody removes the stage, it doesn't run, so now we're going to inject the stage. I think there was something before; I don't remember what it was, but I seem to remember there were two other occasions.
B
We had situations where we were creating an invalid YAML file, so the CI job was not running because of that. And the error was not really clear: it was not telling you what was going on in the YAML itself, it was just complaining about the YAML format, and that's it. That was fixed, yeah. There was another one at the very beginning, but I haven't seen it since.
C
Yeah, so that's the pattern that I'm afraid we'll keep encountering. But you had an idea, similar to the ping, because the ping data will give us counts, but it won't tell us a lot about what's happening. You had an idea about some other instrumentation.
B
So
we
need
to
do
the
spike
to
just
know
what
kind
of
fields
or
data
we
can
get,
because
we
have
one
place
when
we
inject
our
policies
and
when
the
policy
itself
is
invalid,
it
will
not
be
injected.
B
So I just need to collect all the data that we can have and maybe build some kind of table with all the information. And I was thinking about having a list, which could be for us only, internal or behind a feature flag, that we can take a look at: a list of projects and a list of policies, and whether these were applied or not. I just need to see if that's possible with the current data that we have.
C
So that comment was in the issue. I think my reply to it was asking whether using the Rails log would be an easier and maybe intermediate step, because once we get into DB migrations and adding things to the database, now we've got to think about maintaining that and cleaning it up, and, you know, when does it get deleted?
C
We
need
to
build
an
interface
for
it
or
somebody's
going
to
have
need
to
have
console
access
to
query
it,
whereas
the
log
you
hit
hit
up,
kibana
and
and
see
it.
However,
the
log
does
expiring
in
seven
days.
We
only
keep
that
for
a
week.
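The intermediate approach being discussed, writing a structured event to the Rails log so it can be searched in Kibana instead of adding a database table, could look roughly like this sketch. The event name and fields are hypothetical, not an existing schema:

```ruby
require "logger"
require "json"

# Hypothetical structured event recording whether a policy was applied.
# Emitting it as JSON makes the entry easy to filter in Kibana.
def log_policy_event(logger, project_id:, policy_name:, applied:, reason: nil)
  payload = {
    event: "security_policy_injection", # illustrative event name
    project_id: project_id,
    policy_name: policy_name,
    applied: applied,
    reason: reason
  }
  logger.info(JSON.generate(payload))
  payload
end

logger = Logger.new($stdout)
log_policy_event(logger, project_id: 42, policy_name: "container-scan",
                 applied: false, reason: "invalid policy YAML")
```

As noted above, such log entries are only retained for a week, so this suits spot-checking rather than long-term auditing.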
A
So I do have one other question about this. If we're not able to reliably run the policy itself, how are we going to have confidence in our ability to detect whether or not the policy was applied? Are we going to do this outside the CI pipeline? It seems like, if we can't do the one thing reliably, are we going to run into the same reliability issues with the detection script as well?
C
Yeah, that's a great question.
C
Yeah, so there could be a situation where you don't even know that you should have had a policy, although I think that's less likely than it not running at that point. You know a lot more about this, but I believe there's a hook that we take advantage of to insert our configuration, and maybe, as part of that, we can make a note somewhere: hey, by the way...
C
This
pipeline
should
have
something
that's
not
affected
by
the
pipeline
configuration
itself
and
then,
when
it
finishes,
we
can
check
that
again
and
say:
hey
did
this
job
actually
run?
There
are
some
edge
cases
there
right.
So,
for
example,
if
somebody
has
a
repeated
job
name,
we
we
increment
the
job
with
the
with
the
the
counter,
so
it
becomes
a
bit
more
complicated,
keeping
track
of
what
what
is
what
in
in
the
pipeline.
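The marker idea described here, noting an expected job at injection time and then checking after the pipeline finishes whether it ran, might be sketched like this. The class and method names are hypothetical, and the base-name match is only a rough nod to the duplicate-job-name counter mentioned above:

```ruby
# Hypothetical audit helper: record which policy jobs a pipeline should
# contain when we inject them, then report any that never ran.
class PolicyJobAudit
  def initialize
    @expected = Hash.new { |hash, key| hash[key] = [] } # pipeline_id => job names
  end

  # Called from the injection hook when a policy job is added.
  def expect_job(pipeline_id, job_name)
    @expected[pipeline_id] << job_name
  end

  # Called once the pipeline finishes; returns expected jobs that are absent.
  # Duplicate job names get a counter suffix (e.g. "scan 1"), so we match
  # on the base name as well as the exact name.
  def missing_jobs(pipeline_id, jobs_that_ran)
    @expected[pipeline_id].reject do |name|
      jobs_that_ran.any? { |ran| ran == name || ran.start_with?("#{name} ") }
    end
  end
end
```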
C
Right now we take advantage of CI jobs extensively. We discussed this in part with the registry container scan, because at some point you're going to need to build that SBOM, and where does that run? We can't just run that as a Sidekiq job, so we need somewhere to run it, and right now everything is a CI job.
C
So that's one possibility, I think. But yeah, you seem like you had a thought before.
B
So the easiest way to see if there is a policy that should be applied is to compare the config before and after running our processor, that is, the one injecting the updated CI with policies. If they are the same, so nothing got changed, that means that something is wrong in the configuration itself. So definitely we need to get back to it, investigate, see what we can do, because right now, yeah, we've built a lot of new features around that.
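The before-and-after comparison described here can be sketched roughly as follows; `inject_policies` is a hypothetical stand-in for the real processor, which is considerably more involved:

```ruby
require "yaml"

# Stand-in for the policy processor: merges policy jobs into a CI config hash.
def inject_policies(ci_config, policy_jobs)
  ci_config.merge(policy_jobs)
end

# If injection changes nothing, the discussion treats that as a signal
# that something is wrong with the configuration.
def injection_failed?(before, policy_jobs)
  inject_policies(before, policy_jobs) == before
end

before = YAML.safe_load("build:\n  script: make")
policy = { "secret-detection" => { "script" => "scan" } }
injection_failed?(before, policy) # => false: the policy job was added
```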
A
Is this what happened to me? Yeah, and then you disappeared and came back. I don't know if there's a bug in Zoom, or who knows.
C
Taking a step back from creating solutions: yeah, let's research the problem. Keep in mind, just watch out, so we're not entering a cycle of, you know, patching up and breaking, and patching up and breaking. We do need to understand a bit more what's available to us in terms of observability and tracking what's happening behind the scenes. So: spike issue.
A
Yeah, that seems appropriate. Do you want to repurpose the issue we have there, or do you want to create a new one?
A
Yeah, I'm fine with this. I'm not ready yet to commit to giving our end users visibility into this, though it sounds like I might end up being persuaded of that later on. But at least right now I'd like to keep it internal. That would be the only constraint I would put on this: let's keep it constrained to providing our team with visibility, and let's start with that before we look at, you know, ways to give end users themselves visibility into it.
C
Yeah, I'm happy to take the next discussions there. I was thinking of Brian's points, which were good as well: at some point, how far do you trust your own developers, right? When they turn rogue and they start breaking things, you've got a different kind of problem, but maybe we can help with that problem as well.
A
So the last item I had is mostly just an FYI. I posted it in Slack, but it's a big deal, so I wanted to call it out here verbally as an announcement. It looks like the workspace group finally finished their background migration, which has been, you know, a blocker for us on moving things to group and namespace level. So that's done. They have a small bug, and they've identified the root cause of it.
A
I
think
it
only
affects
like
600
projects,
so
we're
they've
already
figured
out
the
problem
they're
going
to
fix
it
soon.
I
think
we're
at
the
very
end
of
this
blocker
and
we
should
be
good
to
go
to
use
project
namespace.
I
found
out
today
there's
not
even
a
feature
flag,
it's
just
on
by
default.
It's
just
a
question
of
if
you
can
reliably
count
on
every
single
project
having
an
associated
project
namespace
and
at
least
as
of
right
now,
that's
true
for
all
of
our
projects,
except
for
600
of
them.
C
Tricky problem, and it must have been huge for them. Congratulations to that team. I don't think it... well, it slowed us down a little bit.
C
We
did
wait,
one
or
two
milestones
for
it,
but
but
I
think
mache
has
started
in
parallel
while
they
were
doing
the
migration,
so
we
we
have
been
unblocked
for
a
little
while,
although
with
the
with
the
phipps
work,
I
made
a
mistake
with
the
with
the
scheduling
and
I
completely
overlooked
all
those
fips
issues
that
we
offered
to
help
with
and
now
now
we're
pulling
them
in
which
may
affect
our,
which
we
will
factor.
The
planning
that
I
had
in
place
was
a.
A
Yeah, that sounds good. The other thing, and this is a carryover from my one-on-one with Neil: we should probably go through and double-check that we're properly unblocking all of the front-end work and prioritizing those issues, if you haven't talked to Neil about that already.
A
So, I mean, generally on our backend team, we have four back-end developers to one front-end developer, and so the back-end team usually outpaces the front end. But I think we're in a situation where Alexander is not able to work on his top priorities because he has some back-end blockers that are not done. Anyway, we're in sort of a state right now that we might need to take a look at. Let's do that.
C
Our
our
output
last
month
in
february
was
a
little
bit
slower.
There
was
a
lot
of
a
lot
of
time
off,
taken
parental
leave
and
sick
leave,
and
things
like
that.
We
have.
We
have
dominic
with
ko
kovid,
he's
still
struggling
with
it,
but
but
yeah
outside
outside
of
these
things
that
are
not
in
in
our
control.
I
can
definitely
catch
up
with
you
about
identifying
those
blockers
and
see
if
we
can
work
on
them.
D
Yeah, and then something else Alexander and I talked about yesterday: Sam and I talked, and then Alexander and I talked afterward in the afternoon. Something else that we've noted is that it can be difficult for Alexander to put the right amount of attention toward work that will be coming up while he's actively working on something else.
D
So
I
want
to
find
the
right
balance
because
he
he'll
honestly
fast
that
he
might
you
know,
like
a
lot
of
us,
do
off
the
cuff.
That
looks
great
because
you
don't
have
a
lot
of
time
to
look
at
something
we
want
to
try
to
avoid
that.
We
want
to
find
the
right
amount
of
depth
the
right
amount
of
attention,
so
we're
not
missing
this
stuff,
because
it's
it's
likely
that
some
of
these
blockers
were
things
that
could
have
been
caught
earlier
too.
C
Yeah, so that's a good point. We shouldn't really assume that the same developer can, you know, keep delivering what they're focused on while also keeping an eye on all the other planning and, you know, preempting problems that might come up in other areas.
C
No, but just a heads up that I'm off the rest of the week, so, Neil, you might need to catch up next week about this.