From YouTube: Cluster Image Scanning Sync Up
A
All right, well, thanks for meeting with me. I just wanted to sync up on a couple of things this morning and make sure we're on the same page with our plans for production vulnerability scanning. We're still figuring out the final state of the front end, and so I just wanted to make sure.
A
First of all, that I understand the relationship between those policies and environments and clusters, because all of those things are a little bit confusing. And then I wanted to talk about the actual YAML syntax that goes in the security policy, and clarify a couple of things there, if that's all right. Okay, so the way we've got it in the mock right now... let me pull it up.
B
So let me quickly go over what we currently have with the MVC, or actually what we have right now. Let me just quickly share my screen.
B
Okay, so first of all, let me start with the epic and with the plan that we had here. What we want to achieve is to be able to get vulnerabilities from the cluster, from the Kubernetes cluster.
B
We are trying to not think about policies yet, but to provide a way to do it using what GitLab can do right now. So the plan is: there's already a project, cluster image scanning, which is an analyzer, but it's really a purely alpha version. It's basically one bash script and one Ruby script that converts from the format provided by Starboard to the format that we have at GitLab.
B
What it does is connect to Kubernetes and get all the vulnerabilities as a JSON file, given a kubeconfig, and then we use Ruby to convert them. So it's that simple. As for how it's currently configured: if I go to my test application, I have a YAML file that is actually using this cluster image scanning project. It works the same as any other analyzer, so it connects to the analyzer.
B
The analyzer connects to the Kubernetes cluster, gets the results, and those results become available. So for now, what you need to do is specify the kubeconfig.
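To make that concrete, a sketch of the kind of CI configuration being described, where the template path and the variable name are assumptions for illustration, not names confirmed in this conversation:

```yaml
# .gitlab-ci.yml (hypothetical example)
# The template name and the CIS_KUBECONFIG variable below are placeholders
# for whatever the alpha cluster-image-scanning analyzer actually expects.
include:
  - template: Security/Cluster-Image-Scanning.gitlab-ci.yml

cluster_image_scanning:
  variables:
    # Points the analyzer at the cluster. The value should be defined as a
    # masked CI/CD variable under Settings > CI/CD > Variables, not inline.
    CIS_KUBECONFIG: "$CIS_KUBECONFIG"
```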
B
There is already an MR with proposals for the documentation part. For those watching the recording: here we have the template and we have the documentation part, so it might be easier to understand how it works internally. I've written here how to configure the cluster, how to configure the kubeconfig, how to obtain all the tokens that you need, and so on, and then how to configure it in GitLab itself.
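The documentation steps mentioned there end with assembling a kubeconfig file. A minimal skeleton of what that file looks like might be the following, where every name and value is a placeholder:

```yaml
# Minimal kubeconfig sketch for a read-only service account token.
# All names, URLs, and credential values below are placeholders.
apiVersion: v1
kind: Config
clusters:
  - name: production-cluster
    cluster:
      server: https://kubernetes.example.com:6443
      certificate-authority-data: <base64-encoded CA certificate>
users:
  - name: cluster-image-scanning
    user:
      token: <service account token>
contexts:
  - name: production
    context:
      cluster: production-cluster
      user: cluster-image-scanning
current-context: production
```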
B
So I have this YAML file, this YAML file is running, and in the settings, under Settings, CI/CD, I have configured the variable that contains the configuration to connect to the cluster. So, coming back to the prototype we have now: could you send me the link? I'll just open it here.
B
Okay, no worries. So, what we had there is the ability to schedule a scan at a given time, and you select the environment, so that's the important part: how to do it with what we currently have. We have variables, so whenever we set the environment in the policy, we'll set a proper environment variable that will be sent to the analyzer, and the analyzer will decide: okay, I'll filter out those vulnerability reports, because they are not related to this namespace.
B
I don't remember, to be honest, but whenever you have an environment (I'm talking about the back-end code) and you have a cluster configured for a given project, there is a specific pattern that is used to generate the name of that environment that will be used in Kubernetes. So, having the environment, we can map it to a cluster namespace. This is how it works; I don't have, like...
B
I need to take a look at the documentation to see how it works properly, but this is how I was doing it: I was looking at the environment, and I was looking at the cluster connected to the project, and then I was able to generate the name of the namespace that is used to deploy apps, and also to filter those vulnerabilities.
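As a rough illustration of the environment-to-namespace mapping described above: the exact pattern is generated by the GitLab backend, so treat the format below as an assumption to verify against the docs, not a confirmed rule.

```yaml
# For a project deployed to a GitLab-managed cluster, the namespace name is
# derived from the project and the environment, roughly along the lines of
#   <project-slug>-<project-id>-<environment-slug>
# All values below are hypothetical:
project: my-app            # project id 1234
environment: production
generated_namespace: my-app-1234-production
```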
A
I guess what I'm asking is, even in that MR that you've got open there, on line 132, right: you said you provide the credentials in that kubeconfig variable. That seems a little bit redundant, because users are already providing credentials to connect clusters to GitLab in the Infrastructure, Kubernetes tab, right? They either connect through a cluster certificate or they connect through the agent.
B
When we run a pipeline, when there is a deploy, we specify this kubeconfig variable. It's not available widely, because it is a risk to have that: some other job, I don't know, a malicious job, or you could create an MR that could somehow get your kubeconfig from your GitLab. So as for now, whenever you're doing a deploy, this variable is available only to the job that is performing the deploy.
B
But it's not, like, widely available. And the other thing is that when you configure the containers, they have, like, the Kubernetes cluster here now.
B
So, whenever you're configuring it (I need to... okay, okay), whenever you're configuring it, you have the cluster name, the URL, the certificate, and the token, and these are for the cluster-admin role. We don't want to give those permissions to the analyzer; it should only be able to fetch those vulnerabilities from the cluster. So for me it's a security risk that we could give too much to the analyzer itself. Here's the role: I want only get, watch, and list on the vulnerability reports that are available on the cluster. That's why we have a separate role, and that's why we have a separate service account. So the UX improvement on that would be, whenever we are configuring the... I'll just switch to a new project, so I will not expose my credentials to my cluster.
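The least-privilege setup described here could look roughly like the following, assuming Starboard's vulnerabilityreports custom resource in the aquasecurity.github.io API group; the names and namespace are placeholders, and the group/resource should be verified against the Starboard version in use:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cluster-image-scanning
  namespace: production
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vulnerability-report-reader
  namespace: production
rules:
  # Read-only access to Starboard's vulnerability reports; no cluster-admin.
  - apiGroups: ["aquasecurity.github.io"]
    resources: ["vulnerabilityreports"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cluster-image-scanning
  namespace: production
subjects:
  - kind: ServiceAccount
    name: cluster-image-scanning
    namespace: production
roleRef:
  kind: Role
  name: vulnerability-report-reader
  apiGroup: rbac.authorization.k8s.io
```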
B
So, whenever we're configuring it, we could also add an additional service token: a service token without the cluster-admin privileges, with only that small chunk of permissions that we need to get vulnerabilities. So that's the main problem: the service token that we are currently using for the Kubernetes integration asks for too much, and we don't want to be responsible for leaking it or anything like that. We want to be able to ask only for the things that are needed. So that is...
A
Okay, so if we go back to that Kubernetes cluster view: what if we wanted to add a tab up top for security? I guess, actually, after you have a cluster connected, you come in and there's the Details tab, there's a Metrics tab, there are a few tabs there. What if we added a Security tab and listed all of the vulnerabilities for that specific cluster?
B
There will be no magic for users; they will know from which namespaces they want to get vulnerabilities.
B
There will be some magic, and maybe some people are not using environments. Maybe they just want to connect the cluster to get those vulnerabilities, but without the whole integration with the Kubernetes cluster. So definitely what we have in the design is doable. If you want to do it better, that's a great suggestion: just go with Kubernetes namespaces; that will simplify a lot.
A
So that way the workflow would be: you connect your cluster to GitLab first, and then you're able to select it from the dropdown. And yeah, I agree, it's good to not end up using more permissions than we need. The challenge is that users are already giving us cluster-admin permissions if they connect a cluster through the certificate method.
B
Yeah, so it's all about exposure, limiting the exposure.
B
Who knows what could happen, but let's say we have it configured on gitlab.com and you are not part of the company, you're a community contributor, and you're modifying the .gitlab-ci.yml file and creating an MR with that kubeconfig exposed. You could do something malicious with that config variable; I don't know, maybe just show it in the logs or something like that. So even though the variable could leak, we don't want to leak the whole cluster-admin role.
A
It
makes
sense
we
need
to
move
that
out
of
instead
of
having
it
be
an
environment
variable.
Can
we
just
store
that?
I
I
don't
know
where
we
store
the
token
today
for
our
kubernetes
connection,
but
can
we
just
read
it
from
there?
So
it's
not
an
environment
variable,
it's
not
something
that
anyone
can
just
you
know
read
if
they
create
an
mr,
but
that
way
you
know
it's
a
little
bit
more
secure
and
it's
only
accessible
with
the
you
know,
live
container
scanning
job
or
whatever
we
called
it.
Cluster
cluster
image
scanning.
B
What I believe is here as iteration three, use the Kubernetes agent to fetch results, is the last step we wanted to have, because that's how we'll do it eventually. With those first two iterations we want to build the feature step by step, and then extend it to use the Kubernetes agent. It's just to provide a way for users to get vulnerabilities from their cluster into GitLab. Now, the last one is the most secure way.
B
It will not use the pipeline, because the main problem is the pipeline: the way we communicate from GitLab with the runner is through those variables. So at some point they have to be exposed, because jobs are simply Docker images containing different applications, and they can only communicate by getting that data from environment variables. So that's...
B
That's why there is the Kubernetes agent: because there is that risk of exposing the token you use to connect to this cluster. So the first iteration is what's planned next.
B
I don't know in which milestone the solution that we want to use eventually will land, so for now I'm working on the first iteration, so we have something, we can get some feedback, and so on.
B
That's definitely something that we can do. It will just be a separate option. In my opinion, it's good to have two options: someone will use the Kubernetes agent, but some would just like to use the variable. But you know, it's up to customers what they want to do with that: whether they want to configure the whole path with the Kubernetes agent and so on, or keep it simple, like they do for any other analyzer they're currently using.
B
Yeah, that's just the first step, just to get something working, and then definitely improve the UX on that. Because right now, when you read the documentation part, you'll see that it's mainly very manual: you have to do something here, get the token, then get the certificate, get the URL, then run this command, then you get the whole kubeconfig file, copy it, navigate to Settings, configure it. It's not really user-friendly. For now, it is easy...
B
I mean, everyone that is familiar with GitLab can do it, but it's still not really user-friendly; we want to be better. And there is a warning about why it is important to configure it in a certain mode, and so on. So I tried to add all the information, and at the beginning I've added notes that it's still at an alpha stage and it's unstable, so we can change it at some point. But yeah, this is the first step we want to have.
A
Okay, yeah, that's great, thanks for clarifying that. So the answer is: in the long term, instead of specifying environments to scan, we should be scanning clusters and namespaces in those clusters. And then the second change is that we should provide that cluster image scanning token as part of the UI when they set up their cluster.
B
Yeah, so just to clarify, because there was a discussion about where to show those vulnerabilities; that's the problem currently. Okay, we have the vulnerability report, and I recall the whole discussion about having, like, development vulnerabilities or operational ones, and so on.
B
So whether they should be here or not, that's the main problem. For now, in the first iteration, what we're adding is an additional scanner called cluster image scanning, which we just clarified two days ago, and it will be available here, and you'll see, like, okay...
A
Yes, so we would have a Security tab here for just the vulnerabilities in this cluster specifically, but then they would also show up in Security & Compliance, in the vulnerability list under the operational tab there. One project may actually be scanning multiple clusters, and so here you would get everything for the project, and then, when you're viewing the cluster itself, you would see vulnerabilities specific to that cluster, similar to the way we do it in pipelines today: we have that Security tab, so you see the vulnerabilities just for that pipeline.
A
Okay, that's good! They would actually show in both places, yeah, again just like pipelines.
A
You know, and this would reference a cluster that you've set up, say a production cluster, and then in here you would specify namespaces. I'm actually not great with the YAML syntax.
A
But something along the lines of, and I'm probably going to mess this up exactly, but anyway: you would have a listing of namespaces here, you know, production, and, I don't know what else, staging as well, just for example. But okay, in here I'm wondering if we need this to be different, or if this can just be container scanning, and we would know whether it's a live container scan or a project scan, I don't know what to call it, a project scan versus, you know...
B
For me, the main question is the difference between rule and action: a rule specifies when and on which conditions, and actions say what to do. So I'm thinking about where we should actually keep the information about the cluster and namespaces and so on; maybe it should be in the action. So: okay, whenever this is happening, let's say every day at midnight, run container scanning, or cluster image scanning (that's how we name it currently), and here we would just add it.
A
So I think what we would do is: you could just specify the cluster up top, you know, schedule this cluster to be scanned, and we can pick container scanning. For us on the back end, they're two very different things, this live scan versus the container scan of code, but from the customer's perspective I don't think it's really that different. At the end of the day, it's just running a Trivy scan, right? And so, for example, if a customer has these rules, then on the back end...
A
So
it
actually
mapped
to
two
different
things
in
our
back
end,
but
it
I
feel
like
from
the
customer's
perspective,
that
they
would
not
need
to
differentiate
that
you
know
they're
just
specifying
where
if
we
move
that
down,
then
I
think
it
gets
really
confusing,
because
now
you
have
the
question
of
what?
Where
are
you
scanning
it
up
here
for
some
things
and
then
down
here
for
others,
and
then
they
have
to
start
to
separate
those
in
the
mind.
B
Yeah, I agree. It's an interesting UX issue to solve: how to do it in a user-friendly way. Because for me, when I look at the rules and read the whole sentence, "if scheduled the environment production to be scanned", it doesn't... okay, right, exactly: I don't get the word "if" at the beginning. So maybe we should improve that, but that's definitely something that UX can solve.
B
For me, there's no difference whether it's in rules or in action, because I'll read it either way. In terms of YAML structure, I'll send you how it could look, but actually what you did is great; we just need to make it YAML-compatible and then it will be okay.
A
Okay, yeah, I'm sure I probably messed up the YAML; I always mess up the syntax of it. But yeah, that "if" is a little bit funny up here. It makes a lot of sense for almost everything else that we do: you know, "if a pipeline is run"; for our network policies, "if network traffic is inbound"; for some of the scan result policies that we're planning to do, "if a secret detection scan finds this". So it works for everything except for schedule.
A
It
is.
It
is
a
little
bit
of
incorrect
grammar
there.
It's
not!
So
I'm
not
sure
how
that
you
know
if
yeah
it
gets
a
little
bit
tricky
I
mean
we
can
always
because
really
there
shouldn't
be
anything
here.
You
know
for
a
proper
english
sentence,
but
you
know
how
do
you
you
know?
Maybe
those
change.
You
know
as
these
change
so
like
you
change
this
to
schedule
on
the
word
if
disappeared,
yeah
we'll
have
to
play
with
that
a
little
bit
when
we
get
closer
to
hitting
this
front
end
cool.
A
Are you okay with doing that? Do you have any concerns with, you know, specifying the cluster and the namespaces up here?
B
No, I don't see any problems. We can already do it using environment variables, so it all ends up as translating. I mean, in the current state, in the first two iterations, that will be translating what we have in the policy into environment variables sent to the job, and the first iteration will just read those policies from the database or from the repository.
A
Okay, great, well, thanks for clarifying all of that. I know Annabelle's working on all of these mocks, so we're excited to see this come out, and I know we've got a lot of users that are really excited to start scanning their production environments. So, looking forward to it.