A
Welcome to our container security group meeting. I've got the first item. I just wanted to follow up on this one synchronously rather than asynchronously, because I feel like there's a discussion to be had around it. So, going back: thank you, Arthur, for your additional research on adding support for GMA v1, GMA v2, and self-managed for the alerts.
A
I think now we have a decision to make, right? Which of those three bullets do we go with?

A
As I've thought about it, it's a little bit hard to say exactly where things are going to land. I know the Configure team is working to add yet another option to that mix by having an agent in the cluster, and from my discussions with their team it's a little bit hard to tell exactly what the future of GMA v1 and v2 is going to be.
A
So my inclination for now is to just go with the cheapest option, so that we can get feedback and get it released and out to customers, and then we can add support for the other two bullets later on if customers ask for it. That will buy us some time as well to see how things shake out on the Configure team, so we know what we want to support long term.
A
So I guess that leads me to my question, which is: what's the relative cost of each of those three items? Which of the three is the cheapest and easiest?
B
Installation of Prometheus is a bit harder, so we have to either, as I mentioned, improve the support between GMA v2 and v1 in the same deployment environment or, as we discussed yesterday, write some docs that clearly say there is a risk in mixing the two in the same cluster and, to avoid issues, do this specific thing. But yeah, for alerts specifically, GMA v1 is just slightly easier to integrate, again since we're aligning with the Monitor team.
B
It feels like there's a shift in direction right now, at least as written in the docs. There were a lot of broken links, and I think what they're trying to do is focus on the incident management part rather than the alerts part, and alert generation is being transformed into something generic, as we briefly discussed yesterday. At least that's how it feels to me.
B
From reading their docs, it seems they want to provide you an endpoint, and you generate the alert however and wherever you want. It might be your Prometheus instance, it might be any third-party integration. They just don't care: as long as the alert payload arrives at the endpoint, they start doing their UI magic, creating alerts, fingerprinting alerts.
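As a rough illustration of the fingerprinting idea mentioned here (a sketch only, not GitLab's actual implementation), de-duplication can be done by hashing a stable subset of the incoming payload. The field names and hash choice below are assumptions for the sketch:

```python
import hashlib
import json

def fingerprint_alert(payload: dict, keys=("title", "monitoring_tool", "hosts")) -> str:
    """Derive a stable fingerprint from selected payload fields so that
    repeated notifications for the same underlying problem collapse into
    one alert. The key names here are illustrative, not real API fields."""
    basis = json.dumps({k: payload.get(k) for k in keys}, sort_keys=True)
    return hashlib.sha256(basis.encode("utf-8")).hexdigest()

# Two notifications for the same problem map to the same fingerprint,
# even though a non-key field (the timestamp) differs:
a = fingerprint_alert({"title": "High CPU", "monitoring_tool": "prometheus",
                       "hosts": ["web-1"], "ts": 1})
b = fingerprint_alert({"title": "High CPU", "monitoring_tool": "prometheus",
                       "hosts": ["web-1"], "ts": 2})
assert a == b

# A different problem gets a different fingerprint:
c = fingerprint_alert({"title": "Disk full", "monitoring_tool": "prometheus",
                       "hosts": ["web-1"]})
assert a != c
```

The point of hashing only selected fields is that volatile data (timestamps, counters) does not spawn a new alert on every notification.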
B
With an external Prometheus, the downside is that you can't create alerts on it from the GitLab UI, and that's the issue I uncovered in addition to what I said yesterday.
C
That's what I understood, yeah.
B
Yesterday my idea, and my understanding, was that an external Prometheus is probably the best option for us right now. But a first investigation after the discussion we had yesterday uncovered that alert creation is not possible at all on external instances. So, with these questions that Sam is asking, what's the easiest way for us right now to at least have some kind of alerts in place? Unfortunately, it's GMA v1.
B
The alert can be generated by literally anything, so the way they try to abstract it is by providing an API endpoint that you can use to create alerts inside GitLab. It kind of shifts the responsibility for generating the alert entity in the database: previously they were relying on Prometheus, now it's on something external. I think they mentioned Opsgenie in the docs as something that's coming as an additional alternative to Prometheus, which I understand is a paid service, and then again they literally allow you to create, like, free preloaded alerts in the system.
B
So again, I'm just trying to say that GMA v1 is the easiest option right now, but we can end up in the same situation we're having right now with the Orchestration group: the Monitor team is shifting in a different direction. They're kind of moving away from Prometheus, at least that's the feeling I'm getting, so sticking to GMA v1 still might be a risky long-term decision, because it doesn't sound like...
A
Yeah, I feel like that's even more reason to go with the cheapest option. If things are changing there, and we've got things changing in the Orchestration group and things changing in the Monitor group, let's make the minimal investment to get it working.
D
On the same note as what Sam is saying: I think, looking back now, if we had just done Cilium for v1, we would have avoided so many issues that we are having now. So sometimes I just prefer to go cheap, see what comes out of the research, and then later on we can integrate better if needed.
D
No regrets, it's just learning.
B
Right, I kind of disagree, because the reason to go with v2 specifically is that Cilium doesn't really fit the v1 paradigm, where we have predefined vendored Helm values sitting in GitLab and you can't change them. That's not going to work for Cilium, at least as I see it.
B
If
we
will
go
with
the
one,
we
essentially
will
cut
off
possibilities
of
using
native
routing
completely,
and
I
think
it's
just
a
bad
decision.
So
I
don't
think
that
gmail,
the
one
for
psyllium
is
like
any
kind
of
good
possibility.
It's
kind
of
similar
to
parameters
because,
like
while
we
can
create
alerts
up
there,
we
just
limit
what
user
can
do
overall,
because
we
managed
everything
in
parameters
and
and
the
users
can't
control
anything
what
is
happening
with
their
parameters.
C
Cool, thanks for that. If we're going with v1 and we have to mix with Cilium on v2, we talked before about addressing that with documentation: just being plain about the facts and saying, here's the situation, here's what you need to know about this choice.
C
Would you consider the next step after documentation to be actually making it safe to mix v1 and v2?
B
So the problem right now is that triggering the v2 pipeline after a v1 install can essentially result in an uninstall of the v1 application, and I think it's just the nature of the installation being so forceful. So we could maybe relax the v2 pipelines a bit: by default, prometheus will be set to false in v2, and that will cause an uninstall. Maybe we can change this logic so that if the user did not explicitly set prometheus to false, we don't imply that value automatically. I think this fix is not really hard to do in cluster-applications, and it can be a viable solution that will allow somewhat better mixing of v1 and v2. Did I capture your idea correctly in the minutes?

Yeah, I think so. Yeah.
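The relaxed pipeline logic discussed above can be sketched minimally in Python, assuming the config is a simple map of app name to enabled flag. The function names and config shape are illustrative, not the actual cluster-applications code:

```python
# v2's implicit default, per the discussion: prometheus is assumed false
# unless the user says otherwise.
DEFAULT_APPS = {"prometheus": False}

def apps_to_uninstall_old(user_config: dict) -> list:
    """Current (forceful) behaviour: defaults are merged in, so an app the
    user never mentioned is implied false and gets uninstalled."""
    merged = {**DEFAULT_APPS, **user_config}
    return [app for app, enabled in merged.items() if enabled is False]

def apps_to_uninstall(user_config: dict) -> list:
    """Relaxed behaviour: only uninstall apps the user *explicitly* set to
    false; apps missing from the config are left alone."""
    return [app for app, enabled in user_config.items() if enabled is False]

# A v1 user who never mentions prometheus in their v2 config:
config = {"cilium": True}
assert apps_to_uninstall_old(config) == ["prometheus"]  # today: surprise uninstall
assert apps_to_uninstall(config) == []                  # relaxed: left untouched
```

The design point is the difference between "absent" and "explicitly false": treating them the same is what makes mixing v1 and v2 unsafe today.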
A
Yeah,
so
I
I
agree,
so
those
are
kind
of
our
two
options
there.
I
guess
the
question
is:
what's
the
relative
cost
of
each
one,
because
we
could
just
do
docs
and
we
could
actually
try
to
do
as
you
suggest
their
arthur
and
relax
the
vt
pipelines?
B
Against
that,
like
it
literally
won't
take
much
to
implement,
we
just
have
to
change
how
can
file
and
triggers
deployments
and
for
the
one
alerts
like
a
lot
based
on
the
one.
It's
pretty
easy.
I
mentioned
yesterday
that
using
v1
I
was
able
to
essentially
set
up
like
a
prototype.
What
what
we
are
kind
of
trying
to
do
quite
fast,
so
the
one
update
is
pretty
much
means
relying
on
what
monitor
team
already
has
in
place.
C
No, I'm happy with that. I agree with it as well; I just wanted to make sure you're on board.
A
Great, okay. So decision-wise, it sounds like we'll go with GMA v1 for now, since that seems to be the cheapest, and we'll start with just documentation. But in parallel we'll talk with the Configure team and see if they're open to fixing the problem. I mean, we're going to need documentation either way. I think in the spirit of iteration we can probably release with just the documentation and then fix how v1 and v2 work together later.
C
Thanks. Just to close this out: Zamir, you've refined the epic. With these changes to some of the issues that we have already refined, are you happy to go in there and update them? Do you know what to do, or would you like some help?
A
So, I know we've talked about this before to some extent. We're getting closer to actually formalizing some plans around scanning for vulnerabilities in a production environment. That's probably one of the next big things coming up that we still need to define for planning breakdown at a high level.
A
The idea is that we would start with just package scanning, which is what Clair and Klar do today: looking at the packages that are installed, comparing them to known CVEs, and reporting on any vulnerabilities. From there, we would want to actually create a vulnerability object rather than an alert, because it is a vulnerability, and feed that back into the security dashboard.
A
So I just wanted us to start thinking about that and discussing how it might work. We don't have to sort it all out today; it's not ready for formal planning breakdown or anything like that, but I just wanted to open the discussion. That's going to be kind of the next big thing coming up on the roadmap.
A
Of course, we're talking about scanning something in production, which lives outside of a pipeline, so we need some way to schedule that on a regular basis. Suppose the user says, "I want to scan once a day at 2 AM." That brings up questions of where that configuration would live. Anyway, I just wanted to start the discussion on that and see what feedback and questions you have for me, so I can flesh out that issue some more and get it ready for planning breakdown. Zamir?
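One possible shape for that scheduling, purely as a sketch: if the scan runs as a cluster-managed application, the "once a day at 2 AM" setting could live in a Kubernetes CronJob schedule. Every name, image, and flag below is a hypothetical placeholder, not something decided in this discussion:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: production-vulnerability-scan   # hypothetical name
  namespace: gitlab-managed-apps        # hypothetical namespace
spec:
  schedule: "0 2 * * *"                 # the user's "once a day at 2 AM"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: scanner
              # placeholder image and flags; the actual scanner is undecided
              image: registry.example.com/container-scanner:latest
              args: ["--scan-running-containers", "--report-vulnerabilities"]
```

A CronJob keeps the schedule with the cluster rather than on the GitLab server, which is one answer to the "where does the configuration live" question; the trade-off is discussed next.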
D
I just had one question; I was a little bit unsure of the context there. I was thinking that we would have something like a cluster application, installed through cluster management, that would handle this configuration and generate some logging artifacts.
A
Is that how you would go? Would we want to install an application that way, or would we have the code execute on the GitLab server and just remotely connect into Kubernetes to scan it? I don't know; I'm asking.
D
I'm not sure. From the applications that we saw before, when we had that spike where we looked at a couple of applications: was there any application that would be a good fit for the situation you're considering for the first iteration, or would you be expecting something different, like a different scenario for this?
A
I'm
not,
I
don't
totally
understand
the
question,
but
the
scenario
would
be
that
you
know
say:
I've
deployed
five
pods
into
kubernetes,
and
I
you
know
each
of
those
pods
just
has
one
container
each.
You
know
so
five
different
containers.
D
I don't know. I think, for CVEs and general vulnerabilities in the related images, Clair can do that at the pipeline level, but then it doesn't consider what's running inside; it's just about what the image contains.
B
I
I
think
the
there's
worse
can
refine
that,
I'm
pretty
sure
the
queer
can't
do
like
what
sam
is
born,
says
to
scan
for
cvs
from
inside
the
running
container,
and
I'm
pretty
sure
claire
can't
do
that.
The
clear
works
a
bit
differently,
so
it
goes
grabs
the
docky
image
it.
It
documents,
essentially
a
pie
of
different
levels
of
the
system
that
is
sliced
together.
So
it
goes
and
checks
each
slice
against
the
vulnerability
database.
B
The
3b
container
scanners
that
I
link
here
does
it
time
sam.
It
actually
can
do
that.
You
can
put
trivia
inside
the
container
and
it
it
can
do
scans
from
the
sun
container
too.
It
also
supports
scans
from
the
outside
containers.
The
same
way,
queer
does
trivia
is
actually
a
really
good
scanner
for
our
use
case,
and
it's
really
easy
to
kind
of
use
it.
You
get
binary,
you
put
it
inside
the
container
and
you
can
do
it
at
the
runtime
level.
B
If you have permissions to do that inside the Kubernetes cluster, obviously, you can just run the binary from inside the container, and Trivy will do it. There is a huge problem with Trivy, though: it's implemented by Aqua Security, which is a direct competitor to us. So I'm not sure if we can use it, but Trivy is probably the best scanner on the market right now.
B
Queer
is
not
being
actively
developed
as
I
understand,
and
it
has
a
really
crucial
limitation
right
now.
It
only
supports
docker
images
and
there
are
different
specs
for
how
to
create
images
and
the
most
like
actively
developed.
Spec
right
now
is
ocr
right
on
its
official
container
spec
right
now
and
dock
is
slowly
been
going
or
going
away
and
queer
can
only
scan
specification
of
a
docker,
but
here
we
can
scan
both
specifications.
B
So I'm not sure where GitLab stands, like what GitLab's politics are in terms of using a competitor's product inside ours.
A
I don't know; we'd call it "GitLab scanning" or something, we'll make up our own name for it, just to help a little bit. That way, if we ever did switch away from Trivy in the future, it would be more transparent to the end user. Of course, there's still going to be some impact, because the functionality will change at least a little bit, but it'll at least be a little more separated and less visible to the end user. But I think it's an option.
B
Trivy also has a really cool feature, which is a client-server mode. Scanning itself doesn't do anything unless you have a database, and with Clair you have to constantly have a database next to you. With Trivy, you can use the Trivy client to get information from the runtime, but the actual matching against the CVE database, I think, happens externally. So it's a really good model for us: we can have a server inside our runtime somewhere.
B
That way we have fewer restrictions, and Trivy would be doing only the lightweight operations inside the container. Yeah, Trivy has way more features, and they are quite useful in most situations, you know.
E
The ones that I'm familiar with are not container native. They're things like: you give the scanner credentials to the running container, and then it does a network scan, SSHes in, and gets package listings. OpenVAS is a good option for things like that, but it is not Kubernetes or container friendly, so something like that would probably not be a good fit for us.
B
What I wanted to say is: I think Trivy has a comparison with competitors on its GitHub page, if I'm not mistaken, so maybe we can look for alternatives from there. But I just have not seen anything else mentioned in the posts that I was previously researching.
A
All
right:
well,
thanks
for
your
input
here.
Hopefully
I
help
to
clarify
what
we're
looking
for
requirements-wise
a
little
bit.
Obviously
right
now,
our
focus
is
on
alert
management.
So
you
know
don't
worry
too
much
about
this,
but
I
will
post
some
of
those
open
questions
on
the
issue
since
we're
at
time
here,
and
we
can
continue
the
discussion
they
think
you
know,
just
as
we
start
to
think
about
how
we
might
head
down
that.