A
Welcome to our group discussion for the container security group. Zamir, it looks like you've got the first item.
B
Myself? Oh, the demo, yeah. We were able to get CAS and agentk working on staging, which is way easier than setting up the local environment. So I created a video to help people go through the steps; it's basically three steps to get this done.
A
Yeah, I watched that today, and great job on that. I'm super excited to see all the progress we've made, and having it up and running on staging is a big deal. That means we actually have some good end-to-end testing starting to happen, so glad to hear that. Let's see: Lindsay had a couple of items for planning breakdown and says she's not going to be present. I'm not sure what she wanted done with these.
A
So, let's see: "Create end-to-end test for project-level alerts dashboard" is the first one.
C
So this was created as part of an OKR to improve our engineers' knowledge of end-to-end tests, and we also don't have end-to-end tests for containers, so we'd like to create some. With the planning breakdown, it's basically answering the questions that we always ask: are the requirements clear enough, do we know what the dependencies are, and do we need to break it down into multiple iterations?
C
So I can see that I'm the only person who's actually read the issue. Do we want to do it asynchronously, or do you want to have a quick read and try to answer the questions?
D
I think this ties into one of the discussions I have in the meeting notes further down about what's left to release alerts; this one talks about end-to-end testing. The second issue that Lindsay brought up is to talk about load testing for the alerts. Are these two things necessary for release?
A
That's the last thing we want to have happen here. You know, little bugs in the UI, or even if we just crash GitLab, that's nowhere near as bad as crashing the customer's application, right? So I would say, with any testing time that we have available, that should be the thing that we poke at the most.
C
So, to talk about the risk management there: the probability of this affecting the cluster is different from what we had in the past because, and Amelia, this is where I'm going to need your help, as I understand it the agent runs in its own container. So if it crashes, it's going to crash itself; it doesn't affect anything else in the Kubernetes cluster.
B
They are kind of trying to test this integration over time and see the behaviors before they actually put it on gitlab.com. So then, in terms of CAS and agentk as a whole, there's a kind of black-box testing happening.
B
Yeah, some people might have the whole setup in the cluster: GitLab, CAS, and everything. In that case, if CAS has an issue, that would be a problem. But in the case that you're mentioning now, where you have GitLab somewhere else and you just have the cluster integrated through the certificate-based integration, then agentk would be the only one running in the cluster.
C
So we still need to do the testing; I just think it's less likely we'll have these sorts of issues. So out of these two, do we have opinions on what we should prioritize?
D
That's true. I think... sorry, I might have cut you off.
D
I was just gonna say I agree with Sam: anything that requires automation, I don't think it's necessary for release. I think we should be testing this, and I think we should be load testing this, but automating them, one or both or whatever, is not necessary. So I think we should push off the planning breakdown of the automated end-to-end test until later on, seeing how we have not reviewed it yet. And then the other one, I don't think we... I'm assuming...
D
I haven't clicked on it yet, so I'm assuming no one's looked at that either, but that one does feel like it should be looked at and talked about before release. But Zamir said it's not even an issue, so...
C
Yeah, I'd love to get the load testing done. So we really just need to answer here, right now, whether the description is clear enough. Oh, we can do it offline.
C
All right, does anybody think this would take more than a milestone to do, or that it needs to be broken down? Or is it a one-person job?
C
Cool, so I'll label this for refinement, and we'll pick someone to look into it.
A
Great. So then, other topics to discuss: I know we've had a lot of asynchronous discussion about Trivy versus Clair, and we're getting to the point where we need to make a decision. So I just wanted to circle up with the group; you know, we've had a lot of discussion, again, asynchronously in the issue.
A
It's looking like the decision here is to go with Trivy. Before we make that a finalized decision, are there any other concerns with it? And my second question would be: do we have at least a high-level estimate of how long it's going to take us to switch over? Talking with Tiago in my 1:1, maybe roughly a month of one person's time, but I just wanted to socialize that with the rest of the team and see if that felt about right.
E
Well, it seems like, yeah: I've done a PoC in Go, and we also have GCS, so it looks like we've got two tools that are pretty much ready to go. When we want to switch to Trivy, there needs to be some additional work on either the Go or the Ruby implementation, but we're in a pretty good spot with both tools, with both languages.
E
The thing is, there are going to be some unknowns, some things that maybe we didn't think about in the way that Trivy represents things. Even though we've done a kind of deep-dive investigation into it, maybe there are some things that we might have missed. But for the most part, I don't think there are really going to be a ton of surprises here, so I think one milestone is fine.
C
I know it's a different question, but we do have the issue there to decide whether we want to use GCS or Klar. And the reason this matters is that the sooner we decide, the sooner we'd be able to start the work in 13.10. And the reason I would like to start in 13.10 is that, if we release this in 13.10, then we get two milestones where, I don't know...
C
If we use a feature flag, or some other way to keep a version of the scan that uses Clair and have a version of the scan that uses Trivy, we can run them sort of in parallel, and we will have two full milestones to figure out if there are any major issues before switching the default scanner to Trivy in 14.0.
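The parallel-run idea described here can be sketched as a feature-flag check that picks which scanner a job uses, with Clair staying the default until the flag flips. This is only an illustrative sketch under assumptions: the type and function names are invented and do not reflect GitLab's actual feature-flag API.

```go
package main

import "fmt"

// Scanner is a minimal stand-in for a container scanner implementation.
type Scanner interface {
	Name() string
}

type clairScanner struct{}

func (clairScanner) Name() string { return "clair" }

type trivyScanner struct{}

func (trivyScanner) Name() string { return "trivy" }

// pickScanner returns the scanner implied by the feature flag. While the
// flag is off for everyone, both code paths exist side by side, which is
// what gives the two-milestone window to compare results.
func pickScanner(trivyEnabled bool) Scanner {
	if trivyEnabled {
		return trivyScanner{}
	}
	return clairScanner{}
}

func main() {
	fmt.Println(pickScanner(false).Name()) // clair
	fmt.Println(pickScanner(true).Name())  // trivy
}
```

The design point is that flipping the default in 14.0 is then a one-line change (the flag's default value) rather than a code swap.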
B
Can I just ask how many people are going to be working on this? Because we have the... we have the container scanning part of the project, the security orchestration part, and also...
A
Yeah, we'd probably have one person on this, and then obviously Alexander: most of his time would still be on the alerts improvements, since those are very front-end heavy. I think there's like one that has a small back-end component to it, but that would let the rest of the engineering team work on the security orchestration work.
E
The Ruby one? Yeah, I could, but at the same time it seems like GCS is kind of John's; it's basically his baby, yeah.
E
Yeah, I'm going to respond to some of the comments in the issue. I've got some other responses to write, so yeah.
C
I'm giving everyone time until Friday, so nobody's rushing this through, but then I'll read through it and make a call.
A
Great, well, we'll call the decision to go with Trivy official then. At this point, I mean, one person for one milestone is totally worth it from my perspective.
C
Then, yeah, we mentioned it in our 1:1, Sam, but I didn't bring it up then; just to bring it up here: we do have that issue with the vulnerabilities DB on Clair. Adam, I don't know if you've been following it, yeah.
C
Yeah, Mache asked you for an opinion there, or maybe I'm confusing issues now, but...
A
If that needs to change, I'm sure customers will let us know. But otherwise, for the sake of moving forward, rather than spending time continuing to build on old code, I agree with that: try to move forward with Trivy and let that issue sit.
C
So I think what I'll do with that issue is this: right now it's an S2 P2. I'm gonna leave the severity, but I'll reduce the priority, because we've just made a decision that impacts the priority. And then, if customers tell us that no, this is causing real issues, we'll bump the priority again, and then we'll probably need to bring the update project into GitLab and run it inside GitLab.
C
That's it on that topic, but I agree with your point there, Sam. I do think we're better off spending the resources on putting Trivy in a good place, stabilizing Trivy.
D
Alexander, you're up. Cool, so this is just going back to the question I asked earlier: what is left to release alerts? I know there's documentation that needs to be done; I just put up an MR today for the front-end docs, and I can see I'm getting some feedback. Zamir has commented that he's going to be able to put up some documentation for the back end, which is excellent.
D
He's also got several videos of how to set this up in GitLab and set it up in your local environment, all of which could be used in the release post. Sam has the release post going, that's great. Testing: we've talked about that. Oh, and Sam is filling out some things right now. There should be some end-to-end testing; I should actually check the MVC epic to see if there's an issue for that. But basically, I'm just trying to collect all the issues left for release.
B
Just for the sake of clarification: there's no way that we can crash the cluster. The worst thing that can happen is that the agent just crashes and the pod just keeps trying to restart with the new image. That's the most that can happen.
D
Speaking of which: in the front end, in the UI, there's still a warning when you enable the alert, something about load. Does that need to be updated? Then I could...
A
I think that's still accurate. The agent should have throttling mechanisms in place where, if somebody tries to collect an alert on every single piece of traffic that comes through (obviously we're not set up to handle that mass load of alerts coming in), it's going to throttle itself and start dropping alerts rather than feeding them into the GitLab API and the database.
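The throttling behavior described here could look roughly like the following fixed-budget sketch: forward alerts up to a per-interval budget and silently drop the rest instead of flooding the API. The type, method names, and numbers are assumptions for illustration, not the agent's real implementation.

```go
package main

import "fmt"

// throttle enforces a fixed budget of alerts per interval.
type throttle struct {
	budget    int // alerts allowed per interval
	remaining int // allowance left in the current interval
}

func newThrottle(budget int) *throttle {
	return &throttle{budget: budget, remaining: budget}
}

// allow reports whether the next alert may be forwarded; once the
// budget is exhausted, further alerts are dropped for this interval.
func (t *throttle) allow() bool {
	if t.remaining == 0 {
		return false
	}
	t.remaining--
	return true
}

// reset refills the budget; a real agent would call this on a timer.
func (t *throttle) reset() { t.remaining = t.budget }

func main() {
	th := newThrottle(3)
	forwarded, dropped := 0, 0
	for i := 0; i < 10; i++ {
		if th.allow() {
			forwarded++
		} else {
			dropped++
		}
	}
	fmt.Println(forwarded, dropped) // 3 7
}
```

Dropping rather than queueing is the key design choice: a burst degrades alert delivery instead of degrading the GitLab API or database.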
B
Yeah, there is an ongoing discussion about how the specifics of the limits are going to work on the agent side. I think they are going to play out with maybe a new, different architecture than what they have now, so I'm gonna check with the folks from Configure, and I can post a link somewhere for us to keep an eye on that.
D
Awesome. So, looking at the MVC then: I see the doc updates, and I see reviewing load-testing capabilities, but I don't see an issue for end-to-end testing capabilities. Sam, we should do some end-to-end tests and some load testing, so I guess we should create an issue just talking about someone going through and setting it up. Though I guess Zamir already did that; does that count as end-to-end testing?
A
Yeah, that counts. I think we've done that end-to-end test, as far as I'm concerned, because, Zamir, you created an alert, you fired an alert, and it showed up in the dashboard. I don't know if we've tried to dismiss an alert already, but that would sort of be the last piece of that end-to-end test. So I would consider that one done, or nearly done.
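The manual end-to-end flow just described (fire an alert, see it on the dashboard, dismiss it) can be modeled against a tiny in-memory store to make the steps concrete. Everything below is hypothetical; it mirrors the conversation, not GitLab's actual API or test suite.

```go
package main

import "fmt"

// dashboard is a stand-in for the project-level alerts dashboard.
type dashboard struct {
	alerts map[string]bool // alert name -> currently visible
}

func newDashboard() *dashboard {
	return &dashboard{alerts: map[string]bool{}}
}

// fire records a triggered alert so it shows on the dashboard.
func (d *dashboard) fire(name string) { d.alerts[name] = true }

// visible reports whether the alert currently shows on the dashboard.
func (d *dashboard) visible(name string) bool { return d.alerts[name] }

// dismiss removes the alert, the one step of the end-to-end flow the
// conversation notes may not have been exercised yet.
func (d *dashboard) dismiss(name string) { delete(d.alerts, name) }

func main() {
	d := newDashboard()
	d.fire("policy-violation")
	fmt.Println(d.visible("policy-violation")) // true
	d.dismiss("policy-violation")
	fmt.Println(d.visible("policy-violation")) // false
}
```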
D
Cool. So then docs, load testing, and turning the feature flag on by default sound like our last three remaining tasks. And as we're getting through those, I mean, feel free, everybody, to just poke around with some of the alerts and try to do things in the UI; that'll build our confidence in this.
A
Yeah, and given the architecture that we've described, I would say (it's even in that issue) I don't think the load testing has to be a blocker for release. If we can get that done before we release, that would be great, just to give ourselves a little bit more confidence, but the worst that can happen is that our own pod keeps crashing.
C
All right, awesome. All right, yeah, so that's different. We can do that for our own instance, but switching the flag on by default means that it would be available for any customers with a self-managed instance as well. Sorry, we should not do that for gitlab.com for the same reasons; switch it on in production, or switch it on by default, rather.
B
Yeah, can I ask a question about something we might be forgetting: CAS is not to be enabled on gitlab.com?
A
Well, I know we're technically just past the 25 minutes, so if you need to drop, feel free. I wanted to share something really quick: as we were talking about container scanning, it crossed my mind that we've got a number of other issues that would be really nice to address at the same time. So I just wanted to share these briefly with the group; this is not like a planning breakdown or anything.
A
I just wanted to call some attention to them. So, in our priorities here (and I probably need to reorder these a little bit), a big one is to stop using root in the Klar analyzer; that's preventing us from running on OpenShift. And then we actually have a few that are really documentation-related. This allow-list usage: our documentation is not great at the moment, and it's confusing customers, so that would be one.
A
Yeah, we're not de-duplicating our findings in a way that meets all customers' use cases: it works for some customers and does not work for others. We need to change our default setting there and then also let it be customizable, because there's going to be no way to do this right for every customer; we need to let them customize it and adjust for their own areas.
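The identifier-based de-duplication being discussed can be sketched with a customizable key function, which is the part that would let customers adjust the default. The field names and keys below are illustrative assumptions, not the actual report schema.

```go
package main

import "fmt"

// finding is a minimal stand-in for a vulnerability finding.
type finding struct {
	identifier string // e.g. a CVE ID
	location   string // e.g. the image layer or path
}

// dedupe keeps the first finding per key. The key function is the
// customizable part: keying on identifier alone collapses more
// aggressively than keying on identifier plus location.
func dedupe(fs []finding, key func(finding) string) []finding {
	seen := map[string]bool{}
	out := []finding{}
	for _, f := range fs {
		k := key(f)
		if seen[k] {
			continue
		}
		seen[k] = true
		out = append(out, f)
	}
	return out
}

func main() {
	fs := []finding{
		{identifier: "CVE-2021-0001", location: "layer1"},
		{identifier: "CVE-2021-0001", location: "layer2"},
	}
	byID := func(f finding) string { return f.identifier }
	byIDAndLoc := func(f finding) string { return f.identifier + "|" + f.location }
	fmt.Println(len(dedupe(fs, byID)), len(dedupe(fs, byIDAndLoc))) // 1 2
}
```

This also shows the risk raised next in the meeting: if users override identifiers, every key built on them changes, and the grouping behavior shifts in ways that are hard to debug.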
E
That's the one that I pinged you about, yeah. I need to look into that further; I need to spend some time thinking about it, because so much is based on the identifiers of this stuff, and we could run into a lot of really strange behavior that's really hard to debug because people have overridden these identifiers. I'll look at it and write some comments.
E
John changed it, and then you had some questions about it that I responded to with a long comment, which I can link here. But yeah, there are a lot of considerations around this. I'm sure it's possible to be flexible, but the downside is that it could introduce a lot of really tricky behavior.
A
No, absolutely. I have a proposal in there that I think will help, but I welcome any feedback there. And again, I just wanted to bring some visibility to those issues, since we talked a lot about container scanning today. I don't know who's going to end up doing that work, but it would be good for them to at least have these issues in the back of their mind as they're doing some of that work.