From YouTube: Security Policies Vision
A: So hello, everyone, thanks for joining. I just thought I would set up an extra meeting on our calendars so that we could have some dedicated time to talk a little bit about the future direction for the security policies group. I know a lot has shifted around recently, and, you know, some engineers are moving over to Threat Insights, so we now will have... yep, including Brian.
A: It's really going to allow us to focus very heavily on the security policy editor, and there's a lot planned there. So that's what I wanted to cover today, because I really see that as a big value add for customers who are purchasing Ultimate. We see that as a big value driver all the time in deals. I know I shared that quote recently from Samir, who walked through all of our security features with his customer, and they were kind of medium about all of it. Then he showed them the security policy editor and they got really excited, and that's because this is something that we have that pretty much nobody else has on the market. This is a very unique value proposition, and this is an area where GitLab is really leading the way in terms of innovation, flexibility, and usability for our customers.
A
So
that's
really
the
story
for
security
policies
is,
we
are
you
know,
making
these
policies
easy
to
use,
we're,
making
them
understandable
and
we're
making
them
comprehensive
enough
to
meet
all
of
our
customers
needs.
So
I
wanted
to
share
my
screen
some
of
this.
I
know
we've
talked
about
before,
but
it's
been
a
long
time,
and
so
I
think
it's
probably
worth
coming
back
and
revisiting
some
of
these
topics
just
just
to
re-share,
where
we're
headed,
hopefully
here
in
a
minute
you're
able
to
see
my
screen.
A: Okay, I'm looking at the direction page for our group. In here we have this matrix, and this matrix will change slightly because we're actually trying to get rid of the License Compliance analyzer, so that will go away. Of course, we've got our scan execution policies, and we've got our scan result policies, which we plan on renaming eventually. So the immediate roadmap here is to finish adding support for the other analyzers and make sure that scan result policies are supported at the group level, just kind of filling out those policies that we already have. And, you know, we're in the middle of working on a rule mode for scan execution policies.
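For reference, a scan execution policy is defined in YAML today. A rough sketch of the shape, with keys following the policy.yml schema as I understand it; the cadence, branch, and scanner choices here are invented for illustration and may differ by GitLab version:

```yaml
scan_execution_policy:
  - name: Nightly SAST and secret detection
    description: Illustrative sketch only; exact keys may vary.
    enabled: true
    rules:
      - type: schedule
        cadence: "0 2 * * *"   # run every night at 02:00
        branches:
          - main
    actions:
      - scan: sast
      - scan: secret_detection
```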
A: Things like that. I think you're all pretty familiar with our near-term roadmap, but there actually is a lot to be done beyond that. Just looking at our priorities page: I took all of the container scanning items out, and we still have 21 items here on our backlog. So we have a very big, very robust backlog, and, believe it or not, I actually tried to limit this, so there's more stuff that we can do beyond that.
A: So once we have scan execution policies really filled out, with support for all the scanners, and scan result policies are a little bit more matured, one of the next things that we want to do, now that the Compliance group is coming over (we wanted to do this already anyway; it just makes the coordination a little bit easier), is to start merging the compliance frameworks with the security policies.
A: You know, right now you can only really apply those compliance frameworks at the top group level. So we've got trade-offs on both sides, and we really want to bring those together and make it one really clean solution for users. We haven't done the design for all of this yet, but ideally we would show some of this back on the main page.
A
So
this
is
a
top-level
group
where
you
go
to
set
up
these
compliance
frameworks
and
if
I
click
in
on
here
right
now,
we
you
can
only
specify
the
compliance
pipeline,
but
ideally
we
would
push
that
information
back
up
too.
So,
once
you
can
filter
these
policies
by
compliance
framework,
we
would
be
able
to
say
for
this
framework.
A
Another
area
that
we
want
to
really
expand
on
is
the
usability
of
policies,
so
some
of
this
we
actually
have
designed
out,
I
think
so
right
now,
when
you
create
a
merge
request
for
a
change
to
the
policies.
Of
course,
it
doesn't
show
up
in
this
policy
list,
so
it'd
be
really
nice.
If
we
could
centralize
that
ui
and
show
you
know,
policies
that
are
pending
changes,
you
know
show
open,
merge
requests.
A: You know, how can we make that storage simpler, so that users don't have to think about associating it with a separate project? This would be one step towards that, because if we do move this to the database, we would still want a way for users to submit changes for approval and to show those. So this would be building out that approval UI as a preliminary step to potentially moving it to the database in the future.
A
Let's
see
other
things
to
think
about
longer
term
is
just
expanding
this
beyond
just
scan
execution
and
scan
result
policies.
So
those
are
the
two
policy
types
that
we've
started
with,
but
there's
actually
a
lot
more
that
we
can
do
here.
The
threat
insights
team
is
looking
at
contributing
in
a
new
vulnerability
management
policy
type
to
help
automate
some
of
the
workflow
around
vulnerabilities,
so
basically
automatically
dismissing
a
vulnerability
or
automatically.
A
Setting
a
vulnerability
to
be
resolved
if
it's
no
longer
detected
requiring
approval
before
changing
the
severity
of
a
vulnerability
or
acquiring
comments
when
you
dismiss
something
so
they've
they're,
actually
planning
to
contribute
into
our
group
for
a
lot
of
those
things,
although
eventually
we
might
build
some
of
that
out,
ourself
as
well,
and
then
we
have
still
other
policy
types
like
insider
threat,
type
policies.
A
The
easiest
to
understand
example
here
is
thinking
about
malicious
user
activity
in
get
lab
itself.
So
if
somebody
logs
in
from
texas
today
and
then
logs
in
five
minutes
later
from
australia,
that
would
be
really
suspicious
because
in
theory
they
shouldn't
have
been
able
to
travel
that
far
of
a
distance,
and
so
what
do
you
do
with
things
like
that?
At
that
point,
this
is
kind
of
where
that
alert
dashboard
comes
back
into
play
of
generating.
A
You
know,
alerts
of
suspicious
activity
that
we're
seeing
and
allowing
users
to
process
that
we
may
also
have
other
automated
response
actions
that
come
out
of
this,
like
block
the
user
from
signing
in
for
10
minutes
or
don't
allow
them
to
clone
any
repositories
for
the
next
hour.
While
we
go
triage
things
or
you
know
limit
their
access
to
all
projects
that
are
tagged
as
sensitive
and
things
like
that
that
you
know
might
be
appropriate
precautionary
measures
when
you
see
suspicious
activity
so
there'd
be
specific
events.
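The impossible-travel idea could be sketched as a simple speed check between consecutive logins. This is a hypothetical illustration, not anything in the product; the 900 km/h cutoff is an assumed threshold roughly matching airliner cruise speed:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371
MAX_PLAUSIBLE_SPEED_KMH = 900  # assumption: no one travels faster than an airliner

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def is_impossible_travel(prev_login, next_login):
    """Each login is (lat, lon, unix_seconds). True if the implied speed
    between the two logins exceeds the plausible maximum."""
    (lat1, lon1, t1), (lat2, lon2, t2) = prev_login, next_login
    hours = max((t2 - t1) / 3600, 1e-9)  # guard against division by zero
    return haversine_km(lat1, lon1, lat2, lon2) / hours > MAX_PLAUSIBLE_SPEED_KMH
```

A Texas login followed five minutes later by an Australia login trips the check; the same location an hour later does not.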
A
I
actually
used
to
work
on
an
insider
threat
product,
so
I
know
a
lot
about
anomalous
user
activity,
but
you
know
establishing
thresholds
like
if
a
user
suddenly
starts
cloning
all
of
the
organization's
repositories,
and
you
see
a
thousand
clones
in
the
course
of
an
hour
that
would
be
really
suspicious.
That
wouldn't
really
be
normal,
expected
ordinary
behavior
by
a
person,
and
it
may
be
that
that
person
is
doing
something
wrong.
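A threshold like the thousand-clones-in-an-hour example could be sketched as a sliding-window counter. This is a hypothetical illustration with made-up limits, not product behavior:

```python
from collections import deque

class CloneRateMonitor:
    """Flags a user whose clone count within a rolling window exceeds a threshold."""

    def __init__(self, max_clones=1000, window_seconds=3600):
        self.max_clones = max_clones
        self.window_seconds = window_seconds
        self.events = deque()  # timestamps of recent clone events

    def record_clone(self, timestamp):
        """Record one clone; return True if this pushes the user over the threshold."""
        self.events.append(timestamp)
        # Drop events that have aged out of the rolling window.
        while self.events and self.events[0] <= timestamp - self.window_seconds:
            self.events.popleft()
        return len(self.events) > self.max_clones
```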
A
It
also
could
mean
that
their
credentials
have
been
compromised,
and
you
know
a
bot
is
going
and
doing
something
on
their
behalf
and
then
peer
group
anomalies,
right,
like
gitlab,
is
a
great
place
to
look
at
this.
A
If
I
were
to
suddenly
go
out
and
do
a
lot
of
activity
on
a
project
that
none
of
my
peers
engage
with
again,
maybe
that's
legitimate
activity,
but
it
also
potentially
could
be
concerning.
So
there
are
a
lot
of
like
advanced.
You
know,
user
activity
policies,
as
we
think
about
helping
to
avoid
you
know
malicious
activity.
A: Maybe we could even proactively detect if you suddenly open a merge request with a whole lot of SAST findings, or we're able to detect that there's some sort of malicious code there. Maybe your account has been compromised, and that should be raised for further review. And then there's the anti-abuse side. Both of these actually relate to other groups inside of the Sec section, but on the anti-abuse side there would be pipeline policies.
A
You
know
specifically
pipelines
that
tend
to
either
contain
malware
or
run
malware
or
contain
crypto
mining
software.
Or
you
know
how
do
you
protect
your
runners
from
abuse?
A: We do have on our roadmap to bring what is currently License Check into this editor. In fact, we have actual mocks for that; I should show the actual mocks instead of the prototype.
A: And then, lastly, we don't have mocks for this one, so I'll switch over to the prototype, but we have status checks. I don't know if all of you are familiar with that capability in the product, but this would be the ability to potentially block, or require approval on, a failed status check. That's again something that the Compliance team owns and manages and has built out in the product.
A
You
can
see
this
when
you
set
it
up
on
the
merge
requests
page.
You
can
configure
an
external
api
to
reach
out
to,
and
it
will
basically
wait
for
that
api
to
reply
with
either
a
passed
or
failed
message
saying
if
it
succeeded
or
not.
So
this
is
really
nice
because
it's
super
flexible,
your
api
can
do
any
kind
of
logic
or
custom
checks
that
you
care
to
code
in
the
way.
This
shows
up
right
now
on
the
merge
request
itself.
A
Is
we
have
a
separate,
mr
widget,
for
the
check
you
can
see
this
one
pending
because
I
just
called
out
to
an
arbitrary
url.
So,
of
course,
it's
never
going
to
reply,
but
right
now
this
replies
back
either
passed
or
failed,
but
there's
no
way
currently
to
block
the
merge
request
or
require
approvals
on
a
pending
or
a
failed
status
check.
So
again,
ideally,
this
would
be
integrated
with
the
security
policy
editor,
because
this
is
very
commonly
being
used
as
a
security
and
compliance
gate
for
merge
requests.
A
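The external service's side of such a check could be sketched like this. The payload shape and the ticket-reference rule are assumptions for illustration, and in practice the verdict is reported back to GitLab through the status check response API rather than returned directly:

```python
import re

# Assumed convention: MR titles must reference a ticket like "SEC-1234".
TICKET_PATTERN = re.compile(r"\b[A-Z]+-\d+\b")

def evaluate_status_check(payload):
    """Return 'passed' or 'failed' for an incoming status-check payload.

    `payload` mimics the merge request details GitLab POSTs to the external
    service; only the fields used here are assumed, not the full schema.
    """
    title = payload.get("object_attributes", {}).get("title", "")
    return "passed" if TICKET_PATTERN.search(title) else "failed"
```

Any custom logic can go in that function, which is what makes the mechanism so flexible.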
B: Thank you, Sam, for the overview; that is a great look into the future. I am excited about all the new policies we're creating. This question is about something that's very far in the future, but pipeline policies sound like they overlap a little bit with Monitor alerts, in that Monitor alerts can alert people to CPU usage and things like that. What are the differences in use cases there?
A: You know, that's the ITOps kind of persona, someone monitoring to make sure that everything's up and running in production. Here we would be monitoring the pipelines, and that's actually more in line with what the anti-abuse team is doing, because they're trying to write rules to detect abuse, especially for GitLab.com, where we've got a SaaS instance, though this applies as well to self-managed users who have public-facing instances. It's detecting things like: okay, somebody signed up for a free GitLab account, and now they're using our free pipeline minutes to run miner software, and they're basically making 50 cents while costing GitLab 10 dollars in CPU time. I'm making up these numbers, but that's generally the idea. Malware in general comes into play there as well. We do a good job of isolating those, but we still see things that are malicious in nature, whether they're hosting, or attempting to host, malicious code, or calling out to malicious...
A
People
are
very
creative
and
if
they
can
find
a
place
to
run
something
for
free,
there's
a
good
chance
that
there
are
people
out
there
trying
to
abuse
it.
And
so
you
know
these
rules
would
attempt
to
to
curb
that
abuse
and
allow
users
to
determine
how
they
want
to
respond.
When
those
things
are
detected,
with
the
goal
being
to
put
that
in
their
hands,
because
you
don't
want
to
just
outright
block
things
because
you
might
end
up
blocking
something
that
is
in
fact
legitimate.
A
But
you
know
giving
them
those
controls
to
define.
Where
is
that
line
for
them
at
what
point
do
they
start
blocking
at
what
point?
Do
they
just
alert
and
then
manually
follow
up
at
what
point?
Do
they
rate
limit
or
like
reduce
things
before
just
blocking
out
right
and
what
things
do
they
completely
allow?
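Those graduated options (allow, alert, rate limit, block) could be modeled as user-configured cutoffs on an abuse score. A hypothetical sketch with invented defaults:

```python
def choose_response(abuse_score, thresholds=None):
    """Map an abuse score in [0.0, 1.0] to a response action.

    `thresholds` stands in for user-configurable cutoffs; the defaults
    below are invented for illustration, not product behavior.
    """
    thresholds = thresholds or {"alert": 0.3, "rate_limit": 0.6, "block": 0.9}
    if abuse_score >= thresholds["block"]:
        return "block"       # outright block the activity
    if abuse_score >= thresholds["rate_limit"]:
        return "rate_limit"  # throttle rather than block
    if abuse_score >= thresholds["alert"]:
        return "alert"       # notify and follow up manually
    return "allow"
```

The key design point is that the cutoffs belong to the user, so each organization decides where its own line sits.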
A
And
so
again
this
would
be
focused
more
on
the
pipelines
but
from
an
alert
dashboard
perspective.
Like
a
front-end
perspective,
there
definitely
is
a
lot
of
overlap
there.
The
main
difference
would
just
be
again
the
same
as
we
encountered
before
is
just
the
persona
is
different,
and
so
you
don't
necessarily
want
to
see
your
production
monitoring
together
with
your
security
and
compliance
monitoring.
A
Yeah
great
question
all
right:
well,
yeah
thanks
for
thanks
for
letting
me
talk
for
so
long.
I
hope
this
was
useful
to
you
personally,
I'm
very
excited
about
our
our
future
here
and
the
direction
that
we're
headed.
I
feel
like
this
is
a
huge
differentiator
for
the
product
and
for
the
company
as
well,
and
a
huge
value
add
for
users.
I
get
feedback
all
the
time
on
this
and
it's
generally
very,
very
positive,
and
both
on
you
know,
with
all
of
the
capabilities
that
we've
introduced
up
to
this
point.