From YouTube: CNCF SIG Security Meeting 2019-12-11
A
So I will put the notes in the chat for people who are joining. We have a tradition of everybody adding themselves for attendance, and I'd also like to ask the regular members: is anybody who's sitting in front of a computer willing to scribe? We just take notes live in a Google Doc, so that people who can't attend can get the highlights and decide whether they're going to watch the videos. We generally post the videos; one of our fabulous staff people at CNCF posts them shortly afterwards, usually in the next day or so.

I also want to let the regular group folks know that a small group of us has been talking about going back to what we were doing in 2018 and early 2019, when we had presentations every other week and working group meetings every other week, which were more discussions, check-ins, and talking through different proposals. So this is unusual, because we've had two presentations in a row. But that's because Brendan couldn't be here; a Kubernetes Forum is happening in some place that he had to travel to. So we're accommodating Brendan, who's been active and had a discussion topic that he wanted to cover, and because it's before the holidays, I didn't want to just move this to January. So if there's time at the end, we'll cover some team logistics and discussion topics, things that are top of mind. But if needed, we will spend the whole time discussing Cloud Custodian.

I'm excited to have Kapil here to do a presentation. I talked to one of your colleagues on Slack and suggested that the presentation be twenty or thirty minutes, so that we would have time for discussion; I want to allow as much time as people have questions and things they want to discuss. So thanks for posting the slides in the chat. I will just wait a moment for people to add themselves for attendance before we get started. Do I have a volunteer scribe?

And then, at the end of the agenda, we have a place for announcements: please type them into the agenda, or feel free to put them in the chat if you aren't able to use Google Docs. Then we don't necessarily have to take time for the announcements if the discussion is taking over. So with that, I will ask Kapil to introduce yourself and Custodian and tell us a little bit about who you are and why you're here.
B
C
Thank you for catching that, because I started to speak and then started to check. Thanks, everybody; it's a pleasure. My name is John Mark Walker, and I run the open-source program office at Capital One. Just to do a little bit of stage setting here: this is something that we've been talking about for, I don't know, at least a year now, probably since before my time, before joining Capital One. But since I joined here last August, this has moved top of mind for me, and it's very important to me that we improve upon previous open source efforts, and this is one way to do that. One of the things we're looking to get out of this session is some sort of working relationship, as well as next steps we can follow, so we can make sure that we get things across the finish line at some point. I'll be looking for direction from the rest of the SIG on that at the end of this. But we're finally at the point where we can say: yes, we absolutely want to do this. It took a lot of time and effort to get here.
B
Thank you. So what is it? Cloud Custodian is a stateless rules engine that is intended to help customers and users manage their public cloud accounts at scale. By way of background: when I was at Capital One and we were first going to the cloud, we recognized that we wanted to automate as much as possible of dealing with the regulation, security, and compliance aspects around cloud. The natural tendency for organizations, as they start their journey in the cloud, tends to be writing one-off scripts around these different requirements, and extrapolating forward from that journey into the future, I was seeing a place where we would have hundreds of these random scripts, and there would be questions about who deployed them, how well they were tested, and what the operations around them were.

That was going to be a bit of a mess, so one of the first things was to step back from that and try to look at how to deal with that problem holistically. Custodian tries to be that Swiss Army knife around all the different concerns an organization may have around their cloud footprint, be it managing security, compliance, or cost optimization. To do that, it integrates very deeply with whatever the cloud provider's native tooling is: Google Cloud Functions, Amazon Lambda, AWS Config, whatever those native capabilities are, it tries to integrate deeply and expose them to users through its DSL, to become the easiest way to consume some of these new provider features. At its heart, it's effectively a set of policies that are written in a YAML file.
B
Custodian is agnostic to where it's being executed: it could be executing in a container, on a Jenkins box, or in a serverless function. It's agnostic to its execution environment. But when a user does specify an execution environment, Custodian will do the work of actually provisioning all the event streams and the serverless functions behind the scenes for them.
B
So, the actual policy: in this case we're looking at Amazon's Elastic Block Store volumes, and we're going to go ahead and filter for any volumes that are not attached to an instance and that don't have a particular tag, "retained". So we're filtering the set of resources that this policy is targeting down to the things that we're looking for, and then we'll go ahead and take an action, or a set of actions, on them. In this case, we're actually marking them for an action in the future.
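A minimal sketch of the kind of policy being described, in Cloud Custodian's YAML DSL; the tag name, operation, and delay are assumptions for illustration, not the exact values from the slide:

```yaml
policies:
  - name: ebs-unattached-cleanup
    resource: ebs
    filters:
      # volumes with no attachments, i.e. not attached to an instance
      - Attachments: []
      # and that don't carry the "retained" tag
      - "tag:retained": absent
    actions:
      # mark the volume for a future operation rather than acting immediately
      - type: mark-for-op
        op: delete
        days: 7
```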
B
Our policies tend to be very fine-grained: users create policies with our vocabulary of very fine-grained filters and actions. So we might have an action like stopping an EC2 instance, which we might use for off-hours, or which you might use in response to a security event as a remediation activity or incident response. Continuing forward, the other key aspect, around getting transparency into what these policies are doing, is having a very rich set of outputs.
B
You get resource dashboards on which policies are compliant or not compliant, as well as the ability to take the raw logs from a policy execution and feed them into, say, Elasticsearch. And then, of course, the other key aspect of doing any sort of remediation activity inside a cloud environment in this real-time fashion is being able to send notifications to users. So we enable sending out to Slack, integrating into downstream Splunk, and sending out email.
B
In this case, we've got a policy looking for EC2 instances that are not appropriately tagged, and then marking them to be stopped at a future date. In combination with a notification, this gives the end user who provisioned that resource a chance to actually remediate it themselves; if not, then the policy will come back through. A lot of the concern in governance here is that, in an organization, we have hundreds of application teams, and in this context what we're really looking to do is to give a centralized team, whether security, operations, or compliance, a ground-based assurance that, regardless of what tools the application teams are using to provision, via Terraform, via CloudFormation, via resource templates, their cloud environment is conforming to a known baseline that they're defining in these policies. So what can you do with this? Lots of things.
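The tag-enforcement workflow just described might be sketched as follows; the specific tags, delay, and mailer queue URL are illustrative assumptions, not values from the talk:

```yaml
policies:
  - name: ec2-missing-tags
    resource: ec2
    filters:
      # instances without an owner tag
      - "tag:owner": absent
    actions:
      # schedule a stop several days out, giving the owner time to remediate
      - type: mark-for-op
        op: stop
        days: 4
      # notify via the c7n-mailer queue (placeholder queue URL)
      - type: notify
        subject: "EC2 instance missing required tags"
        to:
          - resource-owner
        transport:
          type: sqs
          queue: https://sqs.us-east-1.amazonaws.com/111111111111/custodian-mailer
```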
B
We
have
the
authors
of
some
of
the
tools
and
containers
contributors
didn't
originally
think
of
so
looking
at
sort
of
what
does
it
look
like
to
run
and
deploy
this
thing,
and
so
the
it's
a
one-line
install
it's
a
stateless
engine,
so
it
as
far
as
getting
started.
I
you
can
it's.
You
know
you
can
doctor
and
doctor
run
it.
B
You
can
pip
install
the
tool
itself
is
written
in
Python
and
then,
of
course,
you
have
these
rich
vocabulary,
execution
modes,
that
the
tool
actually
provision
and
hook
up
the
event
streams
for
B
The tool also abstracts out where it's getting its data. The policy execution modes define where the policy is running, and the policies themselves are fairly isomorphic to whatever that location is; the policies can also pull their source data from different places. So in some cases, a given execution mode will just take whatever is in the event stream; in some cases, it'll go to the describe API calls that are available from the cloud, like the Gets; and in some cases, it'll use a CMDB resource database as the source of the information to start processing.
E
Hi, I'll do a quick intro. My name is Andy; I'm an engineering manager at Microsoft, and over the past year and a half we've been contributing heavily to adding support for Azure to Cloud Custodian. One of the big components was having compliance as code, and this has been really effective for customers we've worked with and internally at Microsoft. Having your compliance policies written and stored in YAML files gives us the ability to version them and to actually go through a good pipeline and process for deploying them.
E
We've built a lot of tooling in Cloud Custodian around this as well. We have a tool called policystream that allows us to look at a git repository's history and compare the changes in the policies over time, and all of these have effectively smoothed the road to compliance. As far as how you actually end up calling these policies, we've seen integrations on the policy deployment side with Drone, Jenkins, and Azure DevOps, and this has effectively been a really important component of going forward in this space.
E
Now, in addition, you can always run Custodian in a VM or a container, whatever you choose that's the best fit for your situation. In particular for Azure, we're leveraging Azure Functions as the function-as-a-service offering, and we're actually able to subscribe to events that are happening inside your Azure subscription via Event Grid, and we trigger off of those events to perform some action. This is an example of a simple policy around Key Vault: when a Key Vault exhibits a write event, we look at the email on the tag here and actually go ahead and tag it. This is a really important scenario, just to help track ownership of resources: as developers, and even in production, people are deploying a lot of these resources, and being able to map a resource back to who the ultimate owner is, is really helpful.
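A sketch of what such an event-driven ownership policy might look like with the Azure provider; the event fields and tag name here are assumptions based on the description, not the policy from the slide:

```yaml
policies:
  - name: keyvault-tag-creator
    resource: azure.keyvault
    mode:
      # run as an Azure Function triggered by Event Grid
      type: azure-event-grid
      events:
        - resourceProvider: Microsoft.KeyVault/vaults
          event: write
    actions:
      # tag the resource with the identity that performed the write
      - type: auto-tag-user
        tag: CreatorEmail
```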
E
We can go to the next slide. I added a little flow here just to help with the visualization. Azure subscriptions emit activity logs, which we subscribe to through Event Grid, another Azure service, and we funnel this into an Azure queue, which is a mechanism that lets us deliver this really anywhere. One of the hosting options I alluded to was Azure Functions, which will go and listen to the queue messages there.
E
That's where Custodian actually executes. The outputs are potentially stored in Azure Storage there (we have some other options), and then a lot of the metrics, executions, exceptions, and anything else around the monitoring of the function is stored in Application Insights, which falls under the Azure Monitor umbrella. So we have a really nice flow for customers that are running Custodian in production, where you can have all the logs integrate right into the native Azure solutions, but you also have the flexibility to output elsewhere.
E
One thing to really point out is where Custodian fits into the grander picture for Azure, in our context: we were looking for something that would complement all the other native services. You can essentially draw a very similar diagram for all the other cloud providers we're associated with, showing where Custodian fits in, in conjunction with all the other governance tools that Azure actually supports natively. That's very important for customers we've worked with, because they want to use their native tools. But there's always a point where, as their governance implementation matures, they're looking for more customization; and also, when you look at customers that have more multi-cloud deployments, having something that can actually unify that, and that is consistent across all the clouds, helps with their governance story.
B
Thanks. Firstly, most of our key GCP contributors are actually out of Eastern Europe, so they weren't able to attend, but the general notion is that the policies look like this. This is a policy for GCP that, any time certain instances come up, will effectively check whether there's a quarantine tag and, if not, go ahead and stop the instance. Now, it's important to keep in mind that the workflow Andy showed, that flow diagram, is also present for GCP. All of this wiring happens for you; the user doesn't have to do it.
B
Custodian handles the conversion of all of that. When you run the custodian command line and give it this policy with an execution mode, it will go ahead and do all the wiring and provisioning for the user, so that they don't really have a lot of DevOps responsibilities as far as what they need to do for any of it.
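A sketch of the GCP quarantine policy described above, using the GCP provider's audit-log mode; the audit-log method name and label key are assumptions for illustration, and the policy follows my reading of the description (stop instances that lack a quarantine label):

```yaml
policies:
  - name: gcp-stop-unquarantined-instances
    resource: gcp.instance
    mode:
      # trigger from the audit log entry emitted when an instance is created
      type: gcp-audit
      methods:
        - v1.compute.instances.insert
    filters:
      # instances that do not carry a quarantine label
      - type: value
        key: labels.quarantine
        value: absent
    actions:
      - stop
```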
B
But
the
I
think
it's
a
sort
of
switching
a
forum
so
think
you
throw
in
the
AWS
like.
So
this
is
ends
up
being
a
super
powerful
capability
from
a
government
perspective,
a
lot
of
concern
is
geared
towards
sort
of
from
a
real-time
perspective
is
geared
towards
for
these
detective
controls.
You
know
anything
that
users
can
express
in
via
the
IM
language
of
their
provider.
We
would
go
ahead
and
recommend
they
do
that.
B
First,
a
lot
of
IM
decision
is
maybe
not
as
flexible
for
some
of
the
nuances
that
people
want
to
express
in
policies,
but
so
this
ability
to
sort
of
introspect
the
API
call
stream
that's
happening
against
their
provider
infrastructure
and
real
time
to
make
sure
that
the
things
that
are
being
created
are
compliant
to
policy
ends
up
being
a
really
powerful
capability
group,
and
so
just
again
as
another
example.
Sort
of
integrating,
with
whatever
the
cloud
providers
native
capabilities
are
one
example
here
is
with
AWS.
B
With AWS Config, we're able to take a given policy, simply switch out the mode to config-rule, and have it deploy as a custom Config rule within that native service. From a multi-cloud perspective, across the different providers, we cover off on the key feature set: the API subscription and observation capability exists natively across multiple providers, and then of course logging, metrics, and multi-account support exist as well. Custodian is also an umbrella project around several different tools that help users with automation or operations.
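Switching an existing policy into AWS Config, as described, is mostly a matter of changing its mode; a sketch, with a placeholder IAM role ARN and an assumed example filter:

```yaml
policies:
  - name: ec2-unencrypted-volumes
    resource: ec2
    mode:
      # deploy this policy as a custom AWS Config rule
      type: config-rule
      role: arn:aws:iam::111111111111:role/custodian-config  # placeholder
    filters:
      # instances with an unencrypted EBS volume attached
      - type: ebs
        key: Encrypted
        value: false
```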
B
c7n-org is our parallel multi-account, multi-subscription, multi-project execution tool, which makes it easier to take a set of policies and execute them in parallel across accounts, regions, etc. The other component to that is our notification system, c7n-mailer, which can be deployed as a serverless function or run within a container.
B
Just to give a little bit more flavor around the operations, cost, and security aspects: this is a policy that, in the same way the earlier policy was tagging Key Vault creators, subscribes to any time someone creates an S3 bucket and goes ahead and adds the creator as an owner tag on that resource. From a cost-savings perspective, this is mostly about the ability to look at a resource.
B
Looking at 2019, we've had about 100 contributors merging about 750 pull requests, and this chart breaks it out by which particular provider or feature they were working on; it's pretty evenly split across the core and the different providers. We've been starting to pick up on some of the Kubernetes stuff, and I can talk to some of that on our roadmap.
B
As
far
as
this
sort
of
breaking
out
by
companies,
this
is
sort
of
the
current
breakdown
as
long
as
the
top
contributors
and
who
are
also
all
those
soften
triggers
are
also
maintained.
Your's,
as
well
from
a
principals
perspective,
you
know,
Presidium
tries
to
focus
on
being
operationally
simple
a
lot
of
these
years.
You
know
you
know
it.
You
get
tends
to
be
using
a
lot
of
enterprise
context,
but
it
also
tends
to
be
used
sort
of
it
in
smaller
shops.
Sort
of
just
one-off
cost
cost
optimization.
B
So
we
want
to
be
fairly
simple
to
run
and
operate,
and
so
we
try
to
integrate
with
the
native
services
as
much
as
possible,
partly
to
alleviate
any
operational
burden
from
from
an
end
user.
We
want
to
sort
of
keep
the
liqueur
fairly
simple
and
minimized
from
a
vocabulary
that
we're
at
reducing
the
users
like
we
have.
B
You
know
you
know,
probably
a
thousand
different
filters
in
action,
the
capillaries
to
rethink
of
resources,
and
so
just
trying
to
make
those
fairly
orthogonal
so
and
simple
for
users
to
get
understanding
with
all
this
stuff
is
thought
of.
As
you
know,
the
schema
validation
and
the
dry
run
capability
that
we
run
in
CI.
You
know
when
you
say
some
schema.
All
those
filters
and
actions
and
capabilities
are
automatically
documented
out
from
the
code
into
our
dockside
as
a
reference
documentation.
B
Just
to
as
we
expand
out
the
number
of
features
we
have,
we
want
to
make
sure
that
we're
being
being
simple
or
on
cold
starts
in
CLI
execution
as
far
as
not
needing
to
load
anything
more
than
we
need
to
some
structured
logging.
This
is
sort
of
our
internal
to-do
list
monthly
to
gloss
over
it
so
feel
free
to
ask
questions.
B
As
far
as
q-branch
integration,
we
have
a
humanized
provider.
It's
been
it's
pretty
minimal.
At
the
moment,
it's
sort
of
the
Liao's
for
sort
of
home-based
querying
on
to
manage
resources
and
sort
of
doing
it.
Sourcing
policies
on
on
most
of
the
terminal
pins
across
the
application
namespace
as
well
as
CR
DS
we'd
like
to
go
ahead
and
ramp
that
up
as
far
as
building
out
our
innocent
parallel
facilities
and
today
kissing's
also
been
fairly
opinionated.
B
How
people
deploy
now
we've
seen,
Jenkins
and,
and
you
know,
faregates
and
cuba
Nettie's
and
get
lab
CI
and
so
I
think
Cabernets
offers
us
a
good
baseline
to
actually
start
having
an
opinionated
deployment.
As
far
as
what
is
good
operations
and
good
deployment
of
trip
night
custodian
management
framework
look
like
on
top
kubernetes.
D
B
So, a great question. Open Policy Agent, I think, has some complementary features. I think OPA does a really great job of enabling edge-based decisioning, very decoupled from data sets. The premise for us is that we wanted to have that real-time behavior around the cloud environments; that's where we started off, while OPA started off around Kubernetes as far as its tighter integrations, and it's been around in this ecosystem for a long time. For us, this is about growing out our ability to cover the full space around infrastructure.
B
If
we
look
at
we're
sort
of
opens
gone
as
far
as
its
decisioning
a
lot
of
its
going
out
to
sort
of
edge
like
you
know,
ssh
integration
or-
and
so
it's
a
very
decoupled
engine,
and
so
that's
been
really
nice
one
of
the
things
that
sort
of
separates
the
two
is
that
Oppo
wants
to
sort
of
tipo
full
a
full
inventory
in
memory.
As
far
as
be
able
to
do
that,
decisioning
self
doesn't
really
is
typically
going
to
pull
pull
on
demand,
any
additional
information
it
needs
from
third-party
data
sources.
B
So
a
lot
of
the
filters
that
we're
using
or
like
are
going
to
do
additional
API
calls
into
a
cloud
environment
to
to
sort
of
verify
things
so,
for
example,
validating
data
Abe
wants
config
is
valid,
requires
us,
terrifying,
the
security
groups,
the
image
etc
around
it,
whereas
OPA
typically
trying
to
do
a
fairly
localized
decisioning
around
whatever
it
has
in
memory.
At
that
moment,.
D
B
You can run it incrementally across a partial subset, or run a dry run on a periodic basis; it doesn't have to hook up to the event stream. But we're always operating against whatever the control plane APIs and event streams are, whereas with OPA, I think you can do some of that.
B
As
far
as
what
the
built
in
integrations
are
today,
like
the
only
place,
that's
I
think
a
valid
statement
is
really
around
sort
of
kubernetes
specific
integration
as
far
as
gatekeeper,
but
if
you
look
at
sort
of
in
the
wild
where
it
goes,
it's
typically
typically
with
a
smaller
footprint
out
towards
the
edge
and
I
think
there's
some
I
have
some
national
sort
of
research.
Experiments
I'd
like
to
do,
and
now
that
oppa
has
sort
of
been
has
some
support
for
compiling
I
got
a
lesson
that
it
might
be
interesting
to
explore
additional
integrations.
D
That's not a very favorable use case, so I would see that as one distinction between the way OPA and Custodian get used. And yeah, OPA started off more towards Kubernetes, but it's a general-purpose policy engine, so I could see a way of integrating it with some of these use cases where you're putting Custodian with AWS or GCP. So yeah, there's overlap with those kinds of use cases, like admission control, but performance-wise, like running on the edges.
A
Alright, I'll dive in. So I noticed your GCP support is in beta. Way early on, we had an issue, meaning we file issues when we want to invite people to talk about things, and there was an idea that we would invite Cloud Custodian and Forseti, because at that time, if I recall correctly, Cloud Custodian was for AWS and Forseti was just for GCP. And now I was excited to see that one of the platforms had embraced cross-cloud.
B
There have been a number of sessions over Custodian's lifetime with all the cloud providers, and there have been some contributions from Google, though not directly from the Forseti team. There's some similarity and some delta; I looked at Forseti way back, before I started working on GCP support a few years ago. I think, especially moving into the CNCF, there's a strong opportunity there for getting some contributions from the Forseti team.
B
If
I
was
going
to
compare
and
contrast
sort
of
what
they
do
today
for
SETI
is
typically
doing
sort
of
a
running
poll
of
all
the
resources
or
integrating
with
cloud
asset
inventory,
running
a
set
of
rules
and
dropping
it
into
a
database
doesn't
run
from
a
architecture
perspective
around
what
what
our
goals
are.
As
far
as
offering
sort
about
real-time
response,
it
wasn't
something
that
for
SETI
was
really
doing
when
we
looked
at
it
and
I,
don't
think
it's
added
that
in
the
interim
sense,
but
potentially
it's
on
the
roadmap.
B
So
what
we
sort
of
targeted
initially
was
this
capability
around
sort
of
doing
that?
Real-Time
integration,
so
hooking
up,
you
know
called
audit
logs
to
pub
subtopics
to
Google
Cloud
functions
and
enabling
both
these
policies
to
be
employed
out
there
to
be
able
to
do
real-time
response
on
the
order
of
a
few
seconds
to
to
respond
to
that's,
as
opposed
to
getting
a
database
on
a
polling
basis.
Every
few
hours
that
that
has
some
set
of
things
that
need
to
be
remediated.
B
But
to
answer
the
question:
yes,
we
would
love
to
have
for
additional
contribution
from
core
study
team
they've
done
a
lot
of
work,
around
sort
of
a
set
of
a
static
sort
of
rules
around
what
is
for
the
best
practice
in
their
environment,
and
so
that's
something
that
we
would
love
to
recover
it
with
and
potentially
be
able
to
express
and
get
sodium
policies
going
forward.
Right.
A
And that kind of leads me to, well, thank you for the compare and contrast of the architecture, because I had looked at it a couple of years ago, but that was a while back. One of the things that reminds me of: it's been a while since I did application development, but I remember in the early days of Lambda there were a lot of issues. I haven't heard that Amazon has SLAs around it.
B
A
So people can decide how hundred-percent they want to be, based on the particular compliance they're doing. Maybe, if they're turning off an instance during off hours, it's not that important if 0.001 percent of the time it doesn't happen; but if it's "maybe my bucket is wide open to the Internet", that's not okay, right?
B
And
and
some
of
the
execution
as
we
integrate
with
our
doing
the
extra
behind-the-scenes
work
as
far
as
you
know,
like
say
where
you're
deployed
as
a
config
role
in
AWS
behind
the
scene,
that's
you
know
doing
the
event
stream
as
well
as
doing
polling
and
then
feeding
that
information
back
to
us
so
from
a
policy
perspective
that
wouldn't
need
to
be
sort
of
duplicated
as
two
policies,
one
as
a
you
know:
full
fleet
evaluation
and
one
as
a
vent
evaluation.
But,
generally
speaking,
you
know
how
befitting
users
typically
are.
B
They'll write a policy, run it against the whole fleet, and then they'll start adding in the event basis that they want to execute on. So even in development, they're switching between modes seamlessly as they try things out, right.
A
B
So Health is interesting; let me make sure the context of that is clear. For general cloud infrastructure, the ability to filter with generic queries allows you to operate on many different accessible parameters. So, we brought up the example around cost optimization.
B
We've seen operational environments where the auto-scaling group is just really trying to spin up instances and it can't, because it's misconfigured, and so there's support for detecting that: actually going through and validating it, as well as subscribing to additional event streams around launch failures for instances. Additionally, in AWS, where it started off initially about four years ago, it definitely has the richest integration.
B
If there's an outage incident status-page update, you can go ahead and notify, say, the application teams whose resources are affected, and so there's rich capability around doing operational work with Custodian. Those capabilities tend to be divided differently across the different clouds, according to what those clouds expose natively that Custodian can use as an event stream.
C
B
That's a great question. So Custodian certainly has built-in safety belts. Obviously, any time you're doing remediation at scale, or doing mass operations and running Custodian at scale, it's a good best practice to have some sort of safety-belt capability. A lot of it derives from this whole notion of compliance as code: most teams will set up a CI infrastructure, doing, say, a Jenkins dry run.
B
It's
simply
going
to
get
show
you
the
set
of
resources
that
you
filtered
to
and
then
those
can
post
back
as
sort
of
a
to
the
pull
requests
or
as
a
comment
with
regards
to
sort
of
what
this,
what
these
policies
are
going
to
affect
and
and
II
had
touched
upon
sore
that
tool.
We
have
called
policy
stream
and
policy
stream
is,
you
know,
we
talked
excess
code,
it's
all
about
sort
of
workflows,
but
one
of
the
things
that
policy
stream
does
for
us
is.
B
One of the things that policystream does for us is flag, with regards to policy execution, when a change would affect a larger population than you might expect. And then, of course, from an exception perspective (one of the common rules I've noticed is that every rule has its own exception), there's the ability to pull an exception list around particular policies and source it from URLs, from S3, from JSON feeds and CSVs, be it around "this particular image or instance is exempt from these particular policies", using external integrations to define what those exemptions are.
B
Yeah, so as far as doing whole-fleet evaluation, you can very much do that: you can evaluate the whole fleet, determine that, say, these four instances don't match this particular set of criteria, and then look at that as a whole. And then you can actually do an aggregate query and an aggregate filter that says: if this is N percent of the population, or if there are more than 50 of these things, then proceed to action.
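Related to the safety-belt discussion above, Cloud Custodian policies can carry resource-count guards; this sketch shows the general idea, with the threshold value chosen arbitrarily for illustration:

```yaml
policies:
  - name: stop-untagged-instances
    resource: ec2
    # abort the run if the filters match more than 5% of the fleet,
    # guarding against a bad policy acting on far more than intended
    max-resources-percent: 5
    filters:
      - "tag:owner": absent
    actions:
      - stop
```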
A
We've just got ten minutes left, so I wanted to make sure to allocate some time to answer your question, which is: what's next? Yes. And so I would like to draw everybody's attention to what I added to the agenda; I'll actually do this backwards a little bit.
A
Yes, I have been in a bunch of these meetings, and been around for a little while, and been the recipient of "I don't know what's going on" and "how do we become a CNCF SIG", which lasted many months. I've been a fan of this markdown flowchart maker, so I took a discussion from last summer that was in a, you know, diagram tool, and I turned it into a flowchart, and filtered in a bunch of other discussions that I've heard in various meetings. So this is not adopted, but this is my attempt to write down what I think everybody is talking about, and nobody has said it's terribly wrong, so I thought I would just kind of tell everybody about it. I don't think we have time to have a deep discussion about it, but everyone in the group, and the folks from Cloud Custodian:
A
You should feel free, in fact obliged, to look at this carefully, tell me where I'm wrong, and say where it doesn't make any sense, and we can elaborate. Because I think there is an idea that the TOC would like the SIGs to participate in making it so that the TOC has more bandwidth, by kind of preflighting all of the "does this project fit" questions without making a decision about a recommendation, so that if projects are really clearly not a fit, they get quick feedback.
A
All right, there's a lot of detail that would need to be worked out, but the truth is that the spirit of this is: there's a long queue of projects that would like to present to the TOC, and then they present to the TOC, but then there's a Q&A discussion, a due diligence thing, that there just isn't enough bandwidth on the TOC to do, and so this is an effort to parallelize it. So this is my understanding of what I think we're doing, so we're going to go forth.
A
We can go forth down this path. If you don't like this path, then we can change it, but by default we'll go with my understanding of how things work, which is that, basically, you were sent to us, to say: "hey, engage with the SIG here, I'm just becoming part of the CNCF." So the different ways that you can engage: of course, you know, we're excited to have you present here.
A
We'd also encourage you to, like, you know, participate, and then we have a new members page, and that can help speed things up; that's sure to help us get through our backlog. Also, I'd encourage you to go look and see: are there other projects where Cloud Custodian could be useful? I mean, you're already connected to Kubernetes, of course, but maybe there are some other projects where it would make sense for them to use Cloud Custodian, or for you to use them, or something.
A
So that's, I think, just sort of a generally good idea, because those questions may come up if there is an obvious connection that may not be obvious to you, if you're not CNCF insiders, yeah, or even obvious to all of us. So there's a due diligence process, which right now is multiple due diligence processes, which we have on deck to sort out, like, in the next weeks.
A
If you hadn't arrived, we would be working on clarifying that process, so I just wanted to let you know. And then, as part of that, what we've talked about informally is that it would make sense, to me and a couple of people I've talked to, that as part of that process we have this self-assessment: a document that is generally produced by the project as a way to explain what the project is and what its security posture is.
A
B
I have a few questions there. I'm just trying to understand what the process is, because I've seen projects go through security assessments, projects that were already in the CNCF, so it seems like it was independent from, sort of, the inbound activity towards the TOC. And so I'm just trying to understand: is that being defined as a prerequisite? And, having looked at that process, it looks mostly like it's going through the CII best practices badge status stuff. Is there additional stuff beyond that?
A
B
A
D
A
We shouldn't require people to do an assessment yet. We wanted to do assessments of the projects that were already in the CNCF, so we invited OPA to do our second assessment, which was a very collaborative process. Ash is here, who really helped, you know, steer that process so that we could refine the process. And so our goal is to, well.
A
A lot of it is prioritized around projects that come to us or that we do outreach for. And so we're exploring whether we will have a lighter-weight one, whether the security assessment will be, you know, either recommended or required, or "well, just think about doing this sometime, you know," or anywhere in between. And so that hasn't been.
A
That's not decisional, and you've sort of caught us in the midst of having this process. And we basically have said that, until we've done five security assessments and we've evaluated our process of doing so, and we know we can say, if somebody comes to us or the TOC says "do an assessment," "yep, it'll be done in N weeks" (we're shooting for N equals three right now), so that we can assert that it will definitely take this amount of time, we're not going to make it a requirement.
C
F
A
A
We'll wait until your process is done. So, I wanted to answer your question about the self-assessment.
The CII best practices badge is hardly the most important or biggest part of it, although it's a basic checklist of, like: if you're not doing 90% of the basic stuff, we're a little worried about you. And it's less that you need to do it, because there certainly could be projects accepted into the sandbox that are experimental, and we're excited about them, and we're just like: oh, just FYI, this is a set of things
A
you ought to be doing, right, and just queue it up before incubation or whatever. But it's more that, like, that's the least of your concerns, and my guess is that that would not be arduous for any project that is security focused or at the maturity of Cloud Custodian. The important part, most of this self-assessment, is just, like, some common format for "what does your project do?" The big part of it is the security analysis: what's your threat model? This is something that is rarely, like, really surfaced in a concise way
A
for open source projects. And so this is the meat of it, which is that you explain to us what you think your security posture is, and what the potential threats are of adding your thing into the mix, right? Because, generally, when you're adding a project which is supposed to increase your security, that has huge, huge benefits. But of course you have to evaluate that it's another attack surface, and so we want to just streamline that and, you know, have an opportunity to discuss that and think it through.
A
So that's kind of, like, the big part of it. And so one of the things that I think might work well, and I'd really like your feedback on this, which we can do asynchronously because I know it's the top of the hour: if you were to, like, look at this and say, well, how much work would it be for you to produce that? Would that be a reasonable requirement for you to produce that? And then the actual self-assessment may or may not be necessary at this point.
B
We could try to get that kicked out relatively quickly. I'm just looking at the holidays impending, and people sort of maxed out on vacation and family time. That's sort of what puts us from your three-week target time frame to, well, the holidays, and it's going to take, you know, two months.
A
Anything before the holidays, in any case, right? So, yeah. But the due diligence flow is that maybe we have a lighter-weight due diligence, which is just, like: do we have any big concerns that would be blocking? Where are we on the spectrum from "no, we recommend the TOC not touch this thing" to "yeah, please"?
B
D
B
The threat models are really around controlling access to the git repo that has the policies that it's executing with, and then sort of trying to hijack one of the functions once it's deployed. But yeah, we can, I can do walkthroughs on that. And then, as far as the project description and the CII review, that's relatively straightforward.
B
A
B
B
A
A
Right, yeah. So I think it's five past, so I'll close the meeting, but folks, feel free to chime in on Slack if there are things that we didn't cover that we need to coordinate. And then next week will be our more usual working group meeting. I think there are some topics that people are thinking of raising, but then we'll have check-ins and discussion of what's top of mind. That's great, right! Thank you.