From YouTube: Kubernetes SIG Security Tooling 20211116
A
So it's 8:32. We have a total of 45 minutes' worth of meeting time and we have about 10 people, so I think that's a good enough crowd to get started. I'll hand it over to you, Rahul. I think I'll need to make you presenter, so I'll do that and then you can share your screen.
B
Fabulous, let me start. Okay.
B
We are working on a runtime security platform for Kubernetes, and KubeArmor is basically the flagship project that we are working on. It's completely open source. In fact, we have already proposed it to be part of the CNCF sandbox. Some background about me:
B
I've been a kernel developer before. I've worked in the networks and transport-optimization domain, across multiple transports, and I've been part of standardization groups, but in general I'm mostly a systems programmer, and that's what I do for the most part of my job.
B
So we have all the technical folks required here, all the technical ammo that is required for this meeting. Nice, perfect.
B
We have already applied for the CNCF sandbox, and I believe in a couple of months there will be the next closed meeting, where there will be a deliberation; I guess it's a closed-house deliberation. So we're looking forward to that opportunity, and looking forward to having this project accepted as part of the CNCF. Coming back: what is KubeArmor? What is the problem that we are trying to solve? What is our primary problem?
B
The generic statement is that cloud workloads today can engage in malicious behavior. The important point to note here is that traditionally, cloud security has been equated to network security. But today, as we all know, you can have a malicious attacker in your pods through some other mechanism: it could be a supply chain attack, or it could be a rogue internal actor putting the attack vector directly into the pod.
B
So let's not equate cloud security with network security.
B
The second biggest aspect for us working on this problem statement was the MITRE TTPs. MITRE ATT&CK, as we all know, is an attack framework which describes the techniques and tactics typically used by attackers, and we found that there are a lot of detection points and mitigation points offered by MITRE. The rationale for KubeArmor was: can we design a policy engine which can handle those kinds of rules, and handle them at the pod level, not necessarily at the host level?
B
So you have MITRE TTPs and STIG policies, and most of the MITRE tactics are at the host level. But is it possible to do something similar at the pod level as well? From the detection perspective, too, we found that there are several gaps in terms of how well one can do detection. For example, in most of the techniques we found that there is a correlation across time and space.
B
For example, if someone is doing a file read and someone is trying to make a network connection, both these events can be correlated. Can both these events be correlated directly in kernel space, so as to avoid the performance impact? That is another problem statement that we hit upon. So we started working on policy templates: what should the policy templates look like, and is there any policy engine out there which can fulfill those policy templates?
B
Now let me talk quickly about the high-level solution. KubeArmor essentially leverages LSMs, Linux Security Modules. You might have heard of AppArmor and SELinux; BPF LSM is a new LSM around the corner. It basically leverages eBPF at the LSM hooks, so you can have a more flexible policy engine with BPF LSM. And I know for sure that this team here is already deep into pod security contexts
B
and pod security policies, so I just wanted to make a point as to how KubeArmor is differentiated in that context. The pod security context expects the user to provide SELinux or AppArmor constructs. The primary problem with that is that, for users, putting in policy constructs in the form of SELinux policies is extremely difficult.
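For reference, the AppArmor path in Kubernetes at the time of this talk is a pod annotation that points at a profile already loaded on the node; a minimal sketch (the profile name `k8s-nginx` and container name are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-demo
  annotations:
    # The profile "k8s-nginx" must already be loaded on every node
    # this pod can be scheduled to; Kubernetes only references it.
    container.apparmor.security.beta.kubernetes.io/app: localhost/k8s-nginx
spec:
  containers:
  - name: app
    image: nginx
```

Note that Kubernetes itself does not load or generate the profile; that burden stays with the operator, which is exactly the difficulty being described.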
B
The other problem is that not all the cloud providers have standardized on SELinux. For example, GKE's Container-Optimized OS makes use of AppArmor, while EKS Bottlerocket and Amazon Linux 2 make use of SELinux as the LSM of choice. And these two LSMs are not stackable LSMs, which means that only one of AppArmor or SELinux can be enabled in a given deployment.
B
Now, there is absolutely no control that the security team might have over which LSM is enabled in the deployment. So typically, someone hand-coding SELinux rules or AppArmor rules for given pods is extremely difficult, probably out of scope. The pod security context expects that you provide SELinux labels in the context of the pod, and that you prepare a policy specification for those labels. If someone is well versed in SELinux, they might be able to do that.
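As a concrete illustration of what the pod security context expects from the user on an SELinux node, a minimal sketch (the label values are illustrative; a matching SELinux policy must already exist on the node):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: selinux-demo
spec:
  containers:
  - name: app
    image: nginx
    securityContext:
      seLinuxOptions:
        # Illustrative labels; writing the SELinux policy module
        # that gives these labels meaning is the hard part.
        user: system_u
        role: system_r
        type: container_t
        level: "s0:c123,c456"
```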
B
But again, there are several issues with SELinux itself, which are documented in KubeArmor, as to why some SELinux policies cannot be applied inside containers. So the primary problem is this: traditionally, AppArmor and SELinux policies have been applied at the host level. Is there something we can do in the same vein at the pod level, within the pod? Is it possible for us to say that within this pod, only this set of processes can be executed?
B
That only these processes can access these file system paths, these capabilities? There are multiple ways of controlling the capabilities of a container, but KubeArmor also has its own way of enforcing capabilities for the container. The good part about KubeArmor is that you can apply this policy as declarative Kubernetes YAML, and you don't have to learn SELinux policies or AppArmor policies.
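As a sketch of what such a declarative policy can look like, based on the KubeArmorPolicy CRD (the selector label and blocked paths here are placeholders, not from the talk):

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-shells
  namespace: default
spec:
  selector:
    matchLabels:
      app: my-workload    # placeholder pod label
  process:
    matchPaths:
    # Prevent interactive shells from spawning inside matching pods.
    - path: /bin/sh
    - path: /bin/bash
  action: Block
```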
B
KubeArmor internally will take these declarative YAMLs, check which LSM is deployed on the given cluster and the given node, and convert the Kubernetes policies into the corresponding LSM language. With BPF LSM there's a lot more to come, but that's the basic premise of KubeArmor. Now, like I mentioned before, applying enforcement controls is difficult; getting bare-bones SELinux and AppArmor right for the pods, or even for the hosts, is hard.
B
With KubeArmor, it's simply a matter of applying a set of Kubernetes YAML policies. Now, coming back to MITRE: the primary reason we hit upon MITRE is that, as a security company, MITRE is one of the primary attack frameworks that we had been concentrating on, and when we looked at the scope of MITRE, we found that most of the attack scenarios stem from a systems perspective, not from the network.
B
Of course, there are a lot of tactics mentioned here which also have a network component associated with them, but the majority had a systems aspect, and thus, like I mentioned, most of the controls are systems- or application-based controls. Some of them are highlighted here. In fact, one can go directly to attack.mitre.org and check: there is a way to filter systems versus network, and you'll see the difference in the TTPs that are present in the context of systems.
B
Now I also need to talk about some existing solutions. We have Falco and Tracee. These are brilliant engines: they have an eBPF-based flexible backend wherein you can specify the events that you are interested in. When there is an event, it is coupled with the Kubernetes metadata, and that event information can be sent to a policy engine, which might take an action, something like killing the process or quarantining the container. Falco also provides a lot of out-of-the-box rules and alerts on malicious attacks and CVE exploits. But the basic difference between KubeArmor and Falco or Tracee is that KubeArmor is an enforcement engine, unlike Falco and Tracee, which means that it can block an action inline, and I'll talk about that in a subsequent slide. But before going there: there were pretty big challenges when we started working on KubeArmor.
C
Well, just one second. I just wanted to pause there and see if anybody has any questions thus far.
A
One for me. I noticed you mentioned, and I think that's a very good point, that for SELinux and AppArmor there are profiles built in at the runtime level that are just consumed by the pod security context, because as an application developer nobody knows what AppArmor profile or what SELinux profile they need. There was also a line where you said it will be auto-generated based on the pod's behavior. Is that correct, from my understanding, or did I miss something?
B
One aspect is visibility, where KubeArmor shows what the pod is doing, and this information can additionally be used for several other purposes. One purpose is auto-discovering such policies, and that auto-discovery engine is also something that we have open sourced recently, so I'll talk about it.
C
To explain the problem statement further: we do know a couple of people in the security community who use AppArmor to harden their hosts, but the reality is they have no clue how to get started with Kubernetes environments, and building a custom AppArmor profile is next to impossible.
C
So that's when we said: hey, this is a known technology, it has been used for 20 years. Why can't it be used to secure your workload at runtime, so that we can know the signature of the application, protect it, and restrict it from doing anything else, or at least audit anything else it does? And the first step was the observability piece, in which we ask: what is this application
C
actually doing in the first place? That's what KubeArmor does first, and then we have a declarative, policy-based enforcement engine that can actually restrict, allow, or deny. A case in point: you could have a MySQL or Percona cluster running in Kubernetes, which is pretty common today, and you can say that for this particular MySQL cluster, only these processes, or this list of processes or pods, can access my MySQL data paths on the host operating system.
C
So that's the kind of use case, and if you now want to enforce any such policy in a Kubernetes environment without KubeArmor, it's close to impossible, let me put it that way. So we've taken the AppArmor and SELinux constructs, made them Kubernetes- and container-aware, provided visibility, and we have now open sourced the policy auto-generation as well. I mean, the first step is: can you even declaratively enforce the policies? I think we've gotten past that initial hurdle; now the policy auto-generation is also coming, and that should improve the journey: can I discover, can I write my own policies, and then can I automatically generate my application profile? Yeah.
B
Go ahead. Yeah, so some of the challenges that we faced: there are a whole host of LSMs out there on Linux. You have AppArmor, SELinux, Smack, TOMOYO, and so on.
B
So you have an inline security control, which means that if you are blocking a file access, or blocking a process-spawn event, then before the process is spawned there is an inline LSM hook which asks: do you want this process to be spawned? And you have to respond back saying yes or no. That's where SELinux or AppArmor comes into the picture: using those policy constructs, you can specify the policy which answers yes or no. That is one of the primary advantages.
B
LSMs do not suffer from the time-of-check-to-time-of-use (TOCTOU) problem. But one of the primary disadvantages with LSMs to date has been that they are not integrated with Kubernetes, or Docker for that matter. Kubernetes or Docker is an upper-layer orchestration platform, and LSMs do not care about Docker, containers, or Kubernetes. And the learning curve for any of these policy languages, for example SELinux or AppArmor, as we have already talked about, is very steep, even within a given LSM.
B
For example, take AppArmor: AppArmor itself has versions one, two, and three, and the behavior of these versions is very different from each other. The kinds of policy constructs that are available as part of these versions are very different.
B
Again, dynamic enforcement is not possible, and there's very limited adoption to date. The primary problem with seccomp is: how do you isolate the set of system calls that you think the corresponding pod will, or will not, use? Coming up with that kind of information by itself is challenging, and that's where, again, the visibility mode of KubeArmor comes into the picture.
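For comparison, this is how a pre-computed seccomp profile is attached to a pod with the standard Kubernetes field; the hard part the speaker describes is producing the JSON profile itself, not referencing it (the profile path is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo
spec:
  securityContext:
    seccompProfile:
      # The JSON profile must already exist under the kubelet's
      # seccomp directory (default /var/lib/kubelet/seccomp) on the node.
      type: Localhost
      localhostProfile: profiles/audit.json
  containers:
  - name: app
    image: nginx
```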
B
It gives you a clear statement as to the set of system calls which will be used by this set of pods or containers, and you can apply that. Today we don't support auto-generation of seccomp profiles, but KubeArmor has that on the roadmap as well. And then, I don't really need to talk about user-space controls, because I don't think those are relevant, but just for the sake of completeness I should mention LD_PRELOAD, which allows you to dynamically change behavior.
B
You can override libc calls, but LD_PRELOAD is a user-space control and it's very difficult to implement: you need to insert your dynamic library as part of the pod, so it's extremely difficult from the deployment point of view more than anything else. And one problem with LD_PRELOAD is that it can override library calls, but if an attacker is smart enough, they can completely bypass it.
B
Now I need to talk about how we apply controls, about how you do enforcement. On the left-hand side you see a Falco or Tracee engine; it could be any other engine as well, any engine which can identify events and then take an action. For example, there is a file open, a sensitive file open.
B
You can have an event-auditing engine such as Falco or Tracee identify that, send it to the policy engine, and then the policy engine might asynchronously take an action to kill the process or quarantine the container or pod. All these are possible action items, but the problem there is: by the time this happens, by the time the process is killed or the container is quarantined,
B
the damage might already have been done. For example, if the attacker is trying to unlink some sensitive files or key files, the unlink will already have been successful. In the case of LSMs, like I mentioned, it's an inline control, which means that before the file is opened or the process is spawned, the LSM hook checks with the policy engine whether it is allowed or not. Which also means the other advantage that I talked about before, not suffering from the time-of-check-versus-time-of-use problem, applies.
B
That race won't be happening in this context. So, in summary, there are very few primitives in the Linux kernel which allow you to do this kind of enforcement, and LSM is the primary one.
C
Sorry, just to look at one example use case: we took an old piece of malware called Kinsing, which essentially does crypto-jacking of a Kubernetes cluster, and we have actually created a policy that can block that particular malware. It's open source as part of our ready-made policy templates; it's available on github.com. So that's to demonstrate a typical PoC, a use case of how KubeArmor works.
B
Go ahead, Rahul. So, this slide goes into the internal architecture, I mean a high-level architecture of KubeArmor. As you can see, there is an LSM context and there is an eBPF context as well. The primary reason eBPF has been made use of is that once you get an audit event from any of these LSM engines, you need to correlate that kernel event with the Kubernetes metadata: you need to provide the namespace,
B
the container image, container ID, pod name, and things like that. For that we essentially leverage eBPF, and then these alerts and this telemetry can be further consumed by other components: it could be sent to Elastic, it could be sent to Splunk, or any other place.
B
Again, talking about the design elements: we wanted it to be a Kubernetes-native engine, which means we wanted to make sure that KubeArmor works out of the box for any Kubernetes orchestration engine. We have a Kubernetes operator for system-wide security policies, and a Kubernetes-native YAML policy specification.
B
Today we have come up with a policy construct of our own, which abstracts away the AppArmor and SELinux policy constructs in a YAML format, and I'll talk about that as well.
B
The primary problem is handling this LSM deployment complexity. It's extremely difficult, not only across cloud providers but within a cloud provider as well. For example, on EKS you might have two worker nodes running Ubuntu, two worker nodes running Bottlerocket, and two worker nodes running Amazon Linux 2, and they will have different LSM settings altogether. How do you handle these
B
heterogeneous Linux distributions and deal with the LSM deployment complexity? As in the example below, Container-Optimized OS ships by default with AppArmor only, and Amazon Linux ships with SELinux. In the future, what we see is that BPF LSM will be enabled by default on most of the images, but BPF LSM is supported only from certain kernel versions onward, and that itself is a problem for today's adoptability. Going forward, though, this will change.
B
Now, here is the piece that you referenced before, Pushkar: auto-discovering a set of policies. This problem statement is not specific to the context of KubeArmor.
B
What we have done with the policy generation engine is this: given visibility from any source here — as you can see, there's Cilium, there's KubeArmor, there's Calico, and there's Falco — all these engines have some sort of eventing system. The question that we asked ourselves is: is it possible for us to take these events and then, based on this visibility information, generate a set of policies?
B
Now, the policies could be network policies or system policies. They could be Cilium network policies, Calico network policies, Kubernetes network policies, KubeArmor policies, pod security policies, or, in the near future, seccomp profiles, or any other policy.
B
The Kubernetes engine here tries to get the pod information, the service information, and the node information, and correlates that with the logs that are available. So for every policy engine there is a corresponding adapter which understands the way in which, say, Cilium exports information.
B
Today, in the policy generation engine, everything which is grayed out is not supported; for example, Calico and Falco are not supported. But we support Cilium and KubeArmor: Cilium network policy generation is supported, Kubernetes network policy generation is supported, and KubeArmor policy generation is supported. We have already open sourced it, but we need to make sure that we have documentation in place and things like that for the community to consume, so we are working towards those aspects.
C
Just to add the context for Cilium here: as a company we are focused on runtime security, so we are also heavily contributing back to Cilium. For us, runtime security includes both KubeArmor and Cilium, and hence the auto-discovery, as well as the policy templates that I shared the link to, actually contain both Cilium network policies and KubeArmor application policies.
A
One question on this — and others too, I'm not the only one who should ask questions, so if you have any, feel free to ask either on chat or directly. Say I have a pod running a specific image, like version 1.2, and then I upgrade my application running in the pod to version 1.3.
B
That's a very good question, actually. Let me take this one, Pushkar. One of the aims for this runtime security policy engine is that it can shift left, which means that when there is an application change, you should be able to use KubeArmor in the staging environment itself, or even in your CI/CD environment, and check what the difference is.
C
Maybe I can try to clarify. Are you saying that the host version might change, and therefore whether we use AppArmor or SELinux, or a different version of AppArmor, changes? Or are you talking about the application itself having changed an application workflow, which means we need to enforce a new set of policies? Is that the question?
A
And I think that makes sense: having a separate environment that mimics what is in production, and then generating the policies there. Is there a fail-safe, though, for when my generated policy is wrong and something is blocked in production, and I'm not sure whether it's valid or not?
B
Yeah, so KubeArmor already supports a dry-run mode, or audit mode.

A
Oh, nice.
B
Now, the challenges that we have faced — and this is the help that we're looking for from this community, and many other communities out there as well. KubeArmor heavily depends upon the underlying LSM constructs, and there are some challenges that we are facing with SELinux; AppArmor is much easier to handle, is what our experience has been. Is there anyone else in the community who is making use of SELinux for policy enforcement?
B
Well, BPF LSM by itself is a silver lining, because what we are seeing is that with BPF LSM there won't be any specific policy constructs imposed on the users. With BPF LSM, one can have one's own BPF bytecode executed in the context of a given LSM hook, which means you have better programmatic control.
B
You can think of eBPF as something like using WASM with Envoy; that is an analogy that works very well in the community. So eBPF allows you to insert dynamic code at particular hooks in the Linux kernel. There is a kernel verifier, which checks and verifies whether the eBPF bytecode is safe, and only then allows that bytecode to be inserted in the kernel.
B
The second aspect is the YAML policy constructs. In the case of network policies, this community built up Kubernetes network policies, and that served as a baseline for the base set of policy specifications for most of the CNIs out there — Cilium, Calico — and then individual CNIs worked on top of it to provide some advanced constructs. But for systems security, we don't see that kind of policy construct anywhere.
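The network-policy baseline being referred to looks like the standard Kubernetes NetworkPolicy below; the suggestion is that a comparable community-standard spec could exist for process and file rules (the labels and port here are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-ingress
spec:
  podSelector:
    matchLabels:
      app: mysql          # placeholder label for the database pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: backend   # only backend pods may connect
    ports:
    - protocol: TCP
      port: 3306
```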
B
Of course, there are the pod security constructs, the pod security context, available today, but they're very minimal. If you want to address pod security at the process level and at the file system level, we'll need a lot more policy constructs. I believe this is the right community to provide us with the right feedback, and maybe to standardize something like systems security policy constructs.
B
Standardizing something like this would also encourage tooling around these constructs. It's the same thing that happened with the CNIs and Kubernetes network policies; the same thing could happen in this context as well. The third thing is building policy templates for popular Kubernetes workloads; that's a big piece of work in itself, and there are several security communities which specialize in writing policy templates.
B
We would love to get engaged; we would love to talk to them and see if we can work on such problem statements together.
B
Again, KubeArmor as of now is a year old. You'll see a lot of these discussions happening, but I believe this is the right forum to talk about these challenges.
C
Okay, cool. Actually, should we just do a quick demo, Pushkar? That way we can go through the use cases as well. We have only eight minutes left.
B
Right, so give me just one minute, just to show what the policies look like. Okay, so this is how the KubeArmor demo policies work. There is an audit action as well as a block action. In this context, you can see that there is a configuration file in WordPress which can be protected: you can say that no one will be allowed to open this configuration file, not even with cat.
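A hedged reconstruction of the kind of policy shown in the demo (the WordPress pod label and config-file path are assumptions based on the description, not taken from the recording):

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-wp-config
spec:
  selector:
    matchLabels:
      app: wordpress       # assumed pod label
  file:
    matchPaths:
    # Assumed location of the protected configuration file.
    - path: /var/www/html/wp-config.php
  action: Block            # "Audit" would log the access instead
```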
A
Yeah, that was useful, to see the policy in real life, how it looks. It's definitely different from how SELinux profiles or AppArmor profiles look, and this seems at least familiar. Whether it's better or not,
A
the users will probably tell, but it's at least familiar to Kubernetes developers, so they'll have a better understanding of what is going on here. In terms of challenges, we may not finish going through all of them in depth, but I wanted to share some initial thoughts. For the SELinux people, or people who have worked with SELinux a lot, I would suggest reaching out to people in channels like the CRI-O channel and the OpenShift users channel.
A
In both channels you'll see some discussions on SELinux that have happened in the past, and there are probably people who have worked on SELinux hanging around there. Those will be people with whom you may find common problems to discuss and common solutions to work on. For BPF LSM — yeah, I agree; even though many new kernels support it,
A
it's just a matter of waiting it out, and really figuring out whether we need to support anything else, or just wait until the vast majority of users are on kernels that support it. The second point: there were some discussions way back, maybe one and a half years ago or so, about creating some kind of template or policy engine for something like IDS — intrusion detection systems — something that can prevent or detect things at the runtime security level.
A
I think there was some traction to it, but the people who were working on it changed roles and did different things, so it kind of fell off the radar. But if you want to bring it up as some kind of standardized policy for security which applies at the runtime level through the pod specs, that might be something worth raising in the community, and seeing if you want to lead it. How much that would really help you
A
as a company, versus the community, is the main question you have to figure out. And the last one: this is where I hope people like you can contribute, on the policy template side. If you start creating those kinds of policy templates and let people consume them in open source, which I think you're already doing, that will really help, because I know for sure people are looking for something that they can just copy, paste, and apply to their cluster, and it works.
C
Yeah, very interesting conversation. We actually already reached out to OISF to see if we can use Suricata rules to create profiles for known threats and create policies from them, so that's a conversation that we're already having. We're also trying to see if we can leverage other, similar vulnerability scanners, again to take the vulnerabilities and ask: can we do soft patching where possible with KubeArmor? On policy templates, we have a little over 150 templates already open sourced.
C
As I mentioned, that includes both Cilium and KubeArmor, because we are also active contributors back to Cilium, and it's available under the KubeArmor repo. But very, very interesting observations. I think you're spot on with respect to users wanting to just check out a bunch of policies, apply them, and forget.
D
What I'm thinking is also challenging is when your application gets a sidecar, like, for example, with Istio. I think it would be nice to have a policy template for that kind of scenario.
B
Yeah, absolutely, the question completely makes sense. It's a good observation that most application developers are already making use of service meshes. With KubeArmor, when you apply visibility on a pod, it includes all the containers within that pod, including the sidecars. So eventually you'll get a policy which includes that sidecar container as well.
B
So it should definitely work out well without any changes. But on the other point you mentioned, that it would also be nice to get an Istio policy template developed: yes, absolutely, that would be really nice. We don't have it as yet, but this is an open area of contribution.
C
Yeah, just to add to what Rahul is saying: I think that's a very interesting point. What we realize is that there are some known workloads, and we need to generate policy templates for them. On the auto-discovery side, we're trying to provision for known workloads; for example, we do have Kafka-specific auto policy generation available today. And on the policy templates side,
C
we have actually been trying to use the MySQL and Postgres STIGs that have been released by the US DoD, and we've converted several of the STIG rules into KubeArmor policy templates; they're available as part of the GitHub repo that we just shared with you. So I think we just need more coverage, and then people should be able to come and say: okay, I have an Envoy, I have a MySQL, I have a Java microservice — what are the most common policies to harden this?
A
Yeah, okay, that sounds great for me. I hope everyone who was listening in also enjoyed the chat. Maybe as a last thing to discuss: what is the best way for people who watched this recording, or who were at the meeting today, to reach out to you with contributions, questions, or any suggestions?
C
We do have our own Slack, the KubeArmor Slack, and we will also be lurking in the community SIG Security groups, so either way works.
A
Okay, thanks a lot again, Rahul and Asif, for chiming in and taking notes. I definitely enjoyed learning more about KubeArmor. Good luck with the CNCF project incubation and sandbox, and everything.