A: Hey everyone, and welcome to this webinar by the CNCF about strengthening your Kubernetes and container security without losing visibility. We're going to use the CNCF project Falco, together with gVisor, for this purpose, and we're going to present how these two projects can interact with each other. First of all, I'd like to introduce myself. I'm Luca, and I'm an open source engineer at Sysdig. I'm one of the maintainers of the Falco project, but my passion, when it comes to computing, is security in all its forms: vulnerabilities, making systems more secure, identifying weak spots and correcting security issues, or detecting potential threats. At every level, I'm all for it. You can find my code and my bugs on GitHub, or you can find me, along with all the other maintainers and the Falco community, on the Kubernetes Slack in the #falco channel. I'll now let Nicholas introduce himself.
B: Hey everybody, my name is Nick. I'm a software engineer at Google, and I've been working on gVisor for the past couple of years. I'm passionate about containers, security, and operating systems. You can find me on GitHub at the link there, and I also hang out in the gVisor Gitter channel, along with most of the other gVisor maintainers. So yeah, happy to talk to you.
A: Thanks a lot. I'm very, very happy to present with you today, and I'm also very happy about the work that we've done together with both the Falco team and the gVisor team. On the Falco side, I've had the pleasure of working with several people, and I want to give a shout out to members of the team, specifically Lorenzo Susini, who works at Sysdig with me.

I've also had the pleasure of working with the team that you see in the background for Falco, some of the Falco maintainers, who were very, very helpful in getting this feature upstream. On the gVisor side, I really want to thank Fabricio, the person on the left in the gVisor team picture, who has been, for the most part, our main point of contact for this project, and who was fantastic in designing and implementing solutions to really get this project to work smoothly.

Thanks a lot, Fabricio and Nick. Nick, would you like to talk a bit more about the excellent gVisor team?
B: Yeah, you know, we've got a large team that's been working really hard on gVisor for a while now. About half of us are pictured here; we're fairly remote, fairly distributed, so it's hard to get a picture of everyone together. But yeah, Fabricio did most of the work for the Falco integration that we're presenting today, and Shambhavi Srivastava and Steve Silva helped out a lot too.
A: Cool. So let's take a look at what we'll be talking about today. We're going to tell you why sandboxing containers and applying security monitoring at the same time is actually a challenge, and where the challenge lies. Then we're going to introduce both the Falco project and gVisor; some of the audience will be a bit more familiar with Falco, because it's a CNCF project, but many may not have tried gVisor yet, and you'll learn how cool it is. And then we're going to explain how the gVisor and Falco integration that we built actually works.
A: So, first of all, why are container sandboxing and security monitoring a challenge? Nick will tell us much more about gVisor and how it works, but for the uninitiated, think of gVisor as a mechanism to sandbox and isolate your container applications and workloads from the host operating system. As a security person myself, I love this feature, and I think it provides one of the most important security properties that you can have: attack surface reduction by isolation.
A: gVisor is one mechanism, one technology, that you can use to prevent your workload from performing potentially dangerous operations and to have better control over what you run. Especially if you think about highly sensitive environments, it's a very important capability to have, and every security person will confirm that.
A: So this is great, and Falco does something for security too, but from a bit of a different angle: Falco is able to detect anomalies, to detect whatever you want. Falco is able to gather events from your applications, from your hosts, and from the kernel, and to report what you are interested in back to you.
A: So it's more about visibility: it acts like a security camera for your servers, if you think about it. But, as we'll see in a bit more detail, if you try to use such a technology in an environment that is deeply isolated, with heightened security, such as one that runs with gVisor, you're actually going to run into problems, because the security controls that you put in place can make it harder for you to see what's going on inside your containers.
So we worked to reach a no-compromise solution here: to have both a more secure environment and full visibility. And we're very happy to say that the first tool able to monitor gVisor workloads is Falco, and we'll tell you how it works. First of all, if you're not familiar with it, I'd be happy to spend a couple of minutes just to give you an overview of Falco.
A: Falco is the first cloud native runtime security project to have been adopted by the CNCF. Falco was originally created at Sysdig, and it's now incubating at the Cloud Native Computing Foundation; it was adopted a few years ago, and it has grown tremendously since its adoption and throughout its incubation in the CNCF. The more I work on Falco, the more I realize that everyone uses some kind of big cloud-deployed system, from companies big and small, and at least one system that you're currently using is probably monitored by Falco, since its adoption is so big and so important in the security landscape. Back in the day it used to be just one project from Sysdig, and today it enjoys a lot of contributions from many different people and maintainers, from many different companies, for many different reasons. It's very cool to see this working, to see people from different backgrounds working together on the Falco project.
A: If you've never used Falco, its most basic usage is like this: Falco will take all the system calls that are happening on your system and will alert you based on a set of rules that you can customize and specify in a very simple language. The basic idea is that you'll get warnings, notices, or errors about whatever you want.
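As an illustrative sketch of that rule language (the rule below is hand-written for this summary, not one of Falco's shipped rules; the `evt.*` and `fd.*` names are Falco's standard syscall event fields), a rule that flags reads of /etc/shadow could look like this:

```shell
# Write a minimal, illustrative Falco rule to a file. A real deployment
# would load it with something like: falco -r sensitive_read.yaml
cat > sensitive_read.yaml <<'EOF'
- rule: Read sensitive file (demo)
  desc: Detect any process opening /etc/shadow
  condition: evt.type = openat and fd.name = /etc/shadow
  output: Sensitive file opened (user=%user.name command=%proc.cmdline file=%fd.name)
  priority: WARNING
EOF

# The rule is plain YAML; a quick look at the condition we just wrote.
grep 'condition:' sensitive_read.yaml
```

Conditions are boolean expressions over event fields, and the `%field` placeholders in `output` are expanded with the triggering event's data, which is what produces the enriched alerts shown in the demo later.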
A: As you can see, every event is also automatically enriched with all the available container information: any metadata that might come from your container engine, plus metadata from your Kubernetes environment, so your pods, your clusters; all of this information is natively available. And Falco is not just a command-line application: it has an ecosystem of applications that can connect to it.
A: For example, we've got falcosidekick, a cool piece of software that you can use to forward your Falco events to any other event-processing system you might have, be it a SIEM, or an email or Slack integration, and it also has a very cool UI that I hope I'll be able to show you a bit later. If we scratch a bit under the surface of Falco, we'll see that Falco itself is composed of two main parts.
A: One is Falco itself, along with its rule engine; the other is the set of underlying libraries that allow Falco to work. The two libraries are called libsinsp and libscap, and collectively they're known as the Falco libraries. If you've never heard of libscap but you have used packet captures in the past, you can think of it as exactly the same thing as a packet capture, but for system calls.
A: So it has a data source, which is normally a kernel module or an eBPF probe installed in the kernel, that forwards events to a user-space component inside libscap, which simply collects the events. Then, on top of that, a lot of analysis and inspection happens in the libsinsp module, and everything eventually ends up in the powerful and flexible Falco rule engine, which lets you specify the kinds of rules that you can see here.
B: gVisor was developed at Google and is used at scale: it powers a lot of different cloud products, such as App Engine, Cloud Run, and Cloud Functions. And GKE, Google's hosted Kubernetes engine, has a product called GKE Sandbox that makes it really easy to run the pods you have on GKE inside gVisor sandboxes.
B: It's also used by a variety of different projects inside Google that I can't say too much about, but gVisor is also open source. gvisor.dev is our homepage, and all of our code and issues and everything are on GitHub at github.com/google/gvisor, so that's a great way to get in touch with us as well. Before getting into how gVisor works, let me say a little bit about why it exists. With standard Linux containers, those containers are interacting directly with the host kernel.
B: There are a couple of isolation mechanisms in place, like namespaces and cgroups, but fundamentally those containers, those applications, are making system calls to the host Linux kernel. Linux is a fabulous piece of software, it really is, but it's also a very large attack surface: there are a lot of system calls, there's a lot of stuff going on in the proc file system, and there are a lot of vulnerabilities. Since 1999 there have been over 2,000 CVEs in the Linux kernel, and 250 of those have been privilege escalations.
B: And it only takes one bad Linux bug to get a container escape: if there's one system call with a vulnerability, containers can exploit it to gain access to the host, and from there malicious workloads can attack your infrastructure or attack other containerized workloads. This is something that we want to prevent, and this is why gVisor exists. gVisor's architecture is a little bit more complicated; there are a few different components.
B: This is a fully Linux-compatible kernel: we've taken all of the system calls that Linux has and implemented them ourselves in gVisor, and I really want to stress that these are full system call implementations, not just a pass-through or a filter. When the application is running and making system calls, it thinks it's interacting with the host kernel, but it's actually interacting with the gVisor kernel. We've implemented all of the Linux system calls, and we've also implemented all of the common Linux file systems, like proc, dev, sysfs, and tmpfs.
B: It's really great. The last thing I'll say about the kernel is that it runs in user space, alongside the container. So from the host's point of view, the host Linux kernel just sees gVisor and the container running together; it looks like any other process.
B: This provides another layer of security around gVisor: it's another place where we can dictate what the sandbox can do. The file system proxy also has its own set of seccomp filters; it is, of course, allowed to open files, unlike the gVisor kernel, but all of this is about providing multiple layers of security around the containerized workloads. That's what this picture is showing: these multiple layers lead to defense in depth.
B: We've got the seccomp filters that I just mentioned, so you can't open a file or create a socket. The entire kernel and workload run as uid and gid nobody, with no capabilities; there are namespaces around those, and these things run in an empty user namespace; and, lastly, around all of that is the pod cgroup. All of these different layers make it harder and harder for a malicious workload to break out of its container and reach the host.
B: All of these mechanisms I just talked about are about preventing container escapes, which gVisor is pretty good at, but prevention is not detection. While gVisor can prevent these container escapes, we never had a mechanism to actually detect when a malicious workload was trying to escape, and that's why we're really excited about the Falco integration, because that's what we can provide now.
A: Thanks, Nick. I think now we have all the information to understand why our project came to life, and what the problem was when we first started to monitor container workloads that were sandboxed with gVisor, as Nick explained before.
A: The interactions with the kernel, in the case of a gVisor sandbox, are very different from the regular interaction that you get with a native process. So what would happen if we tried to just plug Falco in, out of the box, as we always do, into such an environment? Well, we'd get a very confused Falco, because, if you remember, gVisor is not only a different process: the workload's system calls are handled inside the gVisor kernel and never reach the host kernel, where Falco would normally capture them.
A: What we want to do is detect suspicious behavior executed by a potentially compromised application, or a potentially malicious application, or even something that we're trying to troubleshoot, because Falco can be used for many different types of use cases.
A: So, whatever misbehavior we have in mind, we want to get it straight from the application, and we cannot even rely on the capabilities that Falco has to, for example, tell us about file system usage, or about system calls that interact with the file system, because even the file system, to get the right security properties, is proxied in gVisor. So how do we actually solve this problem? Well, first of all, if you recall, Falco itself is not a monolith.
A: It's composed of a set of different pieces that interact together, and the piece that we're actually interested in is libscap, the foundation on top of which everything else is built. libscap itself isn't that complex; I mean, the mechanisms it uses to collect events can be very complex, but libscap itself acts as a collector of events, exactly like, as I mentioned before, a packet capture.
A: You've got a system call capture. So what we needed to do was refactor libscap and work on it in such a way that it can accept a new source of system call events: events coming not from the kernel, but from user space, from the gVisor kernel itself, which is able to send them over in some way.
A: We decided to use a Unix domain socket for this, speaking protobuf inside it, to extract events from the gVisor kernel as they happen, straight from the application. On the gVisor side the events are enriched with metadata about the container and all the information that is not available from the host Linux kernel, which Falco usually queries to understand the context, and they are then forwarded straight to Falco. Falco, on the other side, acts as a server: it receives the connection from the gVisor kernel, reads the format that gVisor provides, and converts it into the libscap internal representation.
A: That representation can then be used by anything on top of it. And this is pretty cool, I think, for several reasons, including the fact that, since this happens at the lowest level, any other project that uses the Falco libraries will benefit as well; there are several of them, since the Falco libraries are open source and are used as a base not just for Falco but also for research projects and other open source software. They too will be able to seamlessly connect to gVisor and read the events happening in the sandboxes, as if they were happening directly on the kernel.
A: So you'll be able to spin up a Falco instance on your node, and we'll show you how in a second, and it will be able to connect to any number of sandboxes managed by gVisor through the same server, as the socket is the same: Falco acts as the server and handles everything. And that doesn't break the user experience in any way, because that's exactly what Falco does when it monitors several pods or several containers within the same system, on the same cloud node.
A: So, we have a cloud instance here, and we have installed both runsc, the tool that actually makes the magic that Nick described for gVisor happen, and Falco.
A: First we can check that the versions of our tools are correct. We need a Falco version that is at least 0.32.1, and we've got 0.32.2, which is the latest at the time of speaking. Then we've got runsc as well: the runsc version is expressed in this case as a timestamp, and as long as it's newer than the 4th of July 2022, it's going to work with this integration. So I'm now going to start my Falco instance, and in my terminal I'll add a couple of parameters that will allow us to enable the integration.
A: First of all, I need to specify a gVisor configuration. For the full instructions on how to set this up you can refer to the tutorials on the gVisor blog and on the Falco blog, but just to show you: I have simply installed a little configuration file, a JSON file, that instructs gVisor to collect certain types of system calls and to forward them straight to Falco. I've also configured gVisor, via the runsc install command, to receive this configuration when it starts new pods, so we can see this configuration installed in my Docker daemon configuration.
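As a sketch of what that looks like (the runtime name `runsc-falco` matches the demo, but the file paths here are illustrative assumptions; on a real host `runsc install` writes the equivalent entry for you), the Docker daemon configuration gains a runtime entry that passes the shared configuration to every new sandbox:

```shell
# Illustrative /etc/docker/daemon.json fragment (written to a local file here):
# a "runsc-falco" runtime whose --pod-init-config points at the shared
# gVisor/Falco configuration file.
cat > daemon.json <<'EOF'
{
  "runtimes": {
    "runsc-falco": {
      "path": "/usr/local/bin/runsc",
      "runtimeArgs": ["--pod-init-config=/etc/falco/pod-init.json"]
    }
  }
}
EOF

# Sanity-check that the fragment is valid JSON before it would be merged
# into the real daemon configuration.
python3 -m json.tool daemon.json > /dev/null && echo "valid JSON"
```

With an entry like this in place, any container started with that runtime is sandboxed by runsc and wired up for event forwarding at the same time.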
A: So if I specify as the runtime the runtime that I've called runsc-falco, this integration will be online. We're going to start Falco; Falco starts and tells us that event collection from gVisor is enabled and that it has received our configuration file.
The configuration file is shared between Falco and gVisor, so that the two pieces of software can make sure they are aligned in terms of the syscalls they collect, the socket path, and all the other tunable configuration.
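For reference, a hedged sketch of that shared file: Falco 0.32.1+ can emit one with `falco --gvisor-generate-config`, and on a real host you would regenerate it rather than hand-write it. The abbreviated example below only illustrates the shape, a gVisor trace session naming the syscall points to emit and the socket ("sink") Falco listens on; the exact point names and fields may differ between versions:

```shell
# Abbreviated, illustrative shape of the shared gVisor/Falco configuration.
# On a real host: falco --gvisor-generate-config > pod-init.json
#                 falco --gvisor-config pod-init.json
cat > pod-init.json <<'EOF'
{
  "trace_session": {
    "name": "Default",
    "points": [
      { "name": "syscall/openat/enter", "context_fields": ["container_id", "credentials"] },
      { "name": "syscall/execve/enter", "context_fields": ["container_id", "credentials"] }
    ],
    "sinks": [
      { "name": "remote", "config": { "endpoint": "/run/falco/gvisor.sock" } }
    ]
  }
}
EOF

# Check that the sketch parses as JSON.
python3 -m json.tool pod-init.json > /dev/null && echo "valid JSON"
```

Because both sides read the same file, gVisor knows which events to serialize and Falco knows which socket to serve and what to expect on it.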
A: So I'm now going to run a pod directly via my regular Docker: docker run, setting the runtime to runsc-falco, the one that we've configured here, running my image, which is just any image that we have, and starting it. This workload is now running under gVisor, and it's also monitored by Falco. If you don't believe that this is running under gVisor, let's take a look.
A: So, let's try to take a look and see what a suspicious event might be. I'm going to first open the /etc/shadow file, which is not something that a regular user does, and, as you can see, this has generated an event in Falco. I've used the JSON output format to look at the events, but as I showed you, we can see these events in whatever format we see fit.
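Falco's JSON output is one JSON object per alert. The event below is hand-made to illustrate the shape (the field names `output`, `priority`, `rule`, and `output_fields` follow Falco's JSON format; the values themselves are invented), along with a small script pulling out the interesting fields:

```shell
# A hand-made, Falco-style JSON alert (values invented for illustration).
cat > event.json <<'EOF'
{"output": "Warning Sensitive file opened for reading (user=root command=cat /etc/shadow file=/etc/shadow)",
 "priority": "Warning",
 "rule": "Read sensitive file untrusted",
 "time": "2022-08-01T10:00:00.000000000Z",
 "output_fields": {"fd.name": "/etc/shadow", "proc.cmdline": "cat /etc/shadow", "container.id": "f9c8a7b6"}}
EOF

# Extract the rule name and the file involved, as a pipeline might.
python3 - <<'EOF'
import json

with open("event.json") as f:
    evt = json.load(f)
print(evt["rule"], "->", evt["output_fields"]["fd.name"])
EOF
```

The structured `output_fields` are what make it easy to route these alerts into falcosidekick or any other event-processing system.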
A: So we can see that the program cat has opened /etc/shadow, and we've got the pid, the container id, and everything; the container id just works seamlessly, as if the container were running natively. Another thing that could be suspicious is the fact that some versions of netcat have an -e flag, which not all versions support, but which all the old-school hackers out there know.
A: We all know that you used to be able to just run that, adding a bunch of parameters, to get remote code execution. My version doesn't support the flag, but Falco will still detect our attempt: even though the command doesn't do anything, it will still detect the suspicious attack. Also, I'd love to show you how this looks in a different user interface.
A: So I've got the falcosidekick UI over here, and, as we can see, all our events have been collected. We've got our warning events; we've got our netcat remote code execution attempt in our container; and we've got the read of the /etc/shadow file, along with the file name and all the context information that we might want.
A: If you've been following the Falco project for a little while, you might know that back at the beginning, probably around the time the CNCF adopted it, which I believe was quite a few years ago, at least four, there was only one way to capture system calls, and that was it: you had your libscap, your kernel module, and a monolith on top of it that would just run the rule engine.
A: Later, Falco got support for Kubernetes audit events, which are not system calls, they're just something different, and after a while it also got support for eBPF, a very cool technology that allows you to write code that runs directly in the Linux kernel, without having to write a kernel module, in a much safer way. eBPF comes from the networking world and now works for system calls, and it's a very exciting technology that Falco has fully supported for years.
A: Then, later, the Falco community decided that just having Kubernetes audit events wasn't enough, and now there is a full-fledged plugin framework where you can bring whatever data you want into a plugin that will be consumed by Falco, and that lets you interact with the same language and the same rule engine that you know and love from system calls, but working for any event.
A: You could think about something for GCP, for example. This is very new, and there are a lot of exciting things happening in the community around it. And now we've been able to refactor it even more, to make it even more modular, to add a new syscall engine and a new way of collecting events. So thanks to collaborations, thanks to the community, and thanks to the work done within the CNCF, Falco, as I can see since I've been following it for a while, is really growing and gaining more and more capabilities with every release.
A: We'd now like to tell you just a couple of things about this integration project. As you can see, it works with the new versions of Falco and gVisor, and it's, I think, the latest brand-new feature added in the most recent Falco releases.
A: We're all very excited, as it's new and cool, but the way it works is a little bit different from the usual kernel event collection: since all the captures happen in user space, every system call actually needs to be supported on both the gVisor side and the Falco side, because gVisor needs to be able to transmit the system call data, and Falco needs to be able to receive it and convert it into its internal representation.
A: Falco itself has support for a lot of events: more than 290 different system calls and events that you can use in your security rules. The Falco-gVisor integration does not support all of these events right now, but we have chosen a subset of them that are the most common: the ones that, as you could see, are most useful in the default rules, and in the rules that most security engineers are interested in. But of course there's room for many more.
A: I don't think it's possible to have full event coverage, just because gVisor works in a slightly different way than a regular Linux kernel, and the events that gVisor processes are sometimes a tiny bit different. But I believe that the majority of events can be imported and, with support from both sides, can work directly with gVisor.
A: The good news is that it's easier to implement an event in the open source gVisor and the open source Falco, in user space, than it is to, for example, write kernel code or eBPF code to collect a new event, because you don't really have to mess with the kernel: you don't have to think about compatibility with all the different versions of the kernel, which is one massive pain point.
A: When you deal directly with the Linux kernel as a developer, you also have to fight with the eBPF verifier, which is very cool, but also very picky about your code. So for both the gVisor team and the Falco team, and actually any contributor who is interested, it should be easy, with our help, to get support for more events in case we see that we need them.
A: So our last goal here is to invite you to join the communities, if you're interested, and to contribute: both gVisor and Falco have thriving communities, and Falco's is supported by the CNCF.
A: We have actually seen usage from early adopters of this technology. For example, Mercari, a global marketplace for buying and selling based in Japan, is a shared user of gVisor and Falco: they use both technologies and don't really want to compromise and pick one over the other when it comes to securing their environments, their containers, and their applications.
A: Mercari has had the chance to try this out, and especially in the environment where they operate, which is highly regulated and where they want the best in terms of security, they were very happy to see how this can drastically improve container security for their use cases.
A: To recap what we have discussed today: when it comes to security, we want to detect anomalous behaviors within workloads, and we want to integrate our security system with the existing open source rules and open source technologies.
A: As much as we can, we want to monitor our systems, and we also want workloads that are natively sandboxed and actually prevented from performing operations that might impact our systems at a higher level and make it easy for a simple security incident to escalate. So we can use Falco to monitor container nodes and still enjoy the high security provided by gVisor.
A: I'd like to add just a couple of links. On the gVisor website, gvisor.dev, you've got a cool tutorial that guides you step by step to an environment just like the one I showed you, with both gVisor and Falco support, and you can find out more on the Falco blog as well.
A: The Falco blog goes into a bit more detail about how you might want to configure Falco in such an environment, and if you're not familiar with gVisor, I'd encourage you to take a look at gvisor.dev, since there's a lot of cool information there about how this technology works and how to contribute, in case you're interested. And in case you want to learn more about Falco:
A: A book recently came out from O'Reilly that you can get, written by Loris Degioanni, the original author of Falco, about Falco and containers, to learn more about the technology. And regarding Falco itself, I don't think I need to add much more, since it's a CNCF project: you just go to falco.org and you'll find our community, on Twitter, and there you'll also find our Slack channel.
B: Sure. We've plugged gvisor.dev a few times already, but it's a great place to go: there's a getting-started guide if you just want to download gVisor and start experimenting with Docker, or if you want to integrate it into your existing Kubernetes deployments; all of those guides are there at gvisor.dev. There are also some really good architecture guides.
B: If you're curious about some of the things I talked about today and want to learn more, we've got a lot of documentation on gvisor.dev. All of our code, issues, and pull requests from the community go through GitHub, so if you search for "gvisor github" you will find it. And most of us hang out in the gVisor Gitter chat room, so that's another good way to get in touch, ask questions, and meet some of the engineers working on gVisor.