From YouTube: CNCF CNF WG Meeting - 2021-10-04
A
Hello, greetings folks. I will be leading this one today; fellow co-chairs are not on.
A
Welcome, folks. Did anyone else join? Please add your name and any agenda items to the meeting notes.
C
Yeah,
so
we
can,
depending
on
the
other
items
you
know
about,
10
minutes
should
be
good
for
a
quick
demo.
If
you
want
to
go
deeper,
we
can
certainly
showcase
more
use
cases
and
other
items
all
right.
C
I had a quick discussion with Victor as we were discussing different use cases. I have looked at some of the documents and other information; I'm not deeply familiar with them, but I am at a high level.
A
Right. Typically for longer demos we suggest putting them in the user group, but a 10-minute demo sounds good, and then for sure discussion; that's the main focus here. Is this your first time in this group?
B
No, I just invited Taylor.
B
I guess Kyverno is a great tool, and it would at least be nice to be aware of the benefits and all the efforts they are doing there. It's the first time Jim has come here, so I guess he can present that and share a few things they have, and eventually maybe we can do another, deeper demo in the future. For now, I guess 10 minutes showing the minimal or basic ideas is good enough.
A
Yeah, sounds good. Thinking about the focus, just a quick overview for you, Jim: the CNF working group (CNF stands for cloud native network function) focuses on identifying best practices, and the use cases around those, for networking applications, so that we can see how they can best run in a Kubernetes environment.
A
The run-as-non-root part of your pod security demo is relevant to a whole area on the security side that we've been looking at, specifically the principle of least privilege. One of those practices is running your processes as non-root, and that's actually one of the things we're going to be talking about today; that's a goal. So if there are use cases you know of that would be useful to illustrate...
A
"Here's why you should run as non-root, here's why you should do these other security things," or whatever, would be good contributions from the working group. Also areas that are problematic: "here's an area they're thinking about over in the Plumbing Working Group, or SIG Testing, or wherever," where you point out they're trying to work on it but they're having some problems.
A
Those
are
also
area,
those
kind
of
gaps
and
then
again
the
end
goal
is
to
be
able
to
share
here
are
best
practices
that
we're
trying
to
get
everyone
to
adopt
for
the
platform
and
applications
a
another
initiative.
That's
related
that
I
think
I
don't
know
enough
about
this
project
that
you're
going
to
demo
to
say,
but
the
cnf
test
suite
it's
actually
focused
on
creating
tests
that
try
to
check
practices
and
how
they're
how
things
are
deployed
and
how
it's
running,
how
it
works
in
run
time.
A
So
this
would
be
you
know,
deployment
onboarding
of
new
applications
as
well
as,
like
the
second
second
ongoing
life
cycle
management
type
items.
So
it
goes
across
the
board,
so
it
we
that
initiative
actually
use
various
tools
like
falco,
if
you're
familiar
with
that
and
opa
and
other
things
as
part
of
it.
Okay,
the
testing,
so
this
might
be
something
that
could
even
be
used
there,
and
we
could
talk
about
that
perfect.
A
All right, I'm having some problems; this just came up. Bill, are you available if my screen dies? Or Lucina, somebody that could help if I can't load something? Essentially, right now I'm okay. So, this best practice: we've been working on a whole set of use cases and we have a bunch of write-ups around this. I'm not going to go back through them, but if folks want to look, you can see them in the notes.
A
Resolving issues: we're now down to a point where everything has been resolved. Some of this is pulled in just because it's from the main branch, so these are typos and stuff; we can ignore that. Here's the main best practice.
A
So where something does break through, if it happens, which is likely at some point, how can we stop it? The non-root user is part of it. Consider compromised updates: maybe updates come directly from a vendor, or maybe there's some centralized registry where everybody's pushing updates.
A
Maybe
the
registry
itself
was
compromised
and
we
actually
pull
down
compromise
code,
maybe
the
images
or
there's
if,
if
someone
has
done
more
of
a
lift
and
shift
and
the
application
is
not
deploying
new
images
for
updates,
instead,
it's
doing
an
update
within
itself,
which
wouldn't
be
a
good
practice
in
of
itself.
But
if
it
was
doing
that
and
it
wasn't
using
root,
then
it
could
limit
some
of
those
things
and
there's
several
other
user
stories.
A
I'm
not
going
to
try
to
pull
these
up
or
maybe
I'll
I'll
see
if
it
loads.
No,
I
don't
think
it's
going
to.
Let
so
y'all
can
click
through
this
if
you
want
to
check
them
out,
but
we
have
a
bunch
of
user
stories
written
out
and
a
bunch
of
references
going
to
different
places.
Talking
about
why
this
is
a
good
idea,
including
the
cncs,
tagged
security,
white
paper
and
a
bunch
of
papers
from
different
other
folks.
B
Just one thing about that one: I noticed that you haven't rebased the latest changes, so at least for me it was a little bit hard to distinguish the difference from the previous version, or what this PR is adding.
A
I'm going to try to pull it back up; it's having a hard time. If someone else wants to pull it up and share, you can do that and I can walk through it. But we were actually at a point where everything was resolved and ready to go, except for the user stories.
A
So
everything
else
that's
been
updated
has
been
typos,
spelling
or
somehow
I
think
either
in
a
rebase
or
emerge
we've
pulled
stuff
in
from
maine
yeah.
It's
I
don't
know
what's
happening
with
my
dns,
but
we've
pulled
stuff
in
from
maine.
A
So here we go. This workflow, I don't know why it's showing here; let me pull it over here.
A
So
that
it
should
be
a
no
op
whenever
it
goes
on
to
the
shouldn't,
do
anything
on
the
main
when
we
merge
these
in
git,
ignore
so
ignoring
the
dictionary.
This
is
probably
already
on
main
and
the
rebase
pulled
it
in
spell
check
so
that
also
shouldn't
matter.
This
read
me:
that's
a
spelling
update
that
someone
did
this.
One
is
also
a
grammar
issue
on
the
existing
process,
doc
document
I'm
going
to
skip
this
one
for
a
minute.
We
can
go.
Look
at
some
of
the
others
again,
spelling
on
the
main
readme
glossary.
A
This is one of the gap action things; I think this is more spelling on an existing use case.
A
Spelling
on
another
use
case,
this
is
the
newest
and
one
of
the
older
actually
use
cases.
The
onboarding
use
case.
It's
just
a
full
ad.
It's
already
been
added
to
main.
So
again,
this
is
something
where
the
pr
is
messing
up,
because
it's
not
it's
not
going
to
actually
be
an
update.
It's
is
already
there
supply
chain
attacks,
so
this
is
new.
This
is
the
user
stories.
A
It
looks
like
we
have
some
spelling
errors
there.
It
is,
I
see
it.
I
think
victor,
would
you
mind
doing
like
a
a
commit
request,
suggestion
for
those
sure
yeah
again,
I
can
do
that
right
now,
where
it
says
kana
tyner
instead
of
container.
A
That one is the main thing, and then this section right here adds the user story. The rest of the pieces in here were either spelling fixes or specific changes requested by folks in the comments. I probably can't see it here, but if we look at the conversation, there was some stuff that Randy suggested, and those were accepted changes.
A
She
suggested
several
things:
those
have
been
accepted,
ponchai
made
suggestions
and
those
were
included
if
we
show
like
very
minor
things
on
the
central
system,
and
actually
this
user
story
got
deleted.
So
it
doesn't
really
matter,
but
there
were
several
things
like
that
on
essentially.
A
Like
this
notice,
here's
a
question
that
shouldn't
be
there:
okay
yeah.
This
was
a
comment
that
somehow
made
it
in
the
code,
so
we
deleted
that.
A
But
these
were
the
minor
changes.
Most
of
the
most
of
this
was
all
done
all
the
way
back
in
july
july
and
august,
and
then
the
user
stories
was
what
we
were
waiting
on
and
those
came
in.
A
So
I
think
that's
it
victor
and
you
can
I
just
refresh
you
can
do
a
a
new
review.
A
And
it
was
pointed
out
that
it's
harder
to
review
because
of
the
rebase.
So.
B
I was just wondering, in the interest of time, if Jim can start presenting.
C
Right, okay, that sounds good, thank you. Let me just do a very quick introduction. In fact, this presentation, which I'll share a few slides from, is something I did at OSS Summit just last week, on Kyverno.
C
I also act as a co-chair in the Kubernetes Policy Working Group and a track lead in the Multi-Tenancy Working Group, and of course I participate in various other forums like TAG Security, etc. And I'm a co-founder and the CEO at Nirmata.
C
Just a few things on Kyverno; I'll jump around. I'm not going to go through the whole architecture, because this was an hour-long presentation; there should be a recording up in a few weeks, and if you're interested I can share that with the team. But talking about the motivation for Kyverno and what we're trying to solve with this project: first off, in Kubernetes, policies are of course becoming critical.
C
As
you
know,
the
complexity
of
kubernetes
and
not
just
kubernetes
but
extensions
being
built
on
kubernetes,
continues
to
grow
right.
So
what
we're
trying
to
do
at
caverno
is
to
bring
a
very
kubernetes
native
of
a
to
policy
management
right
and
obviously,
given
that
our
tagline
is
kubernetes
native
policy
management,
it
begs
the
question:
what
does
that
even
mean,
and
why
does
it
matter
so
one
way
of
kind
of
you
know
picturing.
That,
and
thinking
of
that
is
there's
several
tools
which
you
know
talk
about
being
kubernetes
native.
C
For us it means Kubernetes idioms: things like pod controllers, knowing what pod admission control means, and how we complement and extend that to provide better security and automation tools. That's what we're trying to solve with Kyverno: make it really simple and very native to Kubernetes in how policies are written, how policies are managed, and even how policy reports are visible in Kubernetes itself. We'll see that in the quick demo I'll do, but just to quickly explain, I'll skip past some of this.
C
Some
of
this
feel
free
to.
Let
me
know
if
there's
things
you
know
you
want,
if
there's
questions
etc,
we
can
go
back,
but
just
how
kiverno
works.
It
works
as
a
admission
control
time
webhook,
so
it
integrates
and
I'll
go
through
a
quick,
install,
it's
very
simple,
to
bring
up
on
test
clusters
or
production
clusters.
C
It
supports
full
mode.
For
you
know
if
you
have
larger
clusters,
but
it's
very
simple
to
get
started
and
install
it
plugs
in
as
admission
control
mutating
as
well
as
validating
web
books.
It
starts,
starts
receiving
then
policies
or
you
know,
admission
requests
and
based
on
your
configured
set
of
policies
which
are
just
kubernetes
resources
themselves.
C
It will either validate and block, or mutate, and it can also generate new resources on the fly, which brings up quite a lot of interesting use cases. For example, when you deploy a new workload you might want to mutate the pod, and it doesn't have to be just for pod security: you could automatically inject or override a security context, inject sidecars, change things like network settings on the fly, or create completely new resources. For example, if you're running a service mesh and every time a service is created you want to generate certs and create an Istio resource, those types of things are common use cases we're starting to see in the community. So, to explain how a policy works: there are different kinds of rules in Kyverno, and every policy has a set of rules.
C
Each rule has a match or exclude block, and that lets you do some fine-grained logic on which resources to match and which namespaces; you can even match by user roles, labels, things like that. Then, once you've matched a set of resources, you can run rules to mutate, to validate, or to verify images and image signatures.
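That rule anatomy can be sketched as a minimal Kyverno `ClusterPolicy`; the policy name, rule name, and the label check here are illustrative placeholders, not the policies from the demo:

```yaml
# Sketch of a Kyverno policy: each rule pairs a match (or exclude)
# block with a mutate, validate, generate, or image-verification action.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: example-policy            # hypothetical name
spec:
  validationFailureAction: audit  # report only; "enforce" would block
  rules:
    - name: example-rule
      match:
        any:
          - resources:
              kinds:
                - Pod
              namespaces:
                - test            # optional: restrict by namespace
      validate:
        message: "The label `app` is required."
        pattern:
          metadata:
            labels:
              app: "?*"           # require a non-empty app label
```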
C
So
I
think
earlier
hillary
mentioned
supply
chain
security.
So
that's,
of
course,
something
that
requires
admission
controls
to
complete
that
end-to-end
security
in
a
posture
and
some
of
the
work
that's
going
on
with
other
communities
like
six
store,
we're
integrating
cosign
with
kiverno
to
also
be
able
to
verify
image
signatures
from
any
oci
compliant
registry,
and
then
you
can
of
course
validate
which
can
either
be
just
blocking.
C
So
if
you,
if
something's
non-compliant,
you
can
block
in
production
clusters
or
you
can
report
an
audit
in
dev
test
clusters
and
you
can
have
a
mix
of
this
based
on
policies
or
even
based
on
name
spaces.
As
you
wish,
and
then,
like
I
mentioned
another
powerful
use
case,
is
generating
resources
itself
right.
So
this
allows
you
to
automate
a
lot
of
things
which
previously
required
custom
admission
controllers,
we're
seeing
more
and
more
use
cases
and
even
simple
things
like
if
you
want
to
deploy.
C
So
with
that,
let
me
dive
into
the
demo
and
I'll
show.
You
know
some
on
the
kiverno
io
website
and
you
know
we
have
a
whole
bunch
of
sample
policies
today,
we'll
just
look
at
the
pod
security
policies
and
but
there's
several
other
best
practices,
for
example,
using
immutable
label
tags
right
and
not
not
using
something
like
latest
now.
It
seems
harmless
to
use
latest
and
of
course,
it
gets
used
quite
a
bit
in
dev
test.
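The immutable-tag practice mentioned here is one of Kyverno's published best-practice samples; a sketch along the lines of the `disallow-latest-tag` sample (the field values are an approximation, not YAML shown in the meeting):

```yaml
# Reject container images that use the mutable ":latest" tag.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  validationFailureAction: audit
  rules:
    - name: validate-image-tag
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Using a mutable image tag such as ':latest' is not allowed."
        pattern:
          spec:
            containers:
              - image: "!*:latest"   # any image except one tagged :latest
```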
C
This, if you're not familiar with it, is a very key document that is driving things: the pod security standards. PSPs were one implementation of the pod security standards, but now there are other implementations like Kyverno and OPA Gatekeeper, as well as the upcoming pod admission controller, which will do label-based settings at a namespace-level granularity. I believe that's targeted for version 1.25.
C
But
if
you're
using
something
like
a
policy
engine
like
evernote,
you
just
get
a
lot
more
flexibility
and
how
you're
managing
these
profiles
and
how
you're
applying
them
across
your
workloads
name
spaces
as
well
as
you
can
do.
Of
course
you
know
security,
not
just
for
pods,
but
for
other
things
like
making
sure
other
best
practices
like
running
a
read-only
root
file
system
is
not
one
of
the
psp
policies,
but
that's
also,
you
know,
considered
a
best
practice
and
a
good
security
standard
to
apply
so
anyways.
C
There's
several
ways
to
install
kiberno,
I'm
just
gonna
use
the
in
a
command
line:
option
three
ammos.
So
this
will,
you
know,
pull
down
a
set
of
yamls
and
it
will
run
qrno.
Just
to
kind
of
you
know
show
I
just
brought
up
a
new
mini
cluster
and
I
have
I
think,
one
namespace.
You
know
that
I
created
oops,
let's
say
get
namespace.
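The install step he runs is, roughly, applying Kyverno's released manifests and then checking what's on the cluster. The URL below follows the pattern in Kyverno's install docs, but it is a reconstruction, not the exact command from the demo; pin a specific release in practice:

```shell
# Install Kyverno from its released YAML manifests
kubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/main/config/install.yaml

# Verify: the kyverno namespace and its pod should appear
kubectl get namespace
kubectl get pods -n kyverno
```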
C
I created a test namespace which is running an nginx pod, but that's all I have on this cluster. So the first thing I'm going to do is pull down these YAMLs and install Kyverno. It comes with a set of custom resources which allow you to define policies, and there's also a policy report resource which, by the way, is now being used by Falco, by kube-bench, and a few other projects. So more and more projects are generating policy reports in the same manner, which allows for some standardization and reuse. All right, if I do `get namespace` now, we should see Kyverno running. If we do `-n kyverno` and a `get pods` (I don't know why I keep saying "get test" here), we have a pod which is up and running, so Kyverno should be ready at this point.
C
Get
rid
of
that
okay,
so
everything's
good!
It
says
that
you've
configured
its
own
web
hooks,
which
is
what
it
needs
to
start
receiving
policies
and
things
like
that.
But
if
I
now
do,
you
know
get
cluster
policy,
I
don't
have
any
policies
installed
at
the
moment
right.
So
it's
saying
you
know
if
I
do
so.
This
is
our
the
resource
which
has
you
know
which
we
will
now
install
some
of
these
pod
security
policies.
C
What this kustomization will do is actually set these pod security policies to enforce, instead of audit, which is the default mode they're configured in. This means that if I now try to create an insecure pod, it should get blocked by default. To test this, there's a site I use for testing insecure pods, by a company called Bishop Fox in the security space.
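Switching the sample policies from their default audit mode to enforce, as described, comes down to overriding one field per policy; a sketch of the kind of patch such a kustomization would apply:

```yaml
# audit:   violations are only recorded in policy reports
# enforce: non-compliant admission requests are rejected
spec:
  validationFailureAction: enforce
```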
C
They
have
the
site
called
bad
pods,
which
shows
you.
You
know
like
pods
running
with
the
host
name
spaces
with
runners.
You
know
root
like
several
other
things
are
not
configured
correctly
in
the
pot
right
and
you
can
grab
like
a
deployment
or
a
daemon
set,
or
things
like
that
I'll
go
with
the
deployment
in
this
case,
we'll
go
for
the
raw
ammo
and
I'll
grab
this
one
right.
C
The policy is written just on the Pod resource, but since Kyverno is designed for Kubernetes, it automatically knows to apply these policies to Deployments, DaemonSets, any pod controller you run. Even if it's something custom, like an Argo deployment, which is a custom pod controller, it will recognize it and apply the policy correctly. But this is what the policy looks like.
C
You
know
again
we're
matching
on
the
resource
pod
and
then
we're
checking
in
the
security
context
and
we're
saying
run
as
an
unroot.
So
if
security
context
this
this
means
this
declaration
means
that
if
security
context
is
configured
run
as
non-root
should
be
true,
and
similarly
here
we're
checking,
you
know
for
init
containers
and
then
we're
also
doing
the
checks
both
in
the
pod
spec,
as
well
as
the
container
spec
right.
So
that's
really,
you
know
how
simple
it
becomes
again
to
configure
and
run
these
policies.
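The check being walked through corresponds to Kyverno's run-as-non-root pod-security sample; a simplified sketch (the full sample also covers init containers), using Kyverno's `=()` conditional anchor, which means "if this field is present, it must match":

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root
spec:
  validationFailureAction: enforce
  rules:
    - name: run-as-non-root
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Running as root is not allowed; set runAsNonRoot to true."
        pattern:
          spec:
            =(securityContext):        # pod-level check, only if configured
              =(runAsNonRoot): true
            containers:
              - =(securityContext):    # container-level check, only if configured
                  =(runAsNonRoot): true
```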
C
One
other
thing
I
can
just
quickly
show
is:
if
we
do
get
policy
report,
minus
minus
all
or
let's
try
minus
a.
I
can
see
that
for
my
existing,
you
know
pod,
which
I
was
already
running
now
it's
generated.
You
know
some
policy
report
and
if
you
look
at
that,
you
know,
let's
just
do
minus
so
yaml
I'll,
see
all
the
details
of
what
you
know
passed
and
what
failed
right.
So
actually
this
is
in
the
namespace
test.
C
So
I
need
to
do
that
and
it
shows
me
every
workload
where
you
know
which
and
every
rule
that
it
applied
which
one's
passed
which
one's
failed
and,
of
course
all
of
this
can
be
collected.
There's
other
open
source
tools,
there's
ways
to
get
this
into
prometheus.
You
know
so
there's
several
ways
to
kind
of
report.
This
information.
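The report inspection shown amounts to querying the PolicyReport custom resources; a sketch of the commands as described (exact output will vary by cluster and policy set):

```shell
# List policy reports across all namespaces
kubectl get policyreport -A

# Show pass/fail details for the reports in the test namespace
kubectl get policyreport -n test -o yaml
```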
C
In
fact,
I
think
I
have
some
slides
on
that
here,
which
I
was
showing
the
default
dashboard
as
well
as
there's.
You
know
a
policy
reporter
project
which
can
show
up
this
information
graphically
as
well
right,
so
lots
of
interesting
things,
but
you
know,
let
me
pause
there,
I
think
and
see
if
there's
any
questions.
Otherwise
we
can,
you
know,
keep
the
demo
short
for
today
and
certainly
happy
to
follow
up
with
more
details.
D
I have a question. Very interesting stuff, and of course I'm always supportive of anything that brings us toward policy-oriented orchestration; I really think that is the future, and we'll keep seeing more and more policies being used in lots of areas. I'm thinking of the topology operator for Kubernetes as well, with policies for placement. But my question is: maybe you could talk more about how this would directly relate to CNFs, and where you see it being especially important in telco.
C
Right, yeah. One quick thought is just making it easy to automate as you're doing testing and validation; certainly this can also be integrated into your CI/CD pipelines. It's a simple set of policies which can be managed through GitOps or any other solution you wish. The other thing which may come up, and which we often get asked, is how this compares to OPA Gatekeeper, which does perform a similar role. The main difference is how you author policies. But another powerful use case that Kyverno enables, which OPA Gatekeeper doesn't, is being able to generate resources. If you want to create policies for workloads, you can in fact have policies to generate policies, and policies to distribute common elements and set up different things, which really helps in decoupling.
C
You
know
that
that
the
creating
that
separation
of
concerns,
decoupling,
what
the
developers
have
to
do
from
what
the
operators
have
to
do,
which
is
a
fundamental
problem
right
now
in
kubernetes
and
scaling
kubernetes
right.
In
fact,
in
this,
I
think
I
have
a
slide
year
where
which
it
talks
about.
You
know
just
policies
acting
using
those
as
a
contract
and
helping
decouple
what
developers
care
about
what
security
cares
about
and
what
operations
care
is
about
is
where
I
think
caverno
can
help
quite
a
bit.
E
One area you could probably help with in this scenario: we have policy on networking and similar, and I know you can help with that if the networking is through a Kubernetes-aware CNI. But one of the things we see in the networking and telecom service provider space is that there are secondary networks that may not have the same set of policies, or may be unaware of Kubernetes, but end up as secondary interfaces within pods.
E
And
if
you
have
a
way
to
help
with
where
the
policies
that
are
there
can
be
rendered
into
the
appropriate
sdns,
so
that
the
policy
can
persist
regardless
as
to
whether
which
direction
it's
that
information
is
coming
in
in
from
or
to
help
with
the
control
of
that
to
say,
like
what
systems
should
be
able
to
connect
to
each
other
or
not
be
able
to
connect
to
each
other
based
upon
a
set
of
rules
could
be
very
valuable.
C
Yeah,
in
fact,
victor-
and
I
were
discussing
that
use
case
right
so
from
my
understanding,
it
seems
like
what's
necessary,
is
based
on
the
cluster
configuration
your
your
developer
or
the
author
of
the
cnf
may
not
know.
You
know
how
the
cluster
is
configured
to
operate,
so
you
probably
want
to
inject
some
of
these
settings.
Add
mission
controls
and
it
could
vary
based
on
where
the
cnf
actually
ends
up
running
victor,
not
not
sure
if
you
want
to
add
anything
else
to
that
or.
B
No,
no
you're,
correct,
yeah
yeah
one
one
use
case
that
we
were
talking
about
was
the
usage
of
nsm
but
yeah,
for
example.
Multis
and
anon
can
also
include
a
few
few
modifications
or
validations
in
the
decoration.
B
I
mean
I
like
this,
this
particular
product,
because
it's
it's
it's
part
of
the
the
cncf,
and
I
guess
it's
proposing
a
another
cloud
native
way
to
to
do
the
things
or
more
like
kubernetes
way
so,
and
do
you
mention
I
don't
know
you
mentioned
like
there
are
a
few
default
policies
that
you
can
also
take
advantage
of
them.
C
Yes,
so
so
this
this
policy
library,
so
there
are
the
pod
security
policies,
but
there's
also
like
policies
you
can
have
for
you
know,
generate
mutate
validate
like
we,
of
course,
with
every
namespace.
You
know
it's.
It's
always
good
to
have
a
default
network
policy,
but
these
can
be
also
customized
based
on
the
deployment
or
whatever
needs
to
be
done
like
we're
talking
about
like
injecting
a
secondary
network,
and
you
know
configuring
the
pod
for
that
right.
So
I
think,
there's
a
there's
a
use
case.
C
Let
me
see
if
I
can
find
that
which
it's
kind
of
similar
to
injecting
a
sidecar
in
some
ways
right.
So
it's
a
fairly
elaborate
sort
of
in
this
case
here,
we're
you
know,
kind
of
checking
for
certain
things
and
creating
a
new
container
and
as
well
as
the
init
container,
based
on
you
know,
the
policy
setting
itself
so
but
yeah
in
terms
of
defaults
would
highly
recommend.
If
you
know-
and
I
know
the
team
has
already
been
looking
at
the
running
as
non-route,
but
starting
with
pod
security.
C
There's
several
other
controls,
which
are
you
know,
part
of
the
pod
security
standards
so
certainly
enforcing
those,
because
there's
no
typically,
you
know
most
pods,
don't
need
to
run
with
higher
privileges.
Much
more
spots,
don't
you
know,
shouldn't
be
using
non-default
volume
type
shouldn't
be
using
hostpath
privilege,
you
know
kind
of
having
again
requiring
escalated
privileges
host
namespaces.
All
of
that
can
be
blocked
by
default
right.
B
And
also
freddie,
the
use
case
that
you
are
also
mentioning
about
people
using
multus.
Maybe
another
possibility
could
be
adding
a
validation
which
ensures
that
you
have
predefined
the
additional
network
in
multus.
So
if
someone
is
trying
to
use
an
existing
network
yeah,
you
can
I'm
pretty
sure
that
you
can
cache
all
these
things,
because
at
the
end,
it's
just
like
a
single
annotation.
So
I'm
pretty
sure
that
kiberno
can
catch
and
do
some
logic
to
ensure
that
that
network
exists.
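That Multus check could be sketched as a validation on the pod annotation. The key `k8s.v1.cni.cncf.io/networks` is Multus's documented network-selection annotation, but the policy body and the approved network name are hypothetical, not a policy from the meeting:

```yaml
# Hypothetical sketch: only allow pods to attach a pre-approved
# secondary network via the Multus networks annotation.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-multus-networks    # hypothetical name
spec:
  validationFailureAction: audit
  rules:
    - name: allow-only-approved-network
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Only the approved secondary network may be attached."
        pattern:
          metadata:
            =(annotations):
              # if the annotation is present, it must reference the
              # pre-approved NetworkAttachmentDefinition (placeholder name)
              =(k8s.v1.cni.cncf.io/networks): "approved-net"
```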
E
Yeah
and
it's
it's
an
issue
not
only
in
multis,
but
it's
like
you,
land,
an
interface,
and
I
could
even
use
network
service
mesh
as
an
example.
You
land,
an
interface
here
and
both
of
them
have
some
level
of
control
as
to
like
who's
allowed.
To
put
the
interface
there,
there's
there's
a
a
portion
there
that
that
could
be.
E
That
could
be
bound
against
there's
a
little
bit
more
flexibility
in
network
service
mesh
in
terms
of
in
terms
of
how
the
policy
can
get
injected
and
enforced,
but
neither
neither
one
of
these
has
the
has
the
component
that's
already
built
in.
That
is.
How
do
you
actually
program
the
sdn
itself,
like
maybe
you
have?
Maybe
you
have
certain
rules
that
need
to
be
within
the
sdn
once
something
is
set
up?
E
How
do
you
ensure
that
those
rules
have
been
rendered
into
the
into
the
sdn
itself,
and
so
those
those
particular
types
of
things
would
be
would
be
useful
in
both
the
nsm
and
and
multis
solutions.
Because
then,
if
you
could
define
what
those
rules
look
like
here,
then
you
can
render
them
into
each
environment
and
ensure
that
they're
getting
applied
consistently
across
the
across
the
board.
E
Oh
they're
they're,
not
ex
they're
not
expressed
at
all,
is
the
point
is
so
being
able
to
like
there.
There
was
some
literature,
I
don't
know
if
it
ended
up
in
the
in
the
cntt
path
where,
if,
if
you
are
adding
a
secondary
interface,
you
have
to
make
a
decision.
This
are
you
respecting
the
kubernetes
policy
contract
or
are
you
not
and
if
you
are
in
other
words,
are
you
exposing
a
faster
kubernetes
compliant
path,
or
is
it
a
non-compliant
path
and
we
and
that
distinction
was,
was
made
necessary?
E
Because
if
it's
non-compliant,
then
you
have
to
rely
on
the
sdn
and
additional
configuration,
and
you
want
to
make
it
explicit
to
the
person
who's
configuring
it
that
they
have
to
pay
attention
to
this
if
it's
compliant
to
kubernetes
you're,
just
providing
a
faster
path
like
maybe
I
have
a
web
application
that
needs
faster
access
to
a
storage
system.
That
storage
system
is
exposed
in
kubernetes
and
that
you're
basically
providing
a
faster
kubernetes
path.
E
Then
what
you're
doing
is
you're
saying
the
sdn,
has
awareness
of
the
cluster
and
is
able
to
monitor
the
policies
and
is
able
to
render
those
policies
regardless
as
to
where
the
regardless
as
to
whether
you're
taking
the
slower
path
or
the
or
the
accelerated
pass.
E
But
that
was
a
distinction
that
was
added
at
that
particular
level.
But
there's
still
that
issue
about.
If
you,
what
rules
do
you
want
to
apply
to
the
to
the
secondary
networks
that
are
non-compliant
to
kubernetes
policy
and
being
able
to
just
to
match
like?
I
have
these
particular
things
that
I
want
to
have
these
type
of
connections
with
and
being
able
to
then
set
something
up
where
you
could
eventually
interpret
that
into
the
appropriate
sdn
like
this?
C
Yeah, so I'm happy to help explore and write out some of these policies. We are fairly active within the Kyverno community, helping users with different use cases. For example, here you see policies for cert-manager; there are domain-specific and other policies. Flux, one of the GitOps controllers, is also using Kyverno for multi-tenancy, and OpenEBS is using Kyverno as well for pod security, so several projects are starting to adopt it. I would love to work on CNF-specific policies, explore some use cases, and help advertise what Kyverno can do.
A
Yeah, please reach out. I'd like to talk to you about how you can get more engaged in writing up some of those best practices you're showing as the baseline. Maybe we could talk about use cases, which would make it relevant to the folks in the networking and communication service provider space, and I can talk with you maybe about the test suite. "Sounds great."