From YouTube: Layer5 Community Meeting (Feb 19th, 2021)
A
Sorry, I didn't have my headset on. Oh no, yeah, I missed Bugwadia... there he is. How are you? We... I've got electricity, at least for the moment. I was gonna ask about that, yeah. I know it's been... it seems like a rough...
A
Yeah, we've... well, a lot of the people on this call have already heard me complain about it, so I'm gonna keep my complaints to a minimum. It's just... it's just so easy to complain, that's the thing. But yeah. Dare I ask how the weather is out there in, you know...
B
No, I haven't looked outside. It's a little bit cloudy, no sunshine yet, but probably later in the day. I can't complain.
A
That's good, yeah. Nice, all right, very good. Jim is with us, okay. I get to harass Jim publicly; any chance I get, I'll do that. That's not a problem.
A
Oh, very good, okay! Well, we've got about 10 or more folks on at the moment. We should probably have about twice that or more, I expect. So in the meantime... I think all of you who are on, except for Mr. Kolansky... Bart, there he is. Bart, this can't be your first time on? Hello, there he is. Hey Bart, hey, how...
C
Are you doing? It is my first time on, actually.
A
Okay, all right, okay. Bart, while we're sort of waiting for others to filter in, there's a tradition, and it is to say hi to everybody and give a brief introduction, if you would, kind of.
C
I would honestly rather not, no, but... so, well, my name is Bart. I've been coding for about 12 months. I'm being made redundant, due to the company that I work for moving operations to America, so I've been kind of looking to change careers. About six months ago I wanted to join in with Hacktoberfest and started to help out with the Layer5 stuff. I've been enjoying it so far, but yeah, it's nice to get involved in a Zoom call. And hello, everyone. Nice.
A
For the... I guess, for the culturally non-diverse: educate the rest of us on the accent, on where you're...
C
So I was born in Poland, but I grew up in the UK, so my accent's mixed. I grew up in the south of England and I moved to the north of England for university. So it kind of goes all over the place, but...
A
Nice, nice, good. I don't know if you... Bart, you may know this already, but just words of the wise: here in the US, I think... I think Jim might back me up on this... if you have a British accent, it's an automatic 10-point add on your IQ.
A
Nice, good. Bart, it's kind of funny to have you introduce yourself, because you've been in the community for quite some time. Actually, great day for you to join the call, because... well, because it's something that you know very well, which is the Layer5 website, the one that you guys have been working on for... I don't know, for... oh wow, was it July? June, July? We started talking about Gatsby and switching off of Jekyll, and hey.
A
It only took 155 contributors and six months, but thanks to Bart and others... it's obligatory: if you haven't seen the site lately, like in the last 12 hours, then you haven't seen the site. It's just... what a marvel.
A
There have been a few people who've been here in the trenches on this effort for a while, and I'm seeing a lot of their names on here. I hesitate to even call them out by name, because there are 150 others that have contributed, but... if I can say so, other than the occasional picture of this freckle-faced individual, it looks pretty fantastic.
A
It's looking amazing. Go check out your profiles on this thing. This is funny: I was just chatting with Ed two minutes before the call; he's out here in Austin as well. That picture is from one of our meetups, actually. We've got the usual suspects here, but go check it out. I think you all should be patting yourselves on the back. I think you've really got something here.
A
Well, you know what we're missing, though? I don't see a Kyverno picture on here. I don't know... not yet. Nice, oh good, okay. Well, one other announcement, I think, and then we're gonna talk... we're gonna hear from Jim a lot today, hopefully. The other announcement that we have: we talked last time we met.
A
We talked about... well, actually, before we do that: welcome, everybody who's new, who joined the community since last week or since the last time we met. Anyone other than Bart on this call for the first time?
A
And... nope, I don't think so. So last time we met, we talked about the notion that there's a handful of us delivering a workshop at IstioCon on Monday, and one thing that I didn't note that I should is that if any of you are... so, as we go to deliver this workshop at...
A
IstioCon, there's an open need for lab assistants: for folks who, like yourselves, either have familiarity with Meshery, because we'll be teaching people about Istio using Meshery like usual, or who have any familiarity with Istio, or just one or the other. Ping me, and let's get you into the workshop, hanging out as a lab assistant to help people. I recognize many of you. There's a lot of you that are probably thinking: oh, maybe I could stretch to help there.
A
Maybe I could, but you feel pretty uncomfortable, because you know that there's a lot of things you don't know.
A
No, here's a good chance. Here's a good chance to stretch and grow. Even if all you're familiar with is just the common pitfalls that people hit when they go to set up Meshery, that's enough. I mean, no doubt people will run into those pitfalls, and we should be taking more notes on what those pitfalls are to make sure that we overcome them. So, good. So that also means that... so this is happening.
A
I think it's at about noon Central time, which I don't think will... it won't impact our website call on Monday, so that'll still go on. As a prelude to the website call on Monday, I suspect Augustine will have... he's got a list that he's compiling of various tweaks to be made on the new website.
A
Okay, final item before we talk about Kyverno, and that is Manuel... Mr. Morejón. Manuel, can you hear me? Are you on the call?
A
Yes? Yes, I'm on the call. There he is. Good, good, good. Hey, it's a great day whenever a Docker Captain joins the community.
A
Manuel, if you would... and we made Bart suffer just a moment ago. Bart Kolansky, he's on the call, and he had to introduce himself because it was his first time on the call. So, Manuel, do you mind just doing a quick introduction, telling people about you a little bit?
D
Okay. Maybe you hear some noise, because I'm out of the house, I'm out of the office. That's why I'm not using my video.
D
...architectures, everything around containers, workflows around, say, CI/CD, continuous integration and continuous delivery. I'm part of the Docker Captain team.
A
Well, it doesn't... and I thought Bart's introduction couldn't be topped, with his English accent, but Manuel, that's beautiful. I just... welcome. I'm very, very pleased that you're joining. Thank you. That's awesome, yeah. And by the way, the background noise is almost non-existent. So nice, good. Okay, well, Jim, we just keep eating into Kyverno time, like we just...
A
Well, with that, yeah, let me be quiet. I've talked about Jim on the last... I think in the last couple of weeks now, I've mentioned that Jim has had the... well, the great pleasure of knowing me for a number of years now. He and I got to spend some time at Cisco together, and Jim went off and founded a company. Jim, do you want to tell people about Nirmata and Kyverno and...
B
Yeah, absolutely. And if I can share my screen, I'll also pull up a deck as we go through. So hi, everyone, pleasure to be here, thank you for having me. I'm Jim Bugwadia, co-founder and CEO at Nirmata. So, like Lee mentioned, a little bit about myself and our history and what we do at Nirmata, and then I'll dive into Kyverno. Like Lee mentioned, we started Nirmata pretty early in the cloud native space, so this was even pre-Kubernetes, for those of you who remember such a time.
B
So this was when Docker was just emerging out of dotCloud, and our whole thinking, and what we do at Nirmata, is to help enterprises manage cloud native applications. My background is in operations and management.
B
I built network-management-type systems in various domains, like device networking, telecom, wired and wireless communications, things like that. And what we saw, of course, is that every enterprise is becoming a digital enterprise.
B
So that's what Nirmata does, but here what I want to talk about is one of our CNCF projects, which we're super excited about. There's been a lot of great adoption in the community, and I think the one interesting thing about it, which I'll discuss with Kyverno, is how it also intersects with other projects and other things happening in Kubernetes. We're already partnering with other communities like Data on Kubernetes, with OpenEBS and other technologies on the storage side; we're in discussions with even cert-manager, for identity management and certificate management for workloads; and another very interesting area, where we want to explore and understand the use cases and see how Kyverno may fit in, is with service mesh.
B
And managing different aspects, whether it's security related, or even automation, and other concerns which cluster admins may have with installing and operating service meshes. So anyway, let's dive into Kyverno. I'll go through a few slides to just explain what it does, we'll look at a hands-on demo or two, and then we can just have an open discussion and conversation. Okay. So, why Kyverno? Why policies?
B
So let's just start there. First off, Kyverno means "govern" in Greek, so we thought it's a good fit, given the theme with Kubernetes: Kyverno is about governance for Kubernetes. That's as opposed to, for example, OPA Gatekeeper, which a lot of you may have heard of. It's another very popular policy engine, which was designed as a general-purpose policy engine, so it works with other systems as well. So there are some interesting trade-offs to consider.
B
Both can do very interesting things, but it's a trade-off as you're evaluating and selecting. So before going deeper into Kyverno and what it does, it's important to understand: why bother with policies at all?
B
You know... the better the outcomes. So, Kubernetes configuration tends to be fairly detailed and complex to some degree, and configuration also tends to address many different concerns. If you think about a Kubernetes cluster, or any Kubernetes resource: you have the operator view of things, you have the workload owner or developer view of things, and then perhaps you have others, like auditors and people who are dealing with security. So you have several different views of a single resource, and configuration and status have to cross-cut across all of that.
B
This adds another layer of complexity and another layer of challenges when you're managing Kubernetes configurations. Policies are great at solving these problems: they can simplify what a developer needs to do versus what an admin needs to do, and they can also separate those concerns.
B
So with Kyverno, the approach we have chosen is to make policies very natural to Kubernetes admins: adopting the same declarative nature of Kubernetes configurations, making policies very easy to write and manage using standard tools, and also making policy results very easy to view, manage, and work with. And, of course, we want to make sure that we support not just the native Kubernetes resources but any resource.
B
Internally, Kyverno uses dynamic CRDs and dynamic informers and things of that nature to be able to deal with any Kubernetes resource, including custom resources that are not defined in the native API schema itself. In addition to that, one of the key approaches with Kyverno was to adopt all of the Kubernetes best practices.
B
But there is some learning and complexity, and it's not declarative in nature like Kubernetes is. So on the right is the same policy written with Kyverno, and as you can see here, once you look past some boilerplate in terms of matching resources and so on, the last few lines of the policy will look very familiar to anybody who's looked at a pod spec or a deployment in Kubernetes, because it again follows the same...
B
...the same constructs, the same patterns. What this policy is doing is checking that containers within a pod have a security context and that the read-only root filesystem setting must be set to true. Now, this is something that, in 99% of cases, most developers don't even need to know about, or may not be concerned with.
B
But if you don't specify this, you are vulnerable to certain types of security hacks, which is why it's important. Again, emphasizing the separation of concerns: for cluster admins, it becomes pretty important to be able to specify these sorts of rules within policies.
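A policy of the kind being described might look like the following sketch; the field names follow Kyverno's ClusterPolicy schema, while the policy and rule names are illustrative rather than taken from the slides:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-ro-rootfs          # illustrative name
spec:
  validationFailureAction: enforce # block non-compliant pods
  rules:
    - name: validate-readOnlyRootFilesystem
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Root filesystem must be read-only."
        pattern:
          spec:
            containers:
              # every container must set this flag to true
              - securityContext:
                  readOnlyRootFilesystem: true
```

The `pattern` block mirrors the pod spec itself, which is the point made in the talk: the validation logic reads like the resource it validates.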
B
The structure of a policy in Kyverno is fairly standard, and we hope it's pretty intuitive to anyone, because a policy simply consists of a set of rules, and you can choose to group them however you wish. Typically, just as when writing code you group related functions into a package, or methods, or whatever fits an object-oriented or procedural or other type of language, here you can group rules into policies.
B
Each rule has a match and an exclude block, which specify which resources you want that policy to operate on, and within those there's a lot of flexibility. You can use Kubernetes kinds, names, label selectors, namespaces; you can select namespaces for objects which are already created, user roles, groups, etc. So lots of different ways of choosing things. Then, once you have selected, and the policy and rule need to apply, the logic can either mutate, which means update or change the resource data, or you can validate.
B
With validate you can enforce certain best practices, and you can either choose to block things right away, or you can choose to allow the resources to be created and then report violations to the workload owners. And then you can also generate, and this is a very interesting use case, which is unique to Kyverno.
B
The thinking here, and some of the lessons learned for us working with several customers, is that policies are not just about validation and enforcement. The easiest way to think about what you want to do is that it's almost like IFTTT for Kubernetes.
B
So you want to say something like: if a namespace with the label "pci" is created, then any workload that runs in the namespace must have a node selector, and, by the way, if it doesn't have one, automatically inject a node selector with the label "pci" into that workload. Kyverno lets you do things like that and reach that sort of higher level of automation. A very simple example, of course, is with multi-tenancy.
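As a rough sketch of that if-this-then-that pattern, a mutate rule along these lines could inject a node selector into pods in labeled namespaces; the names and labels here are illustrative assumptions, not taken from the talk:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-pci-node-selector      # illustrative name
spec:
  rules:
    - name: inject-node-selector
      match:
        resources:
          kinds:
            - Pod
          namespaceSelector:
            matchLabels:
              compliance: pci      # assumed namespace label
      mutate:
        patchStrategicMerge:
          spec:
            nodeSelector:
              compliance: pci      # schedule onto matching nodes
```

Because the rule is a mutation, workload owners never have to add the selector themselves; the admission webhook patches it in before the pod is persisted.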
B
If you want to create namespaces... other examples, and this may be applicable to service mesh: when you're bringing up services, perhaps you want to, based on labels, based on annotations, have the creation trigger generate a set of defaults, or things like that. Another use case that we've seen with storage is, when a namespace is created with a label selector, automatically creating a backup schedule for something like Velero. So there are several examples being built in the Kyverno community, and we would love to hear more.
B
In terms of how the machinery works: Kyverno installs itself as an admission controller. When you install Kyverno, and we'll go through that live, it's a pretty simple process, either through a Helm chart or through the command line, and what it will do is install itself as an admission controller.
B
It will generate a self-signed certificate, then it will register with the API server, and what that means is that every API request that comes into the cluster will pass through Kyverno. Based on the cache of policies that you have defined, Kyverno can either validate, mutate, or generate configurations, and the output of that is, in the admission review response, you can block things, if you just want to reject insecure configurations; you can also generate, you know...
B
So once again, here's a simple policy definition, but we'll look at a lot of this live, so I'll just kind of skip through this. Actually, let me pause there and just see if there are any questions. I saw some messages in chat, but let me see if anybody has any other questions or topics on what I just went through.
B
So, just reading chat quickly, there's one question about whether it's a predefined set of rules, or rules for different workflows or different namespaces. There are some policy libraries that we're creating in the community, but Kyverno is customizable: as a cluster admin, you can change this, you can define your own rules, and you can apply those.
B
I think that's one of the reasons why... and I'm sure a lot of you have heard about some of the challenges with pod security policies and the fact that they are going to be deprecated in version 1.21: they're going to be marked for deprecation in 1.21 and then removed in 1.25. The reason is that they were a little bit hard to change and hard to manage.
B
Yeah, good question, because in Kubernetes, I think, user roles and groups are typically a little bit challenging to manage, since Kubernetes itself doesn't have a scope of users in itself. So Kyverno does not...
B
Admission requests will have the user information, and that user information could either be a service account or a username, which associates to a set of role bindings. Kyverno can process that, and you can write policies based on which user is making a request. So, for example, you might want to exclude certain rules for a cluster admin but apply them to users who don't have admin privileges.
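That exclusion can be expressed in a rule's exclude block. A minimal sketch, with the rule body and label requirement assumed purely for illustration:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-for-non-admins    # illustrative name
spec:
  validationFailureAction: enforce
  rules:
    - name: require-owner-label
      match:
        resources:
          kinds:
            - Pod
      exclude:
        clusterRoles:
          - cluster-admin          # skip requests made with this role
      validate:
        message: "An 'owner' label is required for non-admin users."
        pattern:
          metadata:
            labels:
              owner: "?*"          # any non-empty value
```

Requests whose role bindings resolve to `cluster-admin` bypass the rule; everyone else must satisfy the pattern.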
B
So that's something which is fairly easy to do, but the role itself, the user object, would still need to be defined externally, or you can use service identities or service accounts to manage the identities.
A
Jim, in order to define policies, does that require a cluster-admin privilege level, or... yeah?
B
So Kyverno supports both cluster-level and namespace-level policies. For cluster level, typically it would be a cluster admin, and you can configure this through RBAC, but it would typically be a cluster admin who would configure cluster-level policies. Now, it doesn't have to be the Kubernetes cluster-admin role, which is just a star-star-star type of role that allows everything, but it needs to be a privileged role that can operate at the cluster level.
A
And those... if you're going to get to this, don't answer it, but just to follow on that: are those...
B
Good question: no. The policies are designed so that you cannot override or change something at the namespace level.
B
If you define a cluster-wide policy, it applies to all namespaces, or based on the filters you define in the policy itself, so you can exclude certain namespaces, you can match others, etc., but you cannot override. It's like an AND of all the policies: you cannot change or override the behavior of a policy defined at the cluster level, but you can add to that and further extend the behaviors by defining new rules and new policies.
A
Makes sense. So it sounds like there's a security mindset to this, what's going on.
B
Yes, very much so. One of the first examples I'll show is how, if you're not using PSPs, which a lot of folks tend not to use because of their challenges, how simple it is to enforce pod security with Kyverno. Yeah. The other question I see here is: will rules consider dynamic changes? That's a great topic. The rules can be data-driven, and in fact, actually, let me pull up a more...
B
What I want to show is that, if you dig into writing policies, there's a whole section on variables and data sources, and the idea behind variables is, just like the question is asking, to make things dynamic and data-driven. There are predefined variables that Kyverno parses and populates, like service accounts and service account names.
B
There are variables coming from the admission review request, so things like the namespace, the object being operated on, etc., and you can also look up things within the policy. This example is just looking up and making sure that, for the readiness probe and the liveness probe, I guess it's just checking that the port number must be less than...
B
I don't know why this particular check would make sense, but if you want to, you can do things like this. The more interesting example is if you want to look at the request operation and, say, only operate on updates, or if you want to look at the object info that's being changed. All of that can be used in templates. Here's an example of a username. So, lots of different ways of driving that. In addition, for more advanced policies...
B
Kyverno can fully leverage ConfigMaps and other external data sources. In Kubernetes, of course, everything is driven by configuration: if you want to configure your services, you would typically use a ConfigMap. Similarly, the approach we have taken is to make ConfigMaps native to policies, so you can look up any... so, let's say there's a ConfigMap defined here.
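In Kyverno, that lookup is declared in a rule's context block and referenced with JMESPath-style variables. A sketch of the shape, where the ConfigMap name, key, and label are made up for illustration:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: use-configmap-values       # illustrative name
spec:
  rules:
    - name: label-from-configmap
      context:
        - name: dictionary         # variable name for the lookup
          configMap:
            name: policy-dictionary
            namespace: default
      match:
        resources:
          kinds:
            - Pod
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              # value resolved from the ConfigMap's data at admission time
              env: "{{dictionary.data.env}}"
```

Changing the ConfigMap changes the policy's behavior without editing the policy itself, which is what makes the rules data-driven.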
B
So this, for example, instead of "env", could be the namespace name of the object you're operating on, which will pull up the value that you want for that environment variable. So yeah, lots of flexibility there. I also see there's another comment on wildcards. With namespaces and names, yes, Kyverno supports wildcards. That's a pretty handy feature, because then you can start...
B
...having some naming conventions. Like, you could say every namespace that starts with "dev-", so I am going to enforce... in fact, you can even enforce that, and say that if a cluster admin is not the one creating a namespace, it must have a certain prefix or suffix, or whatever you wish.
B
All right, so let's dive into some actual policy examples. I have a few in the slides, but let's just go to the command line and run a few policies. I think I had a shell up and running, but maybe I closed that. So let me first check and see if I have Kyverno installed... I think it may already be running.
B
And if it is, we'll just remove it and reinstall it, because I just want to show you the full experience. I'm going to go back to the docs, into the introduction, and in the quick start there's a one-liner that pulls the YAMLs. If you're using this for a test cluster, like we are, this is perfectly fine. If you're using it in a production cluster...
B
...in terms of installation, it's pretty straightforward to do, and there are a lot of different things you might want to tune, which you can configure here. But going back to what we want to do for now, just a simple way of getting Kyverno up and running: I'm actually going to do a delete first, and then we'll reinstall it and bring it up. This will pull down the YAMLs and delete everything, including all of our policies, so we start clean.
B
And let's make sure the namespace is gone. Okay, sometimes it takes a while to terminate, but now I'll go back and just create... once I create, or apply, these YAMLs, it will go ahead and recreate everything. As you can see, there's a bunch of CRDs being pulled down. Some of these are internal; some of these, as you'll see, are more user-facing things.
B
It may take a few seconds to initialize... okay, there it goes. So now it's running, and you see that. If we look at the logs, we should see that it's registered the webhooks, the certificates, all of that. At this point we're ready: there are no policies, there's nothing in this cluster, but what I'm going to do is go to that example we talked about, pod security, because that's a pretty hot topic these days.
B
As you know, with all of the changes coming in... so I'm gonna go into the Kyverno documentation, into the policies section, and we'll go to pod security here. For policies, we have pod security policies, we have best-practice policies, which are just general nice-to-haves, and then we have other policies which are in here.
B
Once you go to pod security, what it says is that these policies are based on the Kubernetes Pod Security Standards. If you haven't seen this, I suggest you go look at the Pod Security Standards, because it's a great way to organize and think about different security levels when it comes to applying them to workloads, etc.
B
There are three different levels defined in the Pod Security Standards; this is in the official Kubernetes docs. There's privileged, baseline, and restricted. Privileged means everything's allowed; baseline is just the defaults; restricted is "I want to run a secure, production-type cluster." So what I'm going to do is actually go for restricted, and I'm going to copy this one-liner, which it's telling me will install all the pod security policies.
B
If we go back to our command line, we'll just run kustomize and apply that: whatever kustomize is pulling, I'm trusting it and piping it to kubectl. This takes a few seconds, because it pulls down a bunch of YAMLs, and once that's done, I see I now have a bunch of different policies.
B
If I go into my command line again and do "get cpol" (cpol is short for cluster policies), and I just do minus A, I will see I have a bunch of security policies, and the interesting thing is they're set to enforce. With Kyverno policies, you can set them to either audit or enforce. If you set them to audit, they will just produce policy reports; if you set them to enforce, they will actually block things and reject workloads from running.
B
So at this point, if I try to run a workload... one example which is fun to try is this website called Bad Pods, by a security group called, I think, Bishop Fox. They have published a set of pods which allow bad things to happen, things which you should not have in your production cluster. Like this one, as shown over here, which is running a pod with all doors wide open, which is pretty bad.
B
You shouldn't be doing this in any cluster, because it allows a lot of bad things to happen. To get to the YAMLs, I'm just going to grab this YAML as it shows, and, by the way, there are actual instructions for how you could use this pod to get access to the host. Once you have access to the OS, of course, and if the pod is running as root, you not only have access to the host, you're root on the host.
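The "everything-allowed" pod being described looks roughly like this; this is a sketch reconstructed from the description, not the exact manifest from the Bad Pods repository:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: everything-allowed         # illustrative name
spec:
  hostNetwork: true                # shares the node's network
  hostPID: true                    # shares the node's process table
  hostIPC: true                    # shares the node's IPC namespace
  containers:
    - name: shell
      image: ubuntu
      securityContext:
        privileged: true           # effectively root on the node
      volumeMounts:
        - mountPath: /host
          name: host-root
  volumes:
    - name: host-root
      hostPath:
        path: /                    # mounts the node's entire filesystem
```

Every one of these settings violates the restricted Pod Security Standards profile, which is why the installed policies reject it on several grounds at once.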
B
So again, not a good thing. Let's see if Kyverno and the policies we installed actually help us with this. I'm going to run this very bad pod on my cluster, and immediately Kyverno came back and said: no, can't do this, and it's telling me exactly why it was blocked: hostIPC, can't run it with that namespace, can't run as root, can't run as privileged.
B
It's just blocking everything that I tried to do. So that's extremely simple, and of course there are a lot of details to understand before you run this in your cluster, but it gives an example of how easy it is, with Kyverno, to have a fairly flexible replacement for pod security policies.
B
If you look at some of these policy details... we can pull up a few YAMLs and go through those, and I have some as examples. So let's... actually, let me see if I have an example. Yeah, this one is adding a label, this one is blocking a request. I want to search for an example, so let's look for "root", yeah.
B
This "disallow root" is a pretty simple example, but what it's doing is checking and making sure that you can't run as a user with UID or group ID 0, and you have to set other specifications for your pods themselves.
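The check being described corresponds to a validate rule along these lines; this is a simplified sketch, and Kyverno's published sample is more thorough about init containers and pod-level defaults:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root    # illustrative name
spec:
  validationFailureAction: enforce
  rules:
    - name: check-containers
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Running as root is not allowed."
        pattern:
          spec:
            containers:
              # the kubelet refuses to start a root process when
              # runAsNonRoot is true, so UID 0 is ruled out
              - securityContext:
                  runAsNonRoot: true
```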
B
Now, one of the other interesting things is that these policies are written at the pod level, but let's try something with Kyverno: even though I applied all these policies at the pod level, I want to run a deployment and see what happens, because I don't want to rewrite all these policies for each pod controller, and there could be other pod controllers which are custom resources.
B
Okay, so this one, I believe... I'm pretty sure it's a deployment, so I'll just apply nginx.yaml and see... okay, good, that got blocked too. The reason that happens is that Kyverno automatically generates rules for pod controllers from policies that are defined at the pod level, and you can of course control all of this through annotations; you can restrict the behavior.
B
In fact, if we go back to the Kyverno docs, there's a whole section on this, on the auto-gen rules, when it comes to writing policies. You can go in here and see how you can restrict and manage this and what problems you can solve. The idea, basically, is you write a policy on a pod, Kyverno explodes it out to the standard controllers, and you can add more controllers if you want.
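The annotation-based control being mentioned looks roughly like this; the policy body and the chosen controller list are assumptions to illustrate the shape:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label         # illustrative name
  annotations:
    # limit the auto-generated rules to these pod controllers
    pod-policies.kyverno.io/autogen-controllers: "Deployment,StatefulSet"
spec:
  validationFailureAction: audit
  rules:
    - name: check-team-label
      match:
        resources:
          kinds:
            - Pod                  # written once, expanded per controller
      validate:
        message: "A 'team' label is required."
        pattern:
          metadata:
            labels:
              team: "?*"
```

With this annotation, pods created directly and pods created by Deployments or StatefulSets are covered, while other controllers are left alone.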
B
This is just a simple example of how Kyverno can leverage Kubernetes patterns and community practices and make things simpler in terms of policy management. All right, one other quick example I want to give, and then we'll do some Q&A: I want to show you how you can generate things. So I'm going to take a policy.
B
This is a pretty simple policy, which we'll just grab from here. Whenever a namespace is created, and it's excluding some namespaces, it's also going to clone some resources, which I could either have defined upstream or define inline in my policy itself. Actually, let's grab this one, right? So we're going to say: I'm going to create a network policy for each namespace, a default network policy that just denies all by default.
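The generate rule being described can be sketched roughly like this. It's a sketch under assumptions: the policy name, excluded namespaces, and the inline NetworkPolicy data are illustrative, not the exact file used in the demo:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-networkpolicy   # hypothetical name
spec:
  rules:
    - name: default-deny-ingress
      match:
        resources:
          kinds:
            - Namespace
      exclude:
        resources:
          namespaces:
            - kube-system
            - kyverno
      generate:
        kind: NetworkPolicy
        name: default-deny-ingress
        # Generate the resource into the namespace that triggered the rule
        namespace: "{{request.object.metadata.name}}"
        # Keep the generated resource in sync (recreate it if deleted)
        synchronize: true
        data:
          spec:
            podSelector: {}
            policyTypes:
              - Ingress
```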
B
Okay, so if we create that policy, and now if I create a namespace, you know what I expect to happen is that the network policy also gets created, right? So I'll just create a namespace called test, and if we do...
B
If I say delete, then depending on whether the policy had said it should synchronize changes, or it should, you know, allow users to update things, you can control whether Kyverno will recreate this or it will just block this, right. So actually, I need to say: delete networkpolicy default-deny-ingress.
B
So let's see what happens there. Okay, so it deleted it. And if I do get networkpolicy again, what I would expect to see is that it got recreated, and it's a newer one, right? So in Kyverno we actually don't block the delete, because of the way admission controllers and RBAC and other things work, but it will automatically just recreate it. And this is a good way, if you want to get back to defaults: you can just let Kyverno recreate it as well. All right.
B
So I know we're just seven minutes to the hour, so let me stop there. We went through some interesting use cases; there's a lot more in the documentation. So I would love for everyone to just try out Kyverno: install it, play around with some of the policies. We're fairly active on the Kyverno Slack channel too, so just join us there, take a look at the GitHub, just ask questions, right? And we're always creating new policies, adding things, so it would be awesome to hear from folks.
A
Well, it's really hard for me to bite my tongue, Jim. I've got, well, just, I have so many questions for you. I'm trying to be quiet because I know, and I've gotten a number of comments from others on the call who are just digging what you're laying down, so I know they've got some.
B
Questions, I'm sure. So feel free to put things in chat, and things will come up later too, of course, so Slack's a great way to reach out. But Lee, if you had anything right now, we could either answer it, or... okay.
A
Good, yeah. Well, let me express this in a way in which you can educate us all, including me. Which is to say, at its core, is Kyverno like any other policy engine? It was nice that you characterized Kyverno in contrast to other policy engines, and sort of what makes Kyverno unique and quite useful in a Kubernetes environment particularly. So while Kyverno does, at its core...
A
They can also be, like, and I don't know that you, you probably wouldn't characterize it as a thumbs up or thumbs down. You'd say, like, hey, there's also this mutate, or the ability to, like, hey, this is interesting. It's not a thumbs up or thumbs down necessarily; in some respects it's a match, right? And so then, from there, I know you said it in your slide, but can you articulate it again: the three or four different types of...
B
Yeah
yeah,
so
you
can
mutate
things
like
you
mentioned,
so
you
can
add.
Labels
add
defaults
to
a
resource.
You
can
even
generate
things
right,
so
if
you
want
to
trigger
based
on
an
object
creation
or
a
label
creation
or
a
label
change,
you
want
to
trigger,
like
some
dynamic,
behaviors
and
those
come
down
to
like
generating
resources
or
deleting
generated
resources.
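The mutate capability just mentioned, adding labels or defaults, can be sketched as a Kyverno policy along these lines. The policy name and label are illustrative assumptions:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-team-label   # hypothetical name
spec:
  rules:
    - name: add-default-team-label
      match:
        resources:
          kinds:
            - Pod
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              # The +( ) anchor means "add this label only if it is not
              # already present", so user-supplied values are preserved.
              +(team): unassigned
```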
A
Makes sense. There's no concept of a scheduler within Kyverno, is there? Is it just there...
B
There is a background scan, so it will periodically scan, you know, all of the configurations. And Kyverno is designed so it does not impact existing workloads, right? So it won't take down existing workloads, but it will periodically scan and report things as findings. And that creates, by the way...
B
One thing I should have shown is a policy report, right? So you can look for examples of that in the docs, but Kyverno will produce a policy report in each namespace with any, you know, violations, things like that.
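A policy report is itself a Kubernetes resource. An illustrative sketch of its shape follows; the API version and field names here follow the wg-policy PolicyReport CRD and are assumptions, not output from this demo:

```yaml
apiVersion: wgpolicyk8s.io/v1alpha2   # version is an assumption
kind: PolicyReport
metadata:
  name: polr-ns-test        # hypothetical report in the "test" namespace
  namespace: test
results:
  - policy: disallow-root-user       # hypothetical policy/rule names
    rule: check-run-as-non-root
    result: fail
    resources:
      - kind: Pod
        name: nginx
        namespace: test
```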
A
Those reports, then, are stored in one of the custom controllers, or in a custom resource?
B
It is a resource itself, right? So it's a Kubernetes resource, so you can fetch it through kubectl. You can then use any other tool to propagate those reports upstream; like, Nirmata collects those reports, and we will display them and show more information, things like that.
B
So certainly ensuring that the right set of policies is deployed to each... So one nice thing about Kyverno policies is you can use all of your GitOps best practices to manage Kubernetes policies, right? They're just resources: you can apply them, Kustomize them, like I demonstrated. All of that just works really nicely with existing tooling, without having to, you know, kind of change the way things are done just to fit in policy management. But yeah, for multi-clusters...
B
You would either use, like, a multi-cluster manager like Nirmata, or GitOps. Or, you know, by the way, the Flux project also now bundles Kyverno policies for multi-tenancy, so that's another open source project we're collaborating with. Because Kyverno is just so lightweight, it can be embedded into other projects for some standard behaviors and policies, eliminating the need for more controllers, more custom logic.
B
Well, yeah, let's follow up. And I see, you know, Dina also commented on, you know, more features, like going deeper into validate and mutate. So yeah, absolutely. I think, you know, just combining, and I have some other examples which we're using for multi-tenancy, so triggering based on object creation and then combining validate, mutate, and generate is where the power really lies, and there's some pretty interesting things that can be done.
B
So maybe, you know, we can follow up with some offline threads and conversations. Just post your ideas on the Kyverno Slack, or just reach out to me, you know, however you wish, and I would love to explore that.
B
So the policy, of course, as soon as it's updated, will take effect right away for any subsequent admission control, right? But for existing resources, again, the way Kyverno is designed is it doesn't go back and, you know, it won't impact existing resources. So there it will produce violations if needed, and, you know, if you change anything in that existing resource, it will then start reporting or show the new update. But as soon as you save the policy, it takes effect.
A
I have to do this: one other one I want to insert in here real quick. Of the criteria that a policy can reason over, like the various constraints, the various things that would determine a match, or that would determine the outcome of the evaluation: is it possible for that criteria to come from systems external to Kubernetes, like from a database, for example?
B
Yeah, good question, right. So we don't allow calling external systems in Kyverno, and we've specifically designed this to be a closed and, you know, kind of secure system, right? But you can use ConfigMaps to achieve that. So the idea would be, if you want an external system, so let's say some other system is producing... a good example is, you know, you want to make sure that the image name and tag matches the hash in, let's say, your upstream image registry, right?
B
So for something like that, you could have a, you know, periodic job in Kubernetes which fetches the hash from your image registry, stores it in a ConfigMap, and then you could use that in Kyverno, right? So that would be the pattern: almost like using a sidecar pattern, but think of it as a sidecar controller, right, where you're writing a separate controller which does that one thing, produces the data inside the cluster securely, and Kyverno can consume that.
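The ConfigMap pattern being described maps onto Kyverno's policy `context`, which makes ConfigMap data available to a rule. A sketch under assumptions: the ConfigMap name, its keys, and the condition are hypothetical, and some separate job would keep the ConfigMap current:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-image-digest   # hypothetical name
spec:
  validationFailureAction: enforce
  rules:
    - name: verify-digest
      match:
        resources:
          kinds:
            - Pod
      context:
        # ConfigMap maintained by a periodic job (assumption); its data
        # becomes addressable in the rule as {{digests.data.<key>}}
        - name: digests
          configMap:
            name: image-digests
            namespace: default
      validate:
        message: "Image does not match the recorded digest."
        deny:
          conditions:
            - key: "{{ request.object.spec.containers[0].image }}"
              operator: NotEquals
              value: "{{ digests.data.nginx }}"
```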
A
Well, it's actually kind of funny, because, well, like, technically, while sidecars get mentioned a lot in the context of service meshes, right, it's a generic concept too; a service mesh doesn't own it. But, you know, Jim, this is fantastic. It looks like you'll probably have a couple of folks jumping into Slack with some questions. For my part, I'll hit you up.
B
All right, sounds good. Well, thank you, everyone. I know we're a little bit past time, but I really appreciate, you know, having me here, and would love to hear from everyone in the community.