From YouTube: TGI Kubernetes 104: Kyverno
Description
Come hang out with Duffie Cooley as he does a bit of hands-on hacking of Kubernetes and related topics. Some of this will be Duffie talking about the things he knows. Some of this will be Duffie exploring something new with the audience. Come join the fun, ask questions, comment, and participate in the live chat!
This week we will be looking at Kyverno, a Kubernetes-native policy management tool.
Good afternoon, everybody, and welcome to TGIK 104. Good to see everybody. Let's see who's with us already: this morning we've got Rory, we've got Suresh from Hamburg, and we've got Martin and Maddie, the usual suspects. It's always great to see you all; I'm super glad you're here.
So today we're going to look into Kyverno, which I think is actually kind of an interesting segue away from, what do you call it, the grokking series; I'm taking a little break from the grokking series this week. Kyverno is a dynamic admission controller, and so it implements things like the validating webhook and the mutating webhook.
Those are the two dynamic admission controller interfaces that the kube-apiserver exposes, and so we're going to be looking into how Kyverno goes about that.
We're going to talk about some of the ways those things are configured, we're going to explore them, and we're also going to explore Kyverno itself: a new policy tool put out by Nirmata that looks to try and solve some of the same problems, or actually more of the problems, than even pod security policies do. In this particular case, I think what they're trying to do, and we'll get into it when we start playing with it, is make it much more approachable.
The goal is a much lower learning curve for implementing reasonable validating and mutating policies within your Kubernetes cluster. So you can say things like: I want a whitelist of repositories that applies to pods, so that images can only come from those particular repositories. And today you can do that.
But each of the existing tools has its own learning curve. OPA is one of those ways; there are tons of ways to do it. This one looks really interesting because the API seems pretty concise. Anyway, good to see you all.
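As a concrete sketch of the registry-whitelist idea mentioned above, a Kyverno validate rule might look roughly like this (the policy name, rule name, and registry are illustrative; the field layout follows the kyverno.io/v1 ClusterPolicy examples):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registries   # illustrative name
spec:
  validationFailureAction: enforce  # reject non-matching pods at admission
  rules:
  - name: allowed-registries
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Images may only come from the approved registry."
      pattern:
        spec:
          containers:
          # the wildcard matches any image path under the approved registry
          - image: "registry.example.com/*"
```

A pod whose containers pull from anywhere else would be rejected at admission with the message above.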
Who else has joined us here? Mike Morell from New Jersey, good to see you, Mike. Let's go ahead and get started. We've got Jim from San Jose, and I believe Jim is one of the people on the Kyverno project.
All right, great. As always, our notes are up this week at tgik.io/notes, so if you would like to append a URL or add some notes or anything, feel free to do it there. It looks like some stuff has been added already; I saw one about long-lived connections, actually, which looks pretty interesting.
So, Kubernetes 1.18 alpha 3 is out. I can't believe we're already at 1.18; it seems like one of those things that just continues to march forward, which is kind of amazing, really. And it looks like the release notes are now sorted by kind instead of SIG, hopefully making them easier to read. Let's take a look at what that looks like: sort release notes by kind instead of SIG. Wow.
What does that look like now? Let's take a look. All right, this is an example of what it looks like: changes by kind. You have a list of API changes, and these are no longer broken up by SIG. So if there was an API change related to SIG Network, it would previously have been down underneath the SIG Network section rather than being referenced in one list. This way you get the bugs that were addressed, the cleanup, documentation, failing tests, and flaky tests, all grouped together.
The idea is that this is more reader-friendly, so you're not actually going through the individual SIG sections. And remember also that you always have relnotes.k8s.io as a way of searching for these things. Is that too small? It's probably too small; let me make it bigger. Here we go.
A
You
have
real
notes
as
a
as
a
resource
as
well.
It's
just
cooking
further
than
that.
Real
quick.
Just
to
remind
you
all
that
it's
here,
so
real
notes
is
a
tool
that
allows
you
to
actually
pick
a
particular
release
in
a
particular
group
or
area,
but
you
want
to
look
at
and
it'll
actually
give
you
the
look,
the
release
notes
associated
with
that
particular
area
or
release.
So
this
is
a
pretty
tunable
page,
depending
what
you're
looking
at,
like
you
can
see.
Just
cook
code
generation
release
notes.
Friend of the show Stephen Augustus mentions that the Cluster API provider for Azure is now at v1alpha2. Cluster API kind of sets the pace for some of these things: it represents a generic API that allows you to think about and coordinate the lifecycle of Kubernetes clusters. As its API generation proceeds, the providers, which are the implementations that act as the glue between the generic Cluster API and whatever your infrastructure provider is, have to keep up with that change. And so, since Cluster API went to v1alpha2, a lot of the providers are coming around to that as well.
There are probably a bunch of you that don't know this, but I was at CoreOS for a year before it was acquired, and I have to say that as a company it was one of those companies that really resonated with me as a person. It's always kind of a shame to see things change over time, but it is what it is; we all go through it. This note is particularly sad to me.
A
This
note
is
that
the
end-of-life
for
core
OS
container
Linux,
so
container
Linux,
is
no
more
at
this
point.
However,
and
then
they
do
call
this
out.
Oh
sorry,
they
do
call
this
out
in
the
notes,
which
is
really
exciting
and
I
thought
actually
really
a
super
classy
thing
to
do
here.
We
go
so
they're
calling
out
that
obviously
they're
making
a
new
OS,
which
called
Fedora
core
OS,
which
is
not
quite
the
same
thing
right
like
it,
doesn't
have
dual
boot
partitions.
A lot of the design decisions that went into Container Linux didn't continue forward in the form of Fedora CoreOS. However, it's still another good entry into the operating-system market. It's still focused on a lot of the same design targets that Container Linux was originally aimed at; it's just more specifically built around Fedora. And there is something they do call out that I wanted to make clear.
A
That
way,
and
in
fact
that
was
actually
what
I
was
referring
to
earlier
was
a
tweet
by
red
beard.
Who
did
that
and
found
a
bug
of
course,
because
my
beard
is
really
good
at
that
sort
of
thing.
Right
beard
was
one
of
the
amazing
people.
I
worked
with
that
core
OS,
so
lots
of
good
lots
of
good
changes
there.
But
you
know
kind
of
sad
to
see
container
the
next
go
away,
but
it's
now
end-of-life
it's
been
longer
than
they
stated
that
it
would
be
when
core
OS
was
acquired
by
Red,
Hat
I.
I think the original commitment was significantly less time, but there we go. Let's see what y'all think about that. Let me look over here for a second, because that's where my chat with you all is. So we've got Tim Downey, happy Friday, and we've got Jim; like I said, he's part of the Kyverno project, which is awesome. Hello to everybody else checking in, and Shivkumar, hello.
A
Also
one
of
the
maintainers.
How
you
doing
the
Sheep
come
on
in
Alex,
hello
from
Northern
California
will
eat,
hello
can
I
do
something
like
can
I
do
something
like
style
guideline
eg
set
a
namespace
convention
via
policy
that
looks
like
question
before
and
it
looks
like
it
should
be.
Yeah
I
think
I
think
well,
maybe
we'll
play
with
an
example
of
that.
Have
your
Friday,
also
from
team
who
I
worked
with
already
kind
of
via
email
hit
a
few
questions
about
how
some
of
this
stuff
works
name
conventions.
A
You
can
also
add
prefixes
and
suffixes.
Ok,
Alejandro
from
Lima
Peru,
nice.
Oh,
that's,
awesome!
I'm!
Really
glad
you
got
the
CK
CK
is
awesome,
I'm
glad
that
went
well
for
you.
So
far,
I'll
sing
a
little
everybody
in
the
man
yeah.
It
was
really
it
is
revolutionary,
but
you
know
it's
being
perpetuated
in
a
variety
of
different
ways.
A
You
know
it's
like
when
you
have
a
really
good
idea
and
people
pick
it
up
and
carry
it
farther
than
you
plan
to,
and
that's
really
exciting
to
me
like
it's
exciting,
to
see
that
flatcar,
Linux
or
bike
in
bulk
is.
Can
it
continue
down
that
path,
they're?
Taking
that
a
lot
of
the
they're
taking
a
fork
of
it
forward,
and
that's
really
great
I
mean
that
I
also
just
delivered
on
Nebraska,
which
is
great
a
tool
that
replaces
the
container
update
or
the
container
linux
up,
yeah
pooling.
A
Leverages
the
omaha
protocol,
thus
the
inside
joke
from
nebraska,
which
I
thought
was
pretty
funny
so
people
in
moonlight
most
of
course
have
influenced
open
ship.
That's
true,
although
openshift
isn't
an
OS
right.
So
there
is
a
lot
of
core
OS
influence
inside
of
our
hat
and
it
really
did
change
the
product
that
is
open
shipped
and
that
is
actually
I
think
it
changed
it
for
the
better
but
well
I,
guess
you
know
time
will
tell
how
it
goes.
The author is a great person; I was actually just chatting with him earlier today about playing with replacing the CNI on his clusters, and we were talking through some of the models for that, which are pretty great. As he calls out here, this is not a beginner's guide: it assumes that you're already working with Kubernetes and you want to explore how to get past the CKA.
A
Based
on
that
experience,
so
he's
been
running
kubernetes
in
production
in
General
Motors,
since
2018
he's
an
active
community
member,
and
then
he
decided
to
kind
of
like
dig
into
how
this
works
so
feel
free
to
check
this
article
out
I
think
it's
a
really
good
one,
especially
if
you're
already
kind
of
using
kubernetes.
So
it
really,
it
provides
you
a
lot
of
really
great
reasonable
tools
for
that
sort
of
thing.
Sidecar container lifecycle changes: I'm pretty sure we've mentioned this in the past, but I want to make sure it's clear, and this article is definitely worth a read. Sidecar container lifecycle changes in Kubernetes 1.18 mean that there will now be a sidecar concept. Traditionally, when we talk about sidecars, we generally mean a container that's running inside of a pod but isn't the primary container; it's just one of the others.
Traditionally that means there's some racing that could happen, because you define multiple containers inside of the same pod, they might come up at different times, and that lifecycle wasn't necessarily handled particularly well. So say you had an application container and a logging container.
You wanted to make sure that the logging container exits after the application container exits, so you can flush the last bit of logs before tearing the entire pod down. There wasn't really a paradigm within the pod that allowed for that. But as of 1.18 you can mark these containers as sidecars, and they start up before the normal containers.
They also shut down only after all of the other containers have terminated. In this way you get a better lifecycle model for the containers that are not the primary container inside of your pod. I hope that explains the problem and what's actually happening here; the article gets into it in some detail and is definitely worth checking out. To be really clear, I'm not talking about init containers here; I'm talking about the 1.18 sidecar changes. I think it's going to be useful for people.
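At the time, the enhancement proposal sketched marking a container as a sidecar with a lifecycle type on the container spec, roughly like this (a sketch of the proposed shape as discussed, not a settled API; image names are illustrative, and the feature ultimately shipped in a different form in later releases):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0        # illustrative image
  - name: log-shipper
    image: registry.example.com/shipper:1.0    # illustrative image
    lifecycle:
      type: Sidecar   # proposed marker: start before, and stop after, the app container
```

The point of the marker is exactly the ordering described above: the shipper comes up before the app and stays up until the app has exited, so the last logs can be flushed.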
Next up: scaling long-lived connections in Kubernetes. This is a topic I think people frequently overlook, and you all know me, I'm into networking stuff, so I think this is probably a good article; I haven't read this one yet. Kubernetes does not load balance long-lived connections, and some pods might receive more requests than others. If you're using HTTP/2, gRPC, or the like, you might want to consider client-side load balancing. Kubernetes offers two abstractions for deploying apps.
Yeah, kube-proxy is going to sort of round-robin. I think I talked about how this particular traffic pattern works in the grokking kube-proxy episode. It is a somewhat naive load balancer: it has no intelligence about how many connections there are, because each kube-proxy instance on each of your nodes is making its own decision about where to send that traffic. And so you have things like externalTrafficPolicy and so on.
He's got some good points here; it's definitely worth checking out. What he's highlighting is that if you're establishing a bunch of long-lived connections, like WebSocket connections, toward applications hosted inside of your Kubernetes cluster, and you're using a Service of type LoadBalancer to access those applications, then distributing that traffic across nodes gets tricky.
The only real information your load balancer has to work with in distributing that traffic is how many healthy endpoints are on the backend. It doesn't have the intelligence to understand how busy those endpoints are. So a client-side load balancer might be better; worth talking about.
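For gRPC, as one example, client-side load balancing can be enabled through the client's service config; a minimal sketch (this assumes the client can resolve all pod IPs, for instance via a headless Service):

```json
{
  "loadBalancingConfig": [
    { "round_robin": {} }
  ]
}
```

A gRPC client handed this service config (for instance via `grpc.WithDefaultServiceConfig` in Go) spreads RPCs across all resolved backend addresses, instead of pinning one long-lived HTTP/2 connection to a single pod, which is exactly the failure mode the article describes.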
How's everybody doing? Mr. Joshua checking in; he's also a CoreOS alum, probably putting his hand up in solidarity. I also pulled up KubeWeekly, because it had a couple of other interesting articles I wanted to talk about real briefly. KubeWeekly is actually a pretty good weekly digest.
A
It's
actually
hosted
by
the
by
the
CN
CF
ambassadors
for
kubernetes,
so
it's
folks,
like
Chris
short
Bob,
Cohen,
craig
box
and
Kim
McLeod
and
Michael
Hassan
blah
sword
we're
actually
hosting
this
information
and
they
do
a
pretty
decent
job
of
gathering
up
some
relevant
articles
from
the
space.
You
know,
if
you
don't
follow,
Chris
short
on
Twitter
I
definitely
recommend
it.
He's
he's
always
got
he's,
always
linking
different
articles
and
kind
of
being
a
great
resource
to
the
community
in
general,
which
I
think
is
incredible.
A
Some
of
the
stuff
I
found,
which
I
thought
were
pretty
interesting
from
this
particular
week's
episode,
are
cube,
stone,
which
is
a
benchmarking
operator
that
can
evaluate
the
performance
of
Cooper
news
and
features
installations.
Sorry
and
I
thought
that
was
actually
really
interesting.
So
they
have
a
number
of
different
benchmarks
that
you
can
apply
from
the
operator.
Sis
bench
fiyo
eioping
system,
iperf,
cube
perf,
and
actually
they
can
do
sis
bench
on
memory
or
CPU,
which
is
pretty
cool.
They
have
drill.
Drill is a DNS load tester, and it looks like they're also looking to add kubeperf and etcd testing and benchmarking in there as well. So they've already got a few pretty good, useful tools inside of Kubestone; definitely check it out if performance evaluation is something you're interested in exploring. Next: HPE, Hewlett Packard Enterprise, acquires the zero-trust networking firm Scytale. Scytale was the company carrying the idea of SPIFFE and SPIRE forward.
A
So
they
were
actually
working
on
zero
trust.
A
lot
of
the
developers
that
were
working
upstream
on
spiffy
had
and
and
those
tools
which
are
basically
service
production
identity
for
everyone
else.
That's
what
spiffy
stands
for,
and
this
is
basically
the
idea
that
you
need
identity
at
an
application
layer
to
actually
establish
trust
very
early
in
the
life
cycle
of
the
application,
as
it
might
relate
to
any
of
the
resources
that
application
needs
access
to
right.
A
And
if
you
have
identity,
that
is
instantiated
at
the
time
that
the
process
is
being
created,
and
you
have
perhaps
some
way
of
asserting
that
the
identity
belongs
with
that
particular
application.
Then
you
can
build
real
trust
right.
You
can
build
zero
trust.
Networking
in
that
the
identity
to
that
application
has
is
now
the
way
that
it
can
be
constrained
to
access
to
other
applications
or
other
resources
within
your
infrastructure.
Traditionally, what ends up happening is that we create these huge silos that effectively share a common set of access permissions: behind that firewall is where we put all the financial stuff; behind that firewall is where we put all the stuff hosted in our data center in Colorado. And the way we provide access to those resources is by granting access through the firewall to some particular entity inside an IP block.
That's been the traditional model for network security for a long time. Where things like Scytale and the technologies they've been working on come in is that we need to turn that on its head and come up with a different way of establishing trust between resources, that ability to grant access. The benefit is that we can become a lot more granular if we have the idea of service identity.
This is another interesting article: DNS lookups in Kubernetes. It gets into some of the details that maybe not everybody understands: they talk about ndots, they talk about the way that resolution works, and I think it's simply worth checking out.
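For context on what the article digs into: a pod's /etc/resolv.conf in a typical cluster looks something like this (the nameserver address and namespace are illustrative), and the ndots option is the knob that makes short names behave surprisingly:

```
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```

With ndots:5, a name with fewer than five dots, like `example.com`, is first tried against each search domain (`example.com.default.svc.cluster.local`, and so on) before being looked up as-is, which is the query-amplification behavior articles like this examine.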
If this is a relatively new construct for you, if you haven't really explored DNS to any great degree, I'm happy for you; and otherwise I would say go check it out, because this is a good article for digging into the detail. And then the last thing I was going to show was this: continuous profiling of Go applications running in Kubernetes.
Say you've got a whole bunch of different functions and you're not sure which one is actually the cause of your problem. You can see how this would be valuable for things like the API server or the controller manager, where we have a bunch of different threads and routines working on different pieces of code, and you want to understand: OK, where is my controller manager really spending all of its time?
We might be able to go back to the registry and take a look at what pprof had to say for that particular period of time, and that is great; contextually, that's huge. Because, for good or ill, most people trying to troubleshoot this stuff are trying to do it live; they're firefighting it.
A
So
they
need
the
problem
to
still
be,
in
effect,
for
them
to
actually
have
the
tools
applied
in
such
a
way
that
they
can
understand
what
happened
or
what
is
happening
right.
It's
rule
like
this.
You
can
say
you
know
what
I'm
just
going
to
do:
a
sampling
of
p
prof
data
over
time
and
load
it
into
a
registry
and
if
I
see
you
know,
if
somebody
notifies
me
that
it
won't
two
o'clock
on
Friday.
Then, if somebody notifies me that at two o'clock on Friday there was a problem with this application, I can go and look at the pprof data for that application at that time and build a little more context about what happened. That's a very cool thing; if you've ever been on the unlucky end of trying to find that needle in a haystack, it would be huge. So, just pointing that out; definitely worth checking out.
I think I might play with this one in another TGIK as well, because I like the idea of exposing people to it; we'll talk about why in just a minute as we get into the Kyverno stuff. So that was this week's notes; I hope those were helpful. Kris Nova did present at FOSDEM, yes, and it was neat.
I think Kris has presented at a number of FOSDEMs, like a series; they've been talking at that event for some time now, which is awesome. In the past they've done a talk along the lines of "you can't have a cluster without a cluster bleep", and I think they continued that narrative in this FOSDEM talk, basically describing how there is tooling from Sysdig and Falco, which is open source, that might allow you to find those problems.
Let's dig into Kyverno then. Let me just tidy up my tabs here a little bit; I know, I'm closing tabs, people, here we go. kyverno.io is the domain for this.
It is put out by Nirmata, like I said, and the way they describe it is: manage policies as Kubernetes resources; validate, mutate, and generate configurations. And I have to say, when I started looking at this, and I haven't really spent a lot of time with it yet.
We're going to spend a lot more time with it today, and I didn't want to do too much exploration before we all had a chance to look at it together. The thing I thought was impressive is that there are three different ideas here. Validate is the idea that when a resource is created or updated, you can specify what you expect that resource's definition to contain, and if that's not true, you can take some action.
You can reject it, or you can warn about it; these are things you can do. Mutate is for when an object gets created and you expect it to have particular fields. Say you want to make sure that when a namespace is created, it has a quota associated with it, defined at the namespace layer. Now all somebody has to do is `kubectl create ns`, and the quota gets applied automatically. That's a mutating policy.
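A minimal sketch of a mutating rule in that spirit (the names are illustrative; the overlay form with the `+()` add-if-absent anchor follows the Kyverno examples of the time):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-labels   # illustrative name
spec:
  rules:
  - name: add-team-label
    match:
      resources:
        kinds:
        - Namespace
    mutate:
      overlay:
        metadata:
          labels:
            # +( ) means: add this label only if it is not already set
            +(team): "unassigned"
```

Anyone creating a namespace gets the default label patched in before the object is persisted, without changing their workflow at all.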
It allows you to mutate the object with some rational set of defaults, so that before the resource is accepted through admission we can modify it. And then generate; this one is really neat. Generate is the idea that when you see a resource created, you can have another resource generated from that creation. So, as we said, when somebody does `kubectl create` on a namespace, maybe I want to deploy a ConfigMap.
A
That
has
you
know
some
interesting
information
in
it.
That's
just
globally
available
right,
like
perhaps
I
want
to
have
a
config
map
that
is
deployed
to
each
namespace
that
has,
or
maybe
like
I
want
to
share
a
secret
with
each
namespace
or
something
like
that
right.
There's
there
you
can
see
why
generate
would
be
kind
of
interesting,
but
we're
gonna
dig
into
the
docks
a
little
bit
and
play
with
those
things.
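A sketch of what such a generate rule could look like (resource names and data are illustrative; the variable syntax for referencing the triggering object follows Kyverno's examples):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-ns-defaults   # illustrative name
spec:
  rules:
  - name: copy-team-configmap
    match:
      resources:
        kinds:
        - Namespace
    generate:
      kind: ConfigMap
      name: team-defaults
      # place the generated ConfigMap inside the namespace that triggered the rule
      namespace: "{{request.object.metadata.name}}"
      data:
        data:
          log-level: "info"
```

Every `kubectl create ns` then automatically lands a `team-defaults` ConfigMap in the new namespace.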
The one thing I did notice about Kyverno so far is that they're really trying to lower the learning curve as it relates to policy management. With all of these objects, validating, mutating, and generating configurations, you sometimes have to know quite a lot to be able to write a useful one, and I think that's challenging.
Let's kick over to the docs here. This is the repository, and the readme basically describes a lot of the same things I just described. One thing they point out here, which I thought was useful, is that mutating policies can be written as overlays, leveraging Kustomize-style semantics, or as a JSON patch, and they have some good examples we're going to play with.
Resources: mutating, validating. Let's look at this one, the validating resource. This policy requires that all pods have CPU and memory resource requests and limits. In this object, a YAML document, we've got kyverno.io/v1 as the API version, and the kind is ClusterPolicy; presumably this means it applies across the entire cluster.
"Enforce" blocks the request and "audit" reports violations; that's a good note. So in this case, if this validation fails, the pod simply won't be admitted, which is a good behavior. We define these things as rules, a similar concept in some ways to how RBAC works. We're going to match resources of kind Pod, and then the validation, nice, is actually an overlay model.
Every pod has this idea of spec, and underneath the containers, all of the containers that could be associated with this particular pod, we look for the resources section: the limits, where we're looking for memory and CPU, and also the requests. The neat thing, and it's somewhat intuitive if you're into this kind of thing, is that what they're doing here is basically making sure that you have something set. That's actually pretty interesting.
What's also interesting: if you had defined a LimitRange within the namespace, you could solve a similar problem, since a LimitRange defines the default limit and request for any given pod within the namespace. But with this policy the pod actually gets rejected, and you get back a message saying "CPU and memory resource requests and limits are required". So let's play with that as an example.
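The sample being described looks roughly like this (reconstructed from the discussion; the `?*` pattern means "any non-empty value" in Kyverno's pattern syntax):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-container-resources
spec:
  validationFailureAction: enforce   # block admission rather than just audit
  rules:
  - name: check-resources
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "CPU and memory resource requests and limits are required"
      pattern:
        spec:
          containers:
          # every container must have something set for each of these fields
          - resources:
              requests:
                memory: "?*"
                cpu: "?*"
              limits:
                memory: "?*"
                cpu: "?*"
```

Unlike a LimitRange, which fills in defaults, this rejects the pod outright and returns the message above.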
I know, Josh; I did. I traditionally have been using i3; for all the past episodes you've seen me on i3, and I might go back to it, but there's no way to really make i3 do wiggly windows and that kind of stuff. I got a new laptop that has a video card, and I thought, all right, I'm going to go play with wiggly windows again, because it's just so much fun.
For those of you who don't know about kind already, go check out kind.sigs.k8s.io. It's a way of bringing up a local cluster, effectively inside of Docker, and for this particular experiment I'm just going to bring up a single node; I'm not going to get too creative here.
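A single-node setup like this needs no configuration at all (`kind create cluster` is enough), but for reference, the equivalent explicit config would look something like this (the config apiVersion varies by kind release, so check the version your `kind` binary expects):

```yaml
# kind-config.yaml: a single control-plane node, which is kind's default anyway
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
```

Passed with `kind create cluster --config kind-config.yaml`; adding more entries under `nodes` is how you would grow this into a multi-node cluster later.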
If we do `kubectl get pods -A -w`, you can see the pods starting to register, and these are the pods that make up the control plane for the cluster: we have our DNS, we have our controller manager, we have our API server.
Everything is working. OK, so DNS resolves, I can do an `apk update`, all those things; I'm working in front of a fully functional Kubernetes cluster, so that's great. Let's go ahead and get Kyverno deployed and take a look at how it works. They've got a whole bunch of really cool sample policies we're going to play with. They also describe some of the alternatives, and the license is an Apache 2 license, which is pretty awesome.
They have good things to say about the other tools that are out there. Open Policy Agent is the first. Obviously, OPA's goal is not just to solve this problem as it relates to Kubernetes but to solve the bigger problem: they want OPA to apply everywhere. Wherever you're trying to define policy, they want to be the entity that is helping.
Let's dig in here. They talk about k-rail, which is one of the ones I reviewed earlier this year; they talk about Polaris, which looks interesting; and then they talk about external configuration management tools like Kustomize.
So I'm actually going to be deploying this to a kind cluster. They do have a really interesting tip about deploying to EKS: it requires enabling a command-line argument (something like fqdn-as-cn) on the Kyverno container in the deployment, due to a current limitation with the certificates returned by EKS for a CSR.
I'd like to see these things tied down a little more, you know, rather than just applying cluster-admin, but it is what it is; that's what's there now. They also define a ClusterRole for policy violations, using get, list, and watch. They create a ConfigMap listing resource types to be skipped by the Kyverno policy engine, which I think means they're filtering the feed, the firehose, coming from the API server.
The resource filter looks like it's kind, namespace, name; that's how they're doing their filtering. Then they have a deployment of Kyverno; they give it a service account; there's an init container that does some setup and a main container that does the rest. All right.
That's the whole deployment; that's actually not a lot of stuff, relatively concise. The only concern, which we talked about before, is where it's leveraging the cluster-admin role. So, `kubectl get pods -A`, and we can see our Kyverno pod running.
We can see that it has an owner reference, which means this validating webhook configuration was defined by the deployment; when the deployment was created, it was the one that created this. And that's a good thing: the owner reference also ensures that if we were to delete that deployment, the webhook configuration goes away as well.
It points to the Kyverno service in the kyverno namespace, with a policy-validate path, on port 443. The failure policy is defined as Ignore. Failure policy here means: if for some reason the Kyverno service is not reachable, or it returns bad data, like if we were expecting a 200 and we got a 503 or whatever, the request is let through anyway.
So this is an interesting attack surface, and it's true of webhook configuration generally, not just Kyverno. If you look at a lot of the validating and mutating webhook tooling that's out there, a lot of it has this same configuration; they make the same set of assumptions.
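A trimmed sketch of the relevant fields on such a webhook configuration (the names are illustrative; the fields themselves are the standard admissionregistration.k8s.io ones being discussed):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-policy-webhook   # illustrative name
webhooks:
- name: validate.example.io
  failurePolicy: Ignore   # fail open: if the webhook is unreachable, admit anyway
  sideEffects: None
  admissionReviewVersions: ["v1"]
  clientConfig:
    service:
      name: policy-svc        # illustrative service
      namespace: policy-ns
      path: /validate
      port: 443
  rules:
  - apiGroups: ["*"]
    apiVersions: ["*"]
    operations: ["CREATE", "UPDATE"]
    resources: ["pods"]
```

Switching failurePolicy to Fail closes the fail-open gap described above, at the cost of blocking admissions whenever the webhook is down.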
We talked about match policy; then we have the name of the webhook; we have the namespace selector, matching all of them, and the object selector, also all of them, which is kind of interesting. The rules are: API group kyverno.io, API version v1, and we're looking for the create or update operations on cluster policies.
If you want to see all of the fields you could possibly define for a validating webhook, they're all documented; and if you want to understand a little bit more about side effects, for example, let's take a look at that. Let's drill in: webhooks, sideEffects.
So what this field does: sideEffects states whether this webhook has side effects. Acceptable values are None and NoneOnDryRun; webhooks created via v1beta1 may also specify Some or Unknown. Webhooks with side effects must implement a reconciliation system, since the request may be rejected by a future step in the admission chain and the side effects therefore need to be undone. Requests with the dry-run attribute will be auto-rejected if they match a webhook with side effects Unknown or Some.
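In a webhook configuration, that looks something like this (the webhook name is made up):

```yaml
webhooks:
- name: example.webhook.io
  sideEffects: None    # safe to call for dry-run requests
  # NoneOnDryRun: has side effects, but skips them when the request is a dry run
  # Some / Unknown (v1beta1 only): dry-run requests matching this webhook are rejected
```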
A
So if we have multiple webhooks that are going to apply to a particular object, the side effects are there to help us understand: are they going to apply cleanly to all of those objects, or are some of those objects going to be adversely affected, and how do we reconcile our way back out of that? Right, if some later admission controller was trying to apply a change to this object and it got rejected, right, that next validating test said nope, this is not allowed in, but you've already modified the object.
A
A
The match policy defines how the rules list is used to match incoming requests. The allowed values are Exact or Equivalent. Exact matches a request only if it exactly matches the specified rule; for example, if deployments can be modified via apps/v1, apps/v1beta2, or others, but the rules only apply to API group apps, version v1. So this is an interesting option that basically lets you be a little more loose about what type of thing you might be able to match for that particular object.
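For example, a rule like this (names illustrative) with matchPolicy Equivalent would still catch a deployment submitted via an older API version, because the API server converts the request to apps/v1 before matching:

```yaml
webhooks:
- name: example.webhook.io
  matchPolicy: Equivalent    # Exact would only match requests sent to apps/v1 itself
  rules:
  - apiGroups: ["apps"]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["deployments"]
```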
A
B
A
Oh, what am I using? Yeah, I know, right? I love it, though. I'm using MATE desktop; I've been running MATE for a while, and so it's just using, you know, the compositing stuff, the compositor and stuff.
A
A
Thinking about any one webhook is a tricky trade-off, true. Enjoying the detailed discussion, cool. And you're busy paying attention; glad you're all here with me. Okay, so let's keep going. So that was the only validating webhook; we're going to play with that here in a second. But let's take a look at mutating webhooks: get mutating webhooks, and here we see three of them. We see a policy one, we see a resource one, and we see a verifying one, and if we do kubectl get mutatingwebhookconfigurations -o yaml...
A
A
That's
defined
to
point
all
objects
for
all
groups
across
all
resources
to
the
slash
mutate
endpoint
that
cavero
has
now
I
haven't
looked
at
the
code,
there's
a
timeout
value
here
for
three
seconds,
which
I
think
is
probably
a
good
thing,
but,
and
they
have
family
policy
ignore,
but
when
you
think
about
it
and
I've
said
this
before
I
think
in
this
show,
when
you
think
about
it,
admission
control
is
effectively
like
bottleneck.
Is
a
service
right
because
like
as
soon
as
you
have
admission,
control
in
place.
B
A
So all the things will be sent, and then the application, in this case the Kyverno service, will have to determine: yeah, allow it, allow it, really quickly, all right? And then however long it takes to actually make that decision, that period of time, especially under load, is going to be measured against that bottleneck that we're talking about, right? So if it took five seconds to make the decision for every event, and it shouldn't, that would be crazy.
A
But if it took five seconds on every event that came to the /mutate endpoint, could you imagine? That would be really bad, right? Now, the interesting thing is that you have the ability, in the rules here, to narrow the scope of that, right? You can look for all pods, and just look for pods, right, and then have something like /mutatepods, which is only making decisions on pods.
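A narrower rule might look like this; the /mutatepods path is hypothetical, just to show the shape of scoping a webhook down to pods:

```yaml
webhooks:
- name: mutate-pods.example.io   # illustrative
  clientConfig:
    service:
      name: kyverno-svc          # assumed
      namespace: kyverno
      path: /mutatepods          # hypothetical pods-only endpoint
  rules:
  - apiGroups: [""]              # core group
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
```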
A
Y'all have done some really good work in, like, you know, developing a tool that allows for these sorts of really great flexibility in defining these things. I wonder if you have run performance tests against those sorts of things inside of there. Now, the other piece worth bringing up is that continuous profiling thing we talked about before, right? That would be, if it were me deploying, but it's really any admission controller. Again, this is not related to Kyverno; this could be any admission controller.
A
Alright, so we got three of them there. I did want to point out this one. I think this was really interesting, because this is the fire hose. This is: send me everything, and I will figure out what to do with it. Alright, let's take a look. So we've got our pods running. Actually, let's take a look: kubectl, yeah, get pods, and Kyverno looks like it's running. Let's do kubectl get deployment. No, I meant to say that; I think we looked at it before, but I want to look at it again.
A
All right, let's play with it a little bit more. Let's actually kick some stuff up here and see what we can do to play with policies. Oh, that was the other thing I wanted to point out; I thought this was really neat. In their documentation, when I apply this, option one: use kube-controller-manager to generate a CA-signed certificate.
A
A
There was a certificate issued for the Kyverno service that is likely being mounted in by the application itself. So likely, when the application comes up, it generates a CSR, and it approves that CSR, because it is cluster-admin, right? This isn't automatically approved, so it makes a call to approve the CSR.
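The CSR flow being inferred here would look roughly like this; the object name and request content are placeholders:

```yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: kyverno.kyverno-svc                   # illustrative
spec:
  request: <base64-encoded PKCS#10 request>   # placeholder
  usages:
  - digital signature
  - key encipherment
  - server auth
```

Submitting this object only creates the request; something with the right RBAC, here the cluster-admin-bound controller itself, then has to approve it, the equivalent of kubectl certificate approve.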
A
Well, that's right. But that's cool: the application is there, and we could see that it's using a certificate that's in line with what we expected. Actually, you can see the issuer CN is kubernetes, so it's actually pulling this, do we say, from the Kubernetes cluster itself. That's actually a pretty neat thing. I've not seen another admission controller that leverages that technique, that leverages that idea, but I love it as an idea, because it is actually scoping that certificate to the cluster itself, which in reality the admission controller...
C
A
Violation has... okay, they're saying they created a role called policy-violation, and they want you to expose it in such a way that other people can see it. That makes sense. So you could do, like, system:authenticated or whatever, right, and just allow the view of policy violations. So we'll play with that here in a minute. For now, if we were to do kubectl get policyviolations: you don't see any policy violations yet, but let's fix that, shall we? We've got that running.
A
Anchors are conditionals. Anchors allow conditional processing, i.e. if-then-else, and other logical checks in validation patterns. The following types of anchors are supported. I think an example... it's going to go like a million miles here. So child elements are handled differently for conditional and equality anchors. For a conditional anchor, the child element is considered to be part of the if statement, and all peer elements are considered to be part of the then clause.
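A conditional-anchor pattern in that style, with an illustrative image value and message: the parenthesized (name) is the if, and its peer image field is the then.

```yaml
validate:
  message: "nginx containers must pull from the internal registry"  # illustrative
  pattern:
    spec:
      containers:
      # IF a container is named nginx...
      - (name): "nginx"
        # ...THEN its image must match this pattern
        image: "registry.internal.example/*"
```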
A
There's our policy, and it looks like it's actually not namespaced, and I can find out if it's namespaced by doing kubectl api-resources. I've been looking for that particular object: grep for policies. So here's our cluster policies, right, that have been defined by Kyverno, and we can actually even just grep for kyverno, and I'll make it a little more interesting. So this gives me the idea of things that are namespaced and things that are not namespaced, right? So in our case, the cluster policies, as you would expect, are not namespaced.
A
That's why it says false, and cluster policy violations are also false, and then we have two namespaced objects: generate requests, which is a namespaced object, and policy violations, which are a namespaced object. So let's play with this. Our policy, right, that we just looked at, was a policy that was going to disallow the Docker socket being part of a mount.
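A sketch of what that policy looks like, modeled on Kyverno's sample best-practice policies (the exact names in the demo cluster may differ):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-docker-sock-mount   # assumed name
spec:
  rules:
  - name: validate-docker-sock-mount
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Use of the Docker Unix socket is not allowed"
      pattern:
        spec:
          # =( ) is an equality anchor: if a hostPath volume exists,
          # its path must not be the Docker socket
          =(volumes):
          - =(hostPath):
              path: "!/var/run/docker.sock"
```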
A
And we would have an average execution time, which is actually pretty cool. They've really got some really thoughtful stuff in here; that's really interesting. So our validation failure action is now enforce, which means it shouldn't allow it in, as far as I understand from the documentation.
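That knob lives at the top of the policy spec; a minimal sketch:

```yaml
spec:
  validationFailureAction: enforce   # reject non-compliant resources at admission time
  # validationFailureAction: audit   # admit them, but record a policy violation instead
```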
A
So let's try this out. Let's do kubectl apply -f docker.yaml, and boom, an error: error when creating docker.yaml: admission webhook nirmata.kyverno.resource.mutating-webhook denied the request: resource Deployment/default/docker-socket failed policy disallow-docker-sock: validation error: use of the Docker Unix socket is not allowed; validation rule autogen-... Now, what was interesting, though, is that we noticed that there kind of was a tricky thing here: like, if it did get through, right...
A
B
A
B
B
A
If true... yeah, the value never changes. There we go, okay. So then what I would expect to happen is that, although we would see the replica set trying to create a new one, that new pod that is being instantiated by the replica set should be... the create should be denied because of the enforced policy. kubectl get pods. That's not what's happening, though. What's happening is it's still being allowed, so there's a hole in the logic here somehow.
A
The new pod should not be allowed through. The new pod should get rejected, because that new pod, generated by the new replica set, generated by the change in metadata, should be denied by the policy enforcement. I hope that makes sense, but I can check with you offline about it. I just thought it was an interesting little corner case. All right.
A
So see how we now have this... how does the language work here? So in this case it's saying: you've created a namespace, and we defined the hard spec and the limits, that stuff, for you, and that's where we see them. That's great; that's totally working. Because if I were to do that while that policy was not in place, then I would expect it to not be applied. If I do kubectl get polv -n tests...
B
A
I would expect, because you had to take some action, that audit would tell me that you took some action, because validationFailureAction was audit, and because you did have to apply it, I would expect that there would be some event, somewhere, that pointed out that a namespace was created without this thing, and because we have failure action audit, we had to apply it: get events in the kyverno namespace.
B
A
Yeah, so that's an entry point. Generate will create new objects, but it would be kind of interesting when generate created one. Generate created the new objects. So in this example, yeah, so in this example, what I'm doing is actually modifying, mutating, the object before it is allowed admission, right? But that was just the mutating object. So I create a namespace, and then other fields inside of that namespace are going to be applied, right? So if I did get cluster policy...
A
Get clusterpolicy ns-quota -o yaml. So, to your question, Waleed, right: we're actually going to apply this rule through the object itself, and that's why we could reference request.object.metadata.name, I mean, inside of that same namespace, because I'm creating a namespace, and by default namespaces don't have any of these things defined. So we're mutating that object that is being created with these new fields, and you can even see how that falls out, right?
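A sketch of a generate rule in that shape, with illustrative quota values; the {{request.object.metadata.name}} variable is what targets the newly created namespace:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: ns-quota            # name as used in the demo
spec:
  rules:
  - name: generate-resourcequota
    match:
      resources:
        kinds:
        - Namespace
    generate:
      kind: ResourceQuota
      name: default-quota   # illustrative
      # place the generated object inside the namespace being created
      namespace: "{{request.object.metadata.name}}"
      data:
        spec:
          hard:             # illustrative limits
            requests.cpu: "4"
            requests.memory: 8Gi
```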
A
B
A
So this is creating a new object, and that new object is the resource quota. Sometimes my brain is just not with me, even in this moment, in these weird moments. Yes, it is a new object. Thank you, Jim; I made that mistake. So you get the idea, right? It's going to create a new object, a ResourceQuota, which is pretty cool. I really like that feature; I mean, that feature seems like it could be super helpful to people. All right, what else do we got, examples-wise?
A
So Shiva Kumar says the resource quota is generated as part of the generate rule; the variable is used to make sure we generate the resource in the incoming resource's namespace. I know, and I kind of really dug that you could actually use that string as a variable; that was really cool. All right. Well, that's pretty cool. So I mean, it's a developing thing. They are open source, and they've got some great engagement so far. Let's see... last time I checked, it was like 200 stars. Come on. Oh yeah.
A
22 forks, 200 stars, 10 contributors. It hasn't been around for too long; they just announced it recently. But I kind of like the intuitive way that they describe policy, and it looks like there are some challenges. Like I said before, what I was expecting was: when a pod is instantiated, regardless of whether it came as part of a deployment, I expect that you would treat that pod independently of whether it was created as part of a deployment or not. It's just a pod.
A
So right now there are still some holes in the logic. But, you know, I mean, this stuff is hard, all right? They're trying to make it easier, and I respect that, and that's really cool. So, it's 2:38. This is a neat project; it's really shaping up really well. I like how it actually goes about its work. I like the policy violation model; I like the options there. One more thing I wanted to show before I go.
A
They have this testing-policies thing, which is not the thing I'm looking for. Where's it at? I saw something in the docs where you could...
A
A
Preconditions provide additional control over policy rule execution based on variable values. Oh, I see. So if it comes in... and you're going to see, this stuff could get out of hand quickly, because when you start adding more logic, especially in preconditions, you could see that it would have to work through more of the logic before reporting back allow or disallow, right? So this could put even more stress on the actual controller code running inside of the tool. I'd be a little concerned about how that's going to work.
A
Background processing creates a thread, probably. What does it do? Kyverno applies policies during admission control and to existing resources, objects that may have been created before a policy was created. The application of policies to existing resources is referred to as background processing. Note that Kyverno does not mutate existing resources, and will report a policy violation for existing resources that don't match mutation rules. Well, that's cool. Validation policy is always enabled for processing during admission control.
A
The resource definitions for testing are located in the test directory. Each test contains a pair of files: one is a resource definition, and the second is the corresponding Kyverno policy. Oh, that's cool! So if you actually wanted to validate that these things are working the way you expect, they provide you examples of how that works.
A
Kyverno solves this issue by supporting the automatic generation of policy rules for pod controllers from a rule written for a pod. The auto-generation behavior is controlled by the pod-policies.kyverno.io/autogen-controllers annotation. By default, Kyverno inserts all, to generate an additional rule that is applied to pod controllers.
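So the policy metadata ends up looking something like this (the policy name is illustrative; the annotation key and default value are from the docs just quoted):

```yaml
metadata:
  name: my-pod-policy   # illustrative
  annotations:
    # "all" (the default) auto-generates matching rules for pod controllers
    # such as Deployment, DaemonSet, Job, and StatefulSet; "none" disables this
    pod-policies.kyverno.io/autogen-controllers: all
```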
A
So now we're seeing the match; that's pretty cool. So what just happened there, which I didn't notice before? If we look at the policy that I wrote, just the disallow-default-namespace policy, the only rules in here are for pods, right? But I did autogen-controllers all. So then what happens is that when I go ahead and define that policy, the resulting policy I get from it actually includes some generated code: get clusterpolicy.
A
A
With the same policy, the same pattern, just actually generated into the spec template, because it is no longer a pod; it's a higher-order object like a DaemonSet, Deployment, Job, or StatefulSet. That is really neat. So that means that if I were to define a pod policy that I cared about enforcing at the pod layer, as long as I have autogen-controllers all, whatever I defined at the pod layer should also be enforced at any higher order, which is really kind of cool.
A
Yes, thank you very much, Jim and Shiva Kumar and Shuting, for jumping in and helping us answer questions. That was really fun, so thank you, thank you, thank you, and we'll see you all next time. I think next week we have none other than Mr. Joe Beda back up; he did say that he was in for next week.