Hi everyone, and welcome to this CNCF webinar. Today we have a very cool agenda for you. For the next hour, we're going to be talking about databases in Kubernetes and how to run them in the most secure and reliable way possible, and, most importantly, how to make sure business- or DBA-crafted guidelines can be applied and enforced during the life cycle of your stateful application. And because it's also important to walk the walk as you talk the talk, we have a couple of demos in store for you. We hope you're going to enjoy it.
Again, welcome to this session, "How to Build Kubernetes Policies to Ensure Compliance for Databases". During the next hour, we're going to be touching upon a couple of CNCF projects related to that topic. The first one is Flux by Weaveworks, which is a continuous delivery solution for your applications, including stateful applications and databases. The second tool we're going to be talking about is Kyverno, which is also a CNCF project and is a policy engine specifically designed for Kubernetes.
So, let's get started. My name is Nic Vermandé, and I'm a Principal Developer Advocate with Ondat. I've been working with Kubernetes for the past five years. Before Ondat, I worked at Aviatrix, a startup focusing on multi-cloud networking, and before that I spent six years at Cisco as part of the engineering team responsible for the Container Network Interface, or CNI. In terms of the agenda for today, we're going to start with databases in Kubernetes and talk about where the industry stands,
and how to do it in a way that can match the requirements of enterprises when it comes to delivering those databases in production. Then we're going to be talking about policy as code, which is focused on how we can put some guardrails around how people deploy databases in production. Toward that same goal, we're going to take a look at GitOps principles and how to build GitOps pipelines for stateful applications that also incorporate some notion of policies and compliance.
But first, let's address the elephant in the room: is it a good idea to run a database in Kubernetes, or, more generally speaking, is it a good idea to run stateful applications within Kubernetes? In 2016, Kubernetes introduced the notion of PetSets, a first try at handling stateful applications as first-class citizens. Prior to that, any application running in Kubernetes was strictly supposed to adhere to the twelve-factor app principles, basically stateless applications.
So, for example, you can attach distinct persistent volumes to the individual pods composing your stateful set. In terms of the network, network identity is guaranteed and stable, meaning that in case of pod failure, the same pod can be restarted on the same node or another node while keeping the same ordinal number as well as the same hostname.
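The guarantees discussed here can be sketched in a minimal StatefulSet paired with a headless Service; all names, images and sizes below are illustrative, not taken from the webinar's demo.

```yaml
# Sketch: a StatefulSet with a headless Service providing stable
# network identities (db-0, db-1, ...) and one distinct PVC per pod.
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None        # headless: gives each pod a stable DNS name
  selector:
    app: db
  ports:
    - port: 27017
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db        # ties pod hostnames to the headless Service
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: mongo
          image: mongo:6.0
  volumeClaimTemplates:  # a distinct PVC per pod, re-attached on restart
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi
```

If pod db-1 fails, it comes back as db-1 with the same hostname and the same `data-db-1` volume, whichever node it lands on.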
Then, when it comes to upgrading your stateful application, stateful sets support both rolling updates and partitioned rolling updates, two features that are critical when it comes to managing your stateful application. All these various options can be tailored and fine-tuned to suit your needs. But the real question is: is this sufficient to manage databases in Kubernetes? We'll get to answer this a bit later.
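A partitioned rolling update is expressed in the stateful set's update strategy; the value below is illustrative:

```yaml
# Sketch: only pods with an ordinal >= the partition value are updated
# on the next rollout; pods 0 and 1 keep the current revision (a canary).
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2
```

Lowering `partition` step by step lets you roll the new revision out ordinal by ordinal, which matters when each database member must rejoin the cluster healthily before the next one restarts.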
All those containers are actually stateful applications, meaning that they have to persist some sort of data to disk. That makes sense, because people have been running stateless applications in containers for a while, but there's no such thing as a completely stateless application: you always have a component that is stateful, maybe a message-queuing solution or a database. So this graph shows that people now want to colocate their stateless applications together with their stateful components. There may be different reasons for this: operations, but also maybe latency.
Now, if we take a look at the top container images running as Kubernetes stateful sets, the result is actually very close, which shows that those popular stateful containers should be run as stateful sets, not as classic deployments, for the reasons we mentioned before: stateful sets have specific requirements when it comes to network identity, stability, hostnames, and also the order of operations.
When you deploy a stateful application, you want to deploy the pods in a particular order, and if you need to scale down, or if you need to upgrade the image of the application, then you need to follow the reverse order. This is really key, because it's not only about deploying pods; it's about making sure that the application running on top of those containers is actually in a healthy state, which, for a database for example, means that the cluster is formed.
So if we remove a node or add a node, then the state of the cluster also needs to be updated, right? This is not automatic, and we're going to see how we can potentially alleviate this. But essentially, those stateful applications need to be run as stateful sets to begin with, and that's only one part of the picture, once you have your stateful set defined with your application template.
Then, as you add a new custom resource, the custom controller will create new instances in AWS. Likewise, as you delete the custom resource, the custom controller will delete the instance within AWS. And you, as a developer, need to embed the knowledge required to perform all these actions within the controller: how to delete an AWS instance, how to create an AWS instance, and so on.
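As a sketch, such a custom resource could look like the following; the API group, kind and fields are hypothetical, not a real operator's schema:

```yaml
# Hypothetical custom resource. A custom controller watching this kind
# would create the matching AWS instance on apply and delete it when
# the resource is removed.
apiVersion: example.com/v1alpha1
kind: Ec2Instance
metadata:
  name: demo-instance
spec:
  region: eu-west-1
  instanceType: t3.medium
```

The point is that `kubectl apply` and `kubectl delete` on this one object drive the whole external life cycle, with the operational knowledge living in the controller.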
So if we apply this principle to a database, then we can have a lot of benefits, right? It can automatically perform operations for stateful, critical components, such as database scaling, backup and upgrade. All of that can be managed by an operator, potentially simplifying the deployment, scale-out and scaling of cloud native applications.
And finally, because there is a reconciliation loop performed by the operator, it also sort of enforces compliance natively, by design. So, for example, let's say that when you deploy the database with the operator, you set up an admin user with specific permissions, so the database gets deployed with that setting, and then let's say you want to change that permission manually, using your database commands.
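The admin-user example could look like the sketch below, loosely based on the MongoDB Community operator's schema; exact field names may differ between operator versions, and the names and secret references are illustrative.

```yaml
# Sketch: a MongoDB custom resource declaring an admin user with
# specific roles. The operator's reconciliation loop will keep the
# deployed database aligned with this declared state.
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: my-mongodb
spec:
  members: 3
  type: ReplicaSet
  version: "6.0.5"
  users:
    - name: admin
      db: admin
      passwordSecretRef:
        name: admin-password        # Secret holding the password
      roles:
        - name: clusterAdmin
          db: admin
        - name: userAdminAnyDatabase
          db: admin
      scramCredentialsSecretName: admin-scram
```

If someone changes the user's permissions out of band with a database shell, the next reconciliation can put them back, which is the "compliance by design" effect.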
So, first of all, there is no standard way to express custom resource settings for a database. For example, how you want to call the storage, the PVC: the path to that particular setting is not defined in Kubernetes, so every operator creator or provider can choose its own schema to express its own settings. There is no standard across all databases, for example. And typically, once you start using operators to deploy and manage the life cycle of applications and software solutions,
you tend to get a sprawl of those custom resources, which can lead to increased difficulty when you are troubleshooting what's happening in the cluster. Then we also potentially have challenges with the supply chain and quality control: as you are probably going to use existing operators rather than create your own, you need to be able to trust the people and the software engineers that are building those operators.
A
How
can
you
validate
that's
the
settings
you
enter
and
you
configure
for
those
schema
are
valid
into
your
environment
right,
and
this
is
what
we're
gonna
try
to
think
about
in
in
the
next
section
and
finally
documentation,
it's
quite
difficult
to
match
exactly
your
use
case
in
terms
of
finding
the
documentation,
so
you
will
find
a
lot
of
operators.
They
have
github
repositories,
of
course,
with
some
example
of
use
cases,
but
there's
no
real
any.
One of the tools we're going to be talking about to realize this is Kyverno, but when it comes to policy as code within Kubernetes, it's not really code anymore. A Kubernetes policy engine should just support YAML, right? Because YAML is what we do in Kubernetes.
So let's take a look at how we can use policy as YAML with Kyverno and apply this to custom resources. There are a couple of principles to be applied when using policy as code, or as YAML. First, we want to decouple the validation or the enforcement of the policies from the directive decisions themselves.
But if you're already running Kubernetes, a new policy language is not necessarily something you want to start with, right? We have YAML, so let's stick with YAML, and if the solution you are contemplating doesn't satisfy your requirements because your policies are too complex, then maybe you can try a solution that involves a new language, like Rego, for example.
So in that sense, the policy-as-code solution doesn't have to be Kubernetes-specific to work, but you should start with your native tools, and in the case of Kubernetes, well, let's just use YAML. We also want to control and validate the source before committing to the cluster.
If we just rely on an admission controller to validate, or you know, mutate the input, then I would say it's already too late. For example, if you have an application composed of five different manifests and two of those manifests don't pass the policy validation, then you'll end up with three manifests deployed to the cluster and two that are not deployed, potentially leading to some inconsistency. So it's better, to begin with, to have the ability to run validation within your GitOps pipeline before deploying the manifests to the cluster.
Optionally, it's always good to have the ability to mutate the input. If you have a non-conformant input, rather than invalidating it and sending an error message, what you can do is transform the input to make it fit within your policy boundaries. There are multiple solutions on the market that can help you with building this policy as code, or policy as YAML: OPA Gatekeeper, Kyverno and Datree are all valid examples, but for this session we're going to focus on Kyverno specifically.
So if you look at the traditional process for handling any API request from a Kubernetes point of view, you can insert webhooks at two different points within that workflow. Mutating admission and validating admission are the two webhooks where you can insert specific logic from any sort of software you want to integrate with Kubernetes.
In the case of Kyverno, the validating admission webhook is used to validate or invalidate specific statements, and in our case, to compare values against policies. If the values are within the policies, then Kyverno will validate the request and send it back to Kubernetes.
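A minimal validation policy, as a sketch; the parameter key checked here is illustrative, since the exact key depends on your storage provider:

```yaml
# Sketch: reject StorageClasses that don't enable encryption.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-encryption
spec:
  validationFailureAction: Enforce   # use Audit to only report violations
  rules:
    - name: check-storageclass-encryption
      match:
        any:
          - resources:
              kinds:
                - StorageClass
      validate:
        message: "StorageClasses must enable encryption."
        pattern:
          parameters:
            encryption: "true"   # illustrative key; provider-specific in practice
```

A request creating a StorageClass without that parameter is denied at admission with the policy's message.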
A quick detail about mutation: you can use either a strategic merge patch or a JSON patch, depending on the granularity you need to go into when modifying a particular field or set of fields. Kyverno is also able to generate new resources when a new resource is created or when the source is updated.
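As a sketch of the strategic-merge-patch style, here is a mutation rule; the label name and value are illustrative, and the `+( )` anchor is Kyverno's add-if-not-present marker:

```yaml
# Sketch: add a default label to pods that don't already have one.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-labels
spec:
  rules:
    - name: add-team-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              +(team): platform   # only added if the label is absent
```

A JSON patch (`patchesJson6902`) would instead spell out explicit `add`/`replace`/`remove` operations on paths, which is useful when the strategic merge semantics are too coarse.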
It also has a notion of preconditions, which means that it can gather data from the admission request payload, the AdmissionReview actually, reuse part of that data and save it into variables that you can further use when building Kyverno policies. In addition, Kyverno supports image verification through the verifyImages rule, which uses Cosign to verify container image signatures, attestations and more, stored in an OCI registry. And finally, Kyverno has integrated JMESPath, whose name comes from the person who developed the language, to perform more complex selection of fields and values, and manipulation of all these fields, combined with filters.
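Preconditions and JMESPath variables come together like this; the rule below is a sketch, and the internal registry name is illustrative:

```yaml
# Sketch: only evaluate the rule on CREATE operations, using a variable
# pulled from the AdmissionReview payload via JMESPath.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-on-create
spec:
  rules:
    - name: only-on-create
      match:
        any:
          - resources:
              kinds:
                - Pod
      preconditions:
        all:
          - key: "{{ request.operation }}"   # JMESPath over the request payload
            operator: Equals
            value: CREATE
      validate:
        message: "Images must come from the internal registry."
        pattern:
          spec:
            containers:
              - image: "registry.internal.example/*"   # illustrative registry
```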
So let's take a look at how we can integrate Kyverno with your traditional continuous integration pipeline. A GitOps pipeline allows you to use Git as the single source of truth for both your application code and your Kubernetes manifests; they can actually sit within the same repository.
Then you specify the container name, tag and other details in your Kubernetes manifests, and you can use a tool like Kustomize to do that. Its job, depending on your target environment, is to fill the right fields with the container information and environment-specific values as well.
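An overlay doing that job could look like the sketch below; the paths, names and tag are illustrative:

```yaml
# Sketch: overlays/prod/kustomization.yaml
# Fills environment-specific values on top of a shared base.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
replicas:
  - name: frontend
    count: 5
images:
  - name: frontend
    newName: registry.example.com/frontend
    newTag: "1.4.2"     # typically set by the CI pipeline per environment
```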
So once you have your Kubernetes manifests ready to be pushed into your Kubernetes cluster, you can have staging, prod and dev Kustomize overlays. Then you will have a GitOps tool, in our case Flux, that will pick them up and make sure that the reconciliation loop synchronizes the state of the cluster with what you have within your Kubernetes manifest repository, and so on.
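The Flux side of that picture boils down to two resources, sketched here; the repository URL and paths are illustrative, and the API versions depend on your Flux release:

```yaml
# Sketch: tell Flux where the manifests live and what to reconcile.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: app-manifests
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/app-manifests   # illustrative URL
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app-prod
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: app-manifests
  path: ./overlays/prod   # Flux builds the Kustomize overlay natively
  prune: true             # delete cluster objects removed from Git
```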
In the diagram here you can see that Kustomize is used as part of the pipeline, but Flux also supports Kustomize natively, which means that the only thing you need is the overlays in the Kubernetes manifest repository, and Flux will be intelligent enough to leverage those overlays, create the target manifests and then use them to deploy your application within the target Kubernetes cluster. Which leaves us with a question: where can we integrate Kyverno in this picture? There are actually two solutions.
The first one is that you can use Kyverno as a CLI, let's say within this dotted line here. As you build your Kubernetes manifests, just after that, you can use the CLI to check them against the policies that are defined as YAML files. So the Kyverno CLI will take, on one side, the Kubernetes manifest YAML files, and compare them against the policies written in those YAML files, the Kyverno YAML policies that are also sitting in the repository, right?
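In a pipeline step, that check could be a sketch like the following; the directory layout is illustrative, and it assumes the `kustomize` and `kyverno` CLIs are available on the runner:

```shell
# Sketch: render the prod overlay, then validate the result against the
# Kyverno policies stored alongside the manifests, before Flux ever syncs.
kustomize build overlays/prod > /tmp/prod.yaml
kyverno apply policies/ --resource /tmp/prod.yaml
```

A non-zero exit code fails the pipeline, so non-conformant manifests never reach the cluster.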
The second solution is that things happen after the Kubernetes manifests have been committed and picked up by Flux. Flux is going to first check the manifests; if there's a difference, it notices a change and needs to synchronize with the Kubernetes cluster, so it will send those files, through more of a pull mechanism, into the Kubernetes cluster and apply the manifests. As a result, the admission controller will either authorize or prevent the manifests from being deployed into the cluster.
It's picked up by Flux, Flux applies the manifests to the cluster, and the admission controller then changes part of the values to match your policies, or so that the values are within your policy boundaries. But then, as a consequence, one could argue that okay, now the Kubernetes manifests here on the Git repo are not the source of truth, because some of the values have been tampered with by the admission controller, and that is a fair statement.
Therefore, it's up to you whether to mutate or validate within the cluster, but my personal preference would be to keep Git as the single source of truth, so use Kyverno in the context of a GitOps pipeline, as just a CLI tool within the workflow.
So I hope this makes sense to you, and let's just sum it up: enforcing compliance with Kyverno, when, where and how. When? Well, ideally during your pipeline execution, and preferably, if you're using Kubernetes, GitOps is the best-of-breed solution to implement continuous integration, also using Flux as the continuous delivery mechanism, so enforce compliance as part of this pipeline. Where? Obviously you want to have your Kyverno policies sitting in a Git repository. How? Well, preferably using the Kyverno CLI, which will, on one side, leverage the manifests in one Git repository for the Kubernetes application and, on the other side, the Kyverno policies, which are also represented as YAML files, probably in another Git repository.
So now, let's check this in real life in our demonstration, where we're going to be creating Flux sources and Kustomizations so that our application can safely be deployed in the Kubernetes cluster, leveraging GitOps principles. We will validate the application using the Kyverno CLI, off-cluster, and then, just for comprehensive testing, we'll also show you the same thing, but this time with an admission controller that will validate and mutate non-conformant resources.
Let's get started with the demo. This demo is available as part of a workshop, or lab, I've developed on Instruqt, and I'll make sure to put the link in one of the last slides of the deck so that you can also use it if you wish. So, let's launch this lab. Okay, let's jump into the last section of this lab.
The first thing we're going to do is validate some policies with Kyverno off-cluster, meaning that we're going to be leveraging some application manifests made for two distinct environments. The first environment is a dev environment; the Kustomize overlay is there, and we have a couple of components within this application.
We have a front end, which is a web application, a Flask application displaying Marvel characters that are picked from a MongoDB database. This database, as you can see here, is represented in YAML as a custom resource. As soon as we push this manifest, the MongoDB operator that has been installed in the cluster will react and deploy a stateful set and also the MongoDB cluster.
On top of that stateful set, we also have a storage class: we're going to be using Ondat as the storage class for the stateful set. Ondat will be responsible for the underlying distributed storage layer, providing enterprise-grade features such as at-rest and in-transit encryption, persistent volume replication, NFS shares if you wish, optimized performance, etc.
That protects the information that is sitting within this database. Then the production overlay is essentially a replication of the dev section, but with some differences. For example, you can see that in the dev environment we have two replicas for the front end; in prod, we'll have five. In the prod overlay, we also have a specific service that exposes the application to the outside world; in dev, this is just a ClusterIP, so it's only available within the cluster boundaries. And in terms of the storage class,
A
There's
also
some
differences
you
can
see
in
the
dev
environment.
We
have
no
replica
no
encryption
for
production,
we
will
enforce.
I
mean
we
want
to
enforce
two
replicas
and
encryption
as
well,
and
for
this
we're
going
to
be
using
kyoverno
policies,
so
the
validation
policies
are
described
there.
So
first
in
relation
to
the
mongodb
database,
what
we
want
is
to
have
here
one
user
admin
that
has,
I
mean
at
least
one
admin
user
that
has
all
the
permissions
that
are
sitting
there.
If you compare to the original manifest in the application, you can see that it has four different roles, and on top of this, we also want to check that encryption is enabled. You can see that the kind of this object is ClusterPolicy; all the policies have this type, but again, it's just YAML, right? Nothing too different from a normal Kubernetes manifest.
So now what we want to do is validate the policies with Kyverno using only the text files, only the manifests, nothing in-cluster. For this, we're just going to check that we don't have any policies in the cluster, or any application: the application has not been deployed. We have Kyverno installed, but we shouldn't have any ClusterPolicy objects (cpol for short); we don't have any policies implemented in the cluster yet.
So before checking the policy, let's introduce some non-compliant information. For this, we're going to go back into the editor, and we're going to be using the production overlay to simulate this non-conformant information.
We generate the manifests for the production environment and then pipe them into the Kyverno CLI, which will be using this directory as the source for the policy validation, which is exactly what I showed you before. Okay, let's run this, and hopefully we should have three errors. So these are the errors I've just introduced: we have one which is "require permission", so we have one permission missing.
If you remember, then there's the encryption that is not enabled in the storage class, and also the PVC volume size that is greater than 10 GiB. Next, we're going to see what happens if we use the dynamic admission controller for the validation, basically doing exactly the same thing, but this time applying the manifests to the cluster and using Kyverno in-cluster.
So we need to deploy the policies in the cluster first. Here we go, and if I check the policies now, there are four of them. Okay, now all of them are ready to go. I'm not going to use Flux right now to deploy the application; I'm going to use Flux for the last one, for the mutation. Well, it's the same principle; the only difference is that I'm using kubectl apply rather than a GitOps CD methodology.
So now, again, let's see what happens if I paste the command here. Again, we're going to be using Kustomize to build the manifests and pipe them, this time not into the Kyverno CLI but directly into kubectl apply, deploying all the manifests to the cluster. As expected, we have errors here, and they're exactly the same ones as before, complaining about the volume size, the permission and the encryption. But the difference now is,
if I look for the application, you can see that when I applied this manifest, part of the application is now running. Of course, we are missing the non-conformant resources, which are the storage class and the stateful set. So our application is currently broken, and this is not necessarily something you want, right?
So this is why I was telling you before that, for validation, it's better to use it in a pipeline. As for the mutation, then again, if you're using GitOps and you mutate values as your application is getting deployed to the cluster, is it really GitOps? I'm not sure, right? But let's proceed with the next section, which is going to be around mutating these resources. First, let's delete the application altogether so that we can use Flux to deploy it again.
We're going to enforce a couple of settings here. Regardless of what the user inputs in the storage class manifest, we will change it and force the number of replicas to two, force encryption, and force the usage of XFS as the file system, which is the one recommended. In terms of the volume size, we're going to enforce 5 GiB for all the volumes that are going to be attached to the MongoDB stateful set, so not only for data but also for logs, for any volume that is going to be attached there. Okay, so let's go back to the console.
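A mutation policy enforcing those storage settings could be sketched as follows; the Ondat parameter keys shown are illustrative assumptions, so check your provider's documentation for the exact names:

```yaml
# Sketch: overwrite StorageClass parameters at admission, regardless of
# what the user submitted. Parameter keys are illustrative.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: mutate-storageclass
spec:
  rules:
    - name: enforce-storage-settings
      match:
        any:
          - resources:
              kinds:
                - StorageClass
      mutate:
        patchStrategicMerge:
          parameters:
            storageos.com/replicas: "2"       # assumed Ondat replica key
            storageos.com/encryption: "true"  # assumed Ondat encryption key
            csi.storage.k8s.io/fstype: xfs    # force XFS as the file system
```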
A
The
policies
have
been
mutation.
Policies
have
been
created,
let's
just
check
now
again
the
cluster
policies,
so
you
can
see
these
three
new
policies
are
now
living
in
the
cluster
and
what
we
need
to
do
now
is
to
create
the
the
flux
resources
so
that
flux
can
deploy
the
application
into
the
cluster
in
a
git
ups
fashion.
So,
first
we
need
to
tell
flux.
where the repository to monitor is, the one with the non-conformant resources, and then we also need to create the corresponding Kustomization to tell Flux what to do with it. Okay, this is done. So now, if we look at the fleet-infra repository, which is the config repository that hosts the configuration for Flux, we've been using the apps prod YAML file, which is this one.
So now, if I display this prod YAML file, you can see these two resources: the source, the GitRepository, and the Kustomization.
So the repository that I've configured there is essentially a mirror of the changes that we made when we changed the conformant prod manifests into non-conformant ones. This is exactly what this Git repository contains.
So after a few seconds, we should see a new Kustomization getting reconciled within the cluster and the application getting deployed, and what we're going to see, hopefully, is that the settings that have been set within the manifests are then overwritten by Kyverno. So now, let's verify the application configuration.
For this, we can check the volume sizes for the MongoDB database, and you can see that we moved from 50 GiB and 1 GiB respectively for the data volume and the log volume to 5 GiB, which is what has been enforced by the policy. Now let's also get the storage class.
And finally, let's check that our application is running. We have our job that is now completed, we have those five replicas for the front end, and we should also see a LoadBalancer-type service there. If we browse to that IP address on port 8080, we should see our application running. So it is there, it's working: we have some random Marvel characters being displayed on the screen, as expected.
We've also seen that GitOps and policy-as-code principles provide best-in-class paradigms to manage the enterprise application life cycle: we've been testing Flux and we've been testing Kyverno. Finally, embrace these principles to enhance your platform security, facilitate collaboration between development teams and, ultimately, experience faster innovation cycles. And finally, a couple of actions on your side: if you want to test the lab, you've got the link there, or if you want to learn more about Ondat, we also have all the labs you can try out.