Description
Blog post with URLs & insights: https://everyonecancontribute.com/post/2021-03-10-cafe-20-securing-kubernetes-with-kyverno/
Continuing from last week, not breaking security but making it more secure :)
https://everyonecancontribute.com/post/2021-03-03-cafe-19-break-into-kubernetes-security/
Twitter thread: https://twitter.com/dnsmichi/status/1369697355281367047
A: So, we have been discussing what we're going to talk about today. As a quick peek: today we meet for the 20th iteration of our coffee chat, and since we've been trying to break Kubernetes security lately, we're now going to look into more ways of making it more secure. Kyverno is one of the things, the other one is encrypting etcd, but I don't want to spoil anything. I'm happy that Philip has prepared something for today, so I'll just hand it over to you, Philip.
B: Yeah, hi, welcome again. Just a small wrap-up from last week: we created our cluster again, so we have one control plane and two, let's say, worker nodes. We have the same namespaces as last time: the everyone-can-contribute namespace and my workspace, and again we have two contexts.
B: Last week we did some intrusion, or let's say some security reviews, of the default Kubernetes setup you get when you create a kubeadm cluster. We could create privileged pods, and with those we could read secrets. We could gain SSH access to the worker node, and work from there with crictl and the container runtime.
B
We
did
some
more
effort
to
finally
end
up
on
the
master
master,
our
control,
plane,
node
and
there
we
had
basically
complete
access
to
our
cluster
and
we
could
basically
also
take
over
took
over
the
cluster.
We
renewed
some
certificates,
we
could
change,
ssh
keys,
etc.
So
this
was
the
status
what
we
had
from
last
week
this
week.
B: This week we want to dive a little bit deeper into how to secure Kubernetes. There are default recommendations in the Kubernetes documentation: when we take a look together here, you have infrastructure security, cluster security, and container and code security. On the infrastructure part we have network access to the API server. This is not part of today, because that concerns external access to the API server; with the cloud providers you could manage this, but it's not part of our presentation today.
B
What
we
will
focus
on
is
the
rp
security,
the
access
to
lcd
and
the
lcd
encryption
for
this
infrastructure
section,
and
then
we
have
some
pod
security
policies
in
place,
and
then
we
also
check
some
container
security
policies
to
check
a
little
bit
the
best
practices.
What
we
can
do
on
our
own
to
secure
the
cluster
which
we
set
up
on
our
own,
so
there
I
want
to
start
with
the
rp
server
and
I
prepared
my
commands
a
bit
faster.
B: Okay, so when we check the API server: also last week we talked about the command the API server is executed with and the parameters it starts with by default. This is a kubeadm setup, so it should be secure, but it could be that somebody has configured some insecure parameters. For example, the insecure port is set to disabled, as we see with the zero. This is very good: we only use the secure port with HTTPS.
B
Now
plain
text
is
enabled,
and
also
no
no
local
setup.
We
can
also
see
this
here
on
the
rp
server
local
host
port
with
the
insecure
port
flag.
You
could
also
add
some
insecure
byte
address
and
some
more
parameters,
and
what
you
really
want
to
check
for
is
the
anonymous
of
true.
If
this
is
enabled
you
could
basically
curl
the
rp
server
and
get
some
information
about
the
cluster
without
authentication,
the
ncq
port,
when
it's
set
to
somebody
so
default
would
be
8080.
B
What's
the
word?
Oh,
my
god,
sorry,
security
exposure.
So
this
you
also
want
to
check
if
it's
not
there,
so
in
our
cluster
again,
what
is
default
cube,
adm
setup.
We
can
see
that
we
don't
have
any
of
this
of
this
flag
set
except
this
one,
but
this
is
with
zero,
is
disabled.
So
on
the
rp
server
we
are
good
to
go
also
in
the
cloud
providers.
They
also
have
the
same
recommendations
on
the
rp
server.
B
But
if
you
have
a
managed
quantities
then
they
will
do
it
for
you,
but
it's
good
to
know
and
to
check
and
maybe
go
for
it
and
dig
a
little
bit
deeper
in
the
rp
server.
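The flags discussed above live in the API server's static pod manifest. A minimal sketch of the relevant part of /etc/kubernetes/manifests/kube-apiserver.yaml as kubeadm lays it out (flag values here are illustrative of what to audit, not a verbatim dump of the session's cluster):

```yaml
# Excerpt from /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm-style).
# Things to audit: --insecure-port must be 0, no --insecure-bind-address,
# and --anonymous-auth should not be true if you don't want unauthenticated reads.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --secure-port=6443        # HTTPS only
    - --insecure-port=0         # plain-text localhost port disabled
    - --anonymous-auth=false    # illustrative; verify what your setup uses
```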
B
What
we
want
to
do
now
is
with
remember
last
week
we
connected
to
lcd
and
could
read
sequence
and
plain
text
for
this.
We
can
do
it
again,
we're
just
creating
a
secret
again
sorry,
so
so
we
create
a
secret
and
everyone
can
contribute
namespace.
It's
a
generic
secret
and
recorded
plain
text
secret,
and
then
we
set
the
password
as
plain
text:
okay
and
then
we
go
again
to
our
lcd
port
and
then
cd.
B: This is explained here; the important thing is: if you have a running cluster — and this is not new — you have the identity provider, which is this one, and this is basically the provider for plain text. What Kubernetes does with the encryption configuration is process the providers in order: it will go through the list and check what the first provider is, and if it's identity, it will store the secret in plain text, and so on.
B: So in this example, you want to put the encryption provider first and the identity provider after — and this is also important, because you need the identity provider there for reading the existing secrets. For reading, the order doesn't matter, but you need the identity provider in the configuration in order to still be able to read the plain-text secrets.
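Putting that together, a minimal EncryptionConfiguration as described in the Kubernetes docs, with aescbc first (used for writes) and identity last (so existing plain-text secrets stay readable). The key name "key1" is just a label, and the secret is a random 32-byte value, base64-encoded:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # The first provider in the list is used to encrypt new/updated objects.
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>  # e.g. head -c 32 /dev/urandom | base64
      # identity must stay in the list so secrets written before the
      # switch (still in plain text) can be read back.
      - identity: {}
```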
B: Okay, and then we go to our default Kubernetes directory. What we want to do here is create a new directory — I just call it, I don't know, api — and then I create a new file in it.
B: And here we just use this config. There are some providers available, for example aescbc. Here you can see: the strength is strongest, the speed is fast, the key length is this. You can decide what you want to use — I think secretbox is the newer standard here — but let's go with this one, it's the recommended choice for encryption at rest. So we will just copy the configuration here. Okay, and then there's a workflow: we just generate a random base64 secret. There we have it, and then we copy it into our configuration.
B: And what we want to do now: as we created a new directory, the kube-apiserver will not know about it, it's not there for it. So we have to go to the manifests folder. I'm there, so I check the kube-apiserver manifest, and as you see, at the bottom of the file you have the volume mounts — we have the folder /etc/kubernetes mounted — and what we want to do is mount our directory too. But I made a little mistake and used the wrong folder, so I will just move my api folder to /etc/kubernetes.
B
I
remove
my
encryption
configuration
okay
now
we're
good,
so
we'll
check
here.
Okay,
so
I
created
a
file
with
some.
I
think
I
created
the
folder
here
to
the
etc
communities
api,
and
here
I
place
my
configuration
finally
after
some
workarounds.
So
what
we
want
to
do
now
again
is
go
to
the
the
manifest
folder.
So
we
manifest
and
then
update
the
api
server
configuration.
B: Then we just use the api folder and we call it api. Okay, so we're mounting our new directory into the pod — this is the volume — and now we have to create the volume mount inside the container. There we go again.
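The two pieces being added to the kube-apiserver manifest look roughly like this (the path /etc/kubernetes/api and the volume name are the ones chosen in this session; the encryption config file name is assumed for illustration):

```yaml
# Additions to /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --encryption-provider-config=/etc/kubernetes/api/encryption-config.yaml
    volumeMounts:
    - name: api                      # mount the new directory into the pod
      mountPath: /etc/kubernetes/api
      readOnly: true
  volumes:
  - name: api
    hostPath:
      path: /etc/kubernetes/api      # beware trailing whitespace here
      type: DirectoryOrCreate
```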
A: If it's easier to just check the logs, just do that. I'm just thinking out loud about what I would use for debugging.
B: Yes, we could use yamllint for sure, but that would not tell us about a typo in the path or something. So I just want to recheck — one second. Api... okay, this looks good. Okay, then we can also check again.
B: I have to put my config here, because it's not prepared for containerd. So we create the crictl config and just paste this here: for the unix socket we point it at containerd, and then we will see some containerd containers like last time.
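The crictl configuration for containerd is a small file; a sketch of /etc/crictl.yaml (the standard containerd socket path, adjust if your node differs):

```yaml
# /etc/crictl.yaml — point crictl at the containerd socket
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
```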
B: So I used containerd as the container runtime here, and crictl is the command-line tool. It does not know that we are running containerd, so I have to give the config to crictl.
B: Okay, yes, you can create a config file: you give it the right container runtime socket, and then you can use crictl like we used it last time. Here we are.
C: And if you want to use a more compatible CLI, you can use nerdctl, that's also from the containerd project. It's like docker's CLI, so it's more intuitive for new users, because the main point behind it is that docker basically only proxies commands directly to containerd, or manages containerd instances.
A: So probably the next session should be about debugging Kubernetes. Yes.
B: But normally this should be enough — in my preparations it worked five times.
B: But like I said, it was not planned, it was not rehearsed.
B: Then we just check our file again. So we removed our host path... what did we do? We created this one. Okay, we did the...
C: I probably know the problem with the reload: you're using vim, right? Yes — and vim writes to a temporary file, not directly to the file itself. So when you write it, the inotify event probably won't be triggered, so the kubelet doesn't notice the change. So probably if you open it with nano or something like that and make a change, it should be picked up. The reload problem is probably related to inotify.
B: What I'm wondering is... okay, we're back online. Now we see it. Interesting.
B
So
we're
mounting
the
volume
to
the
folder
to
the
pot
and
then
we
created
a
mount
the
volume
mount
and
I
did
a
space
issue
there.
So
I
added
the
trailing
white
space.
B
So
your
first
idea
to
make
a
yamaland
would
help
us
a
lot.
B: Yeah, I know, but basically I thought: just go on the server, edit the file, restart the API server and leave — that would be enough for us. But yeah, the live session. Okay, so what we did now: after some detours, we finally have our configuration encrypted. What we do now, back on track, is create a new secret. Again in the everyone-can-contribute namespace, we call it encrypted-secret, and the password is "encrypted" in some capitals and lowercase letters.
B
Yeah,
so
this
secret
now
was
created
encrypted
and
if
nobody
has
the
key,
we
are
not
completely
safe,
but
we
are
safe
here.
Let's
say:
here's
the
old
one
which
you
can
still
read
yeah
and
then
there's
also
a
good
thing
from
kubernetes
over
documentation.
You
will
see
here
a
small
little
command
on
the
section,
verifying
that
your
data
is
encrypted.
You
can
recreate
it
and
read
it,
and
then
we
have
to
ensure
our
secrets
are
encrypted.
So
we
just
run
this
command
here
as
an
administrator
for
sure.
B
I
can
show
the
comment
at
the
meantime.
It
gets
our
secret
puts
it
out,
makes
it
an
output,
json
and
then
just
replacing
the
existing
secrets
with
the
new
generated
ones
and
then,
as
we
enable
the
encryption
configuration
on
the
api
server,
you
know
we'll
have
all
the
secrets
encrypted
in
the
fct
server.
B
B
B
B: So basically, in a multi-tenancy cluster you want no user to have access to the kube-system namespace. Then we can go on checking these things here, the components in the cluster: RBAC authorization — we discussed it and Niklas presented it. The authentication part we will do later, maybe with OpenID. And then what you have is network policies, quality of service, TLS for ingress. This is what is really covered.
B
I
think,
by
max
the
network
policy,
a
little
bit
more
advanced
topic
which
you
can
take
in
on
session
and
we
can
discuss
now
the
bot
security
policies
and
there
we
have
this
this
tool
or
this
software
called
kivano,
which
basically
is
the
admission
controller.
When
we
looking
at
kubernetes,
give
me
a
sec.
B
It
was
a
nice
picture.
No,
this
is
not.
So
what
is
the
admission
controller?
So
niklas
talked
a
little
bit
about
it
when
you
do
ap
api
request.
This
is
handled
by
the
api
server.
It's
a
web
server
and
at
the
end
it
will
be
persisted
to
at
cd.
B
What
we
basically
do
is
the
authentication
authorization
part.
This
is
the
lbsc,
and
then
we
have
the
admission
control
admission
control
intercepts
the
the
request
before
they
are
written
to
the
lcd
server
or
before
they
are
persisted,
and
you
can
validate
the
request.
You
can
mutate
them,
you
can
generate
them,
but
the
most
most
way
of
using
this
is
mutating
and
validating
that
mission.
So
basically,
for
example,
you
will
send
a
request
to
api
server
to
create
a
namespace,
and
you
can
do
a
validating
admission
to
check
if
the.
B
If
the
namespace
has
some
labels,
because
you
will
need
labels
later
for
network
policy,
so
you
want
your
namespace
is
labeled
or
you
can
add
a
mutating
admission.
You
create
the
namespace
and
your
policy
or
your
mutating
admission
controller,
adds
and
adds
a
label
to
the
namespace.
So
these
are
the
basic
use
cases,
and
for
this
you
can
use
kivano.
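As a sketch of the mutating use case just described: a Kyverno ClusterPolicy that adds a label to every new namespace that doesn't already have one (the policy name and label are made up for illustration):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-label          # illustrative name
spec:
  rules:
  - name: add-team-label
    match:
      resources:
        kinds:
        - Namespace
    mutate:
      patchStrategicMerge:
        metadata:
          labels:
            # the +() anchor means: add only if the label is not already set
            +(team): unassigned
```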
B
When
you
look
at
the
introduction
about
quirano
or
caivano,
I
don't
know
how
this
is
that,
so
it's
basically
it's
an
admission
controller
which
validates
mutate
organization,
all
the
other,
the
the
api
commands
we
will
send
to
lcd
and
create,
create
or
validate
it
or
yeah.
With
this
thing,
so
what
we
can
do
is
to
just
install
it
with
a
small
command
and
there
we
can.
B
Okay
and
then
we
can
just
a
little
bit
explore
it
now
what
it
did
it
added
some
custom
resource
definitions,
custom
resource
definitions
are
here
a
small
overview
is
there
are
extensions
of
the
api,
so
we
will
have
now.
With
this
custom
resource
definitions,
we
will
have
some
new
commands
available
in
lcd
and
on
the
kunis
cluster.
Sorry,
for
example,
we
will
have
cube
ctr
yeah
good.
Then
we
have.
B
B
B: Yeah, I'll just get it here. For example, we now have this cluster policy — so, "no resources found" — and then you can also do policies. It's fun because we have not created anything yet. So these are basically custom resource definitions: we have new resources available in Kubernetes.
C: Yeah, a short question: what are the policy reports? What are they doing?
B: We'll make a small shortcut — maybe save your question for later. It created a service account, it created some RBAC roles and permissions, some cluster role bindings, it created the config map and created the service. What we can basically check — because the Kyverno system works like this — is that your policies will be checked in the background. Okay, one second, give me one moment: I have to close the baby phone.
B
Okay,
I'm
back
so
yeah.
I
can
hear
my
wife
talking
with
my
girl,
everything
just
a
little
bit
annoying
now
in
the
session,
but
what
kiwano
will
do?
There
are
two
ways
of
working
with
policies.
You
have
enforced
mode
and
you
have
audit
mode
okay,
and
this
we
can
see
here
with
introduction
background
scans.
B
Now
you
have
enforced
mode
and
the
audit
mode
and
a
background
check,
background
fault
or
background.
True
kiwano
validates
in
the
background,
your
policies,
okay-
and
this
will
be
done
like
discussed
here-
every
15
minutes-
and
you
can
also
see
here
when
your
audit
is,
for
example,
you
you
have
a
running
cluster.
You
will
test
the
policies,
you
create
your
policies,
you
you
do
the
audit
mode
and
then
you
will
see
a
report.
B
How
many
ports
are
matching
your
policies?
Are
they
failed?
Are
they
passed
so
you
can
test
new
policies
and
basically
check
what
will
happen
in
your
class
stuff
yeah?
Then
you
can
do
report
on
it.
You
can
check
it
and
so
on
so
and
then
in
the
background,
every
15
minutes
it
will
be
it's
running
and
yeah
we'll
get
make
the
report.
B
Then
the
enforce
mode,
for
example,
as
you
see
here
when
you
have
a
new
resource
created
and
you
have
the
enforce
mode
with
background
true,
you
will
see
non-report
because
enforced
means
it's
false
and,
for
example,
if
I
don't
allow
you
to
run
a
pot
with
privileged
and
I
make
it
enforced,
it
just
blocks
the
pot
and
then
you
cannot
run
it,
but
there
will
be
also
no
report
for
you
available.
So
you
do
not
see
how
many
agents
or
how
many
agents
say
how
many
users
are
doing
this.
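The enforce behaviour described above comes from a validate rule; a condensed sketch of the common disallow-privileged-containers policy (with validationFailureAction set to enforce, the pod is rejected outright; audit would only report):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers
spec:
  validationFailureAction: enforce   # "audit" reports instead of blocking
  background: true                   # re-checked by the background scan
  rules:
  - name: privileged-containers
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Privileged mode is not allowed."
      pattern:
        spec:
          containers:
          # =() anchor: if securityContext/privileged is present, it must be false
          - =(securityContext):
              =(privileged): "false"
```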
A: Quick question: what's the URL?
B
Okay,
so
what
we
basically
maybe
want
to
do
first,
is
I
created
some
default
policies
which
we
can
take
a
look
later
and
we
just
apply
them
now
then
see
see
later
the
outcome,
so
we
can
explore
a
little
bit
because
it's
running
in
the
background.
Okay
and
then
we
can
create
some
new
ports
and
so
on.
So
what
we
do?
We
just
allow
some
host
namespace
privilege,
antennas
and
privileged
containers.
B
B
B
B: So basically, yeah, then we have a config map — this is really important — so we just go down... okay, I think it's in the kyverno namespace.
B
So
you
can
some
exclude
some
namespaces
if
you
want
or
replica
sets
are
excluded,
maybe
because
I
know
you
normally
create
deployments
and
even
sets
the
policy
in
the
cluster
report.
Things
are
excluded,
so
you
can
exclude
your
namespace.
B
If
you
want
a
cube
system
yeah,
it's
a
good
good
way
to
exit
it
because,
for
example,
if
you
would
need
to
run
privileged
pot
in
your
cube
system,
for
example,
the
the
network
plug-in
the
cni,
which
you're
running,
for
example,
psyllium
or
calico
or
flannel,
they
will
run
privileged
pots
to
enable
network
for
your
cluster.
So
you
definitely
don't
want
to
block
them.
That's
why
they
are
excluded
here,
but
just
you
just
have
to
keep
in
mind
that
chip
system
is
executed.
B
B
B: Okay, then we can go on. We also have a service here: this is basically the admission controller, where the validation and the mutating and generating things happen. This is called every time. Then we can also explore a little bit: you have a mutating webhook configuration for policies, a mutating webhook for resources, a verifying one, and so on.
B: I don't want to dig that deep into these things. We can go here now and see our cluster policies, and there we see background true for everything — so every 15 minutes this will be validated — and we have the action here. So we enforce the disallow-host-namespaces, the privileged things and the namespace labels, which we definitely want enforced, and we audit or monitor some other policies.
B
Then
we
will
have
the
the
thing
niklas
wanted.
The
policy
report.
The
thing
on
vienna
is
that
the
policies
are
generated.
Not
the
report
is
not
on
policy.
It's
unnamed
base,
so
the
name
space.
Everyone
can
contribute.
Have
this
policy
report
so
polar
is
a
short
abbreviation,
so
you
could
also
use
it
here.
B
Polar
yeah,
pole
r,
and
then
you
have
the
report
here
so
14
of
our
of
our
configuration
items
passed
and
four
failed
and
in
the
other
name
space,
it's
even
worse,
so
10
failed
okay.
So
we
can
audit
this
or
check
what's
happening
there.
In
my
space,
for
example,
that
we
have
a
comment
we
can
make
this
describe
report
in
the
namespace
and
then
we
just
grab
the
status
phase,
and
then
we
have
a
nice
overview
or
not
not
nice,
but
we
have
overview,
for
example,
in
the
pod
showcase
one.
B
We
have
the
validation,
unknown
image
registry,
so
I
use
the
registry
which
is
not
trusted
by
us,
our
cluster
and
so
on.
Those
are
some
things
you
could
can
do
here
and
if
you
want
to
dive
deeper
and
something
then
just
tell
me
here
and
then
we
have
also
the
cluster
policy
report,
so
this
is
for
objects
which
are
not
namespace
like
namespaces
cluster
objects,
which
are
not
namespace.
B
They
are
in
the
cluster
policy
report.
Everything
else.
What
is
namespace,
for
example,
like
even
set
deployment
spots,
will
be
in
the
policy
report.
Section
yeah,
so
you
can
also
see
here.
There
are
some
failed.
Let's
see
why
we
have
the
validation
error.
The
label
cafe,
slash
number
is
required
for
a
namespace,
so
we
did
not
add
caffeine
number
20
here.
So
this
is
a
validation
of
our
error
of
our
namespaces
yeah.
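The failing check comes from a require-labels style rule; a sketch of a policy that demands the cafe/number label on namespaces (label key taken from the session, policy name illustrative):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-namespace-labels   # illustrative name
spec:
  validationFailureAction: audit   # report instead of block
  background: true
  rules:
  - name: check-cafe-number
    match:
      resources:
        kinds:
        - Namespace
    validate:
      message: "The label cafe/number is required for a namespace."
      pattern:
        metadata:
          labels:
            cafe/number: "?*"      # any non-empty value
```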
B: So I think everyone who's working with Kubernetes should know that you should also label namespaces, because later on you will definitely need it.
B
I
think
you
could
write
a
policy
for
this
and
you
should
see
it
then,
on
the
cluster
policy
report.
Okay,
I
think
the
the
thing
is
here
we
can.
We
can
also
look
at
one
policy,
for
example
this
one.
This
is
the
the
layout
of
the
policy
and
what
you
see
here,
it's
like
the
the
big
pro
argument
of
caveno.
Is
it's
like
cloud
native
integrated,
so
you
just
use
your
yummy
files.
B
C
C: The simple case would be: hey, I want to see everyone who has cluster-admin or system rights.
B
But
the
thing
is
there
should
be
a
there
should
be
a
validation.
You
know
it
must
be.
I
think
it
must
be
more
like
is
if
this
niklas
or
is
somebody
else
except
nicolas
clasman.
I
think
this
should
be
the
rule
here.
In
this
case,.
A
What's
what's
the
most
most
common
attack
vector
so
like
what
we
tried
last
week,
yeah.
A
Common
use
case
or
say
I
have
no
idea,
yes
or
maybe
the
forbidding
the
amount.
B: Yes, you could do a lot. For example, here we have a policy for privilege escalation: allowPrivilegeEscalation must be false, and if it's not false, you will get an exception. We can also try it soon. We have the privileged-container one, so if privileged is not false, then you will get an error. We have this disallow-secrets-from-environment-variables — but I would maybe not enforce that, I don't know; you can just audit it and later enforce it.
Last
week
we
kind
of
mounted
a
file
system
and
injected
the
ssh
keys
and
like
authorized
ourselves,
and
can
we
like
practice
this
now
and
say
hey
we
want
to
do
this
again.
So
now
we
have
the
policy
which
prevents
us.
B: Yes, sure. So then again: I just wanted to show with the policies that it is YAML-based and you don't have to learn a new language. If you use the Open Policy Agent, you have to learn this Rego language, it's called. And, as we also discussed last week, OPA has much broader use: you can use OPA maybe everywhere, you can use it in your CI, you can...
B
But
if
you
want
us,
your
clusters
are
more
secure
with
one
click
and
don't
want
to
learn
any
language
write
some
yammer
thing
then
cabriano
definitely
do
good
and
it's
also
really
extendable.
So
I
will
like
it
for
now
what
we
can
do
now.
Last
week
I
shared
this
repository
we
edit
again
here.
So
we
have
this
bad
pods
repository
on
github
yeah,
our
shared
oscillator
there.
You
have
some
examples
and
what
we
again
had
is
the
not
the
policies
we
will
want
to
get
my
policies
this
so
first
report.
B
Oh
no,
sorry,
wrong.
Cluster
policies-
and
here
we
see
house
namespaces,
is
enforced
again,
privilege
escalation,
enforce
privilege,
containers
enforce
namespace
davis
and
fourth,
okay.
So
what
I
can
do
now
is
go
back
to
the
repository
here
and
then
there
are
some
commands.
B
So
you
could
also
with
the
host
ip,
for
example.
You
could
also
take
over
root
rights
on
the
on
the
on
the
host
where
the
container
is
running,
and
we
have
the
same
here
with
privileged
pots
or
privileged
containers.
Privileged
containers,
privilege
mode
is
not
allowed.
So
basically
we
could
not
run
the
pot
now
here
we
have
the
same.
B
For
example,
just
privilege,
a
privileged
exact
port
there's
another
party
which
we
just
try
to
run
here
with
the
same
issue
here
privileged
containers.
It
could
not
run
yeah.
Do
we
have
more?
Do
we
have
still?
We
can
just
run
a
basic,
the
basic
port,
because
we
disallowed
the
the
do.
We
enforce
it
or
audited
sorry
wrong
command.
B
B
B: Okay, and what we now did is enforce that nobody can run a pod with an auto-mounted service account token, which is basically the default. And when we run test-mount with the nginx image, it should basically not run — and it does not run: the auto-mounting of the service account token is not allowed. So you have to create a YAML file now and set automountServiceAccountToken to false, and then basically there you have the security back.
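The fix on the workload side is a single field in the pod spec; a minimal sketch of a test-mount pod that would pass that policy:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-mount
spec:
  automountServiceAccountToken: false  # don't inject the service account token
  containers:
  - name: nginx
    image: nginx
```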
B
What
we
also
have
is
image
registries,
for
example.
B
We
can
also
take
a
look
at
this
manifest
file-
image
registries,
for
example
this
rule
here
this
allows
all
images
which
are
not
from
kubernetes
google
container
registry
or
directly
google
container
registry.
So
this
is
like
a
r,
so
you
could
add
yours
here
whether
or
not
try
to,
for
example,
we
have
to
also
edit
it
and
enforce
it,
because
otherwise
we
will
have
to
wait
a
little
bit
to
see
our
results
because
of
the
15
minutes
thing
we
also
go
here.
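The registry rule sketched above validates the image prefix; roughly like this (registries as discussed in the session — adjust the pattern to your own trusted registries):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registries   # illustrative name
spec:
  validationFailureAction: enforce
  rules:
  - name: validate-registries
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Unknown image registry."
      pattern:
        spec:
          containers:
          # "|" acts as a logical OR of allowed image prefixes
          - image: "k8s.gcr.io/* | gcr.io/*"
```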
B: Install... yes, so basically here we run a pod which is called unsecure-registry and we use nginx, but from the Red Hat Quay registry, quay.io. And again we have two errors here: the validation for the auto-mounted service account token, and the unknown registry which does not match our policy. Yeah. And because — this is also what I showed last time — the reporting of the policies is, let's say, not that good...
B
He
wrote
a
small,
the
small
reporter
policy
reporter
for
graffana
and
loki
operators
and
loki
and
grafana,
and
then
you
have
like
a
better
reporting,
for
example,
in
the
screenshot.
You
can
see
here.
The
policy
check
label
app
with
the
rule
on
the
namespaces.
You
can
have
a
little
bit
more
or
better
overview
of
your
policy
failures,
but
I
did
not
have
the
time
to
install
it
in
the
cluster
this
week
yeah,
but
I
talked
yesterday
with
him
and
he
was
on
version
0.50.
B
Now
he
soon
will
release
the
0.1
final
release
because
he's
satisfied-
and
I
think
it's
also
a
good
tool
if
you
really
use
a
lot
kaivano
in
your
cluster.
B
So
if
you
want
to
do,
maybe
we
can,
if
we
are
later
automate
our
cluster
more
and
implement
monitoring
what
we
she
always
wanted
to
have
auto
cluster
with
monitoring.
We
can
integrate
this
policy
report
also
with
caverno,
because
I
think,
as
port
security
policies
are
deprecated,
you
should
use
policy
agent
or
kyberno,
at
least
in
the
cluster,
to
have
it
secure
yeah.
Basically,
if
you
want
to
see
some
more
on
cavanaugh
and
I
can
show
more
otherwise,
I
think
I
would
be
finished
well.
A
A
Thanks,
I
have,
I
have
crazy
ideas.
Oftentimes.
I
was
wondering
if
we
had
any
other
use
case
from
the
break-in
session
last
week,
which
we
did
not
cover
yet
so
the
ottoman
amount
is
fixed
privilege.
Escalation
is
also
fixed
containers.
We
did
not
touch
yet.
What
else
did
we
do?
Last
week.
B
B
B: So I can't do anything here — I can't even create a normal pod. So we had this: when you have service account mounting, the service account token is basically mounted into the pod. And what you could do — for example, what we said last week — if somebody edited the default service account and gave it more roles for testing purposes, and they forget to revert it, which sometimes happens when you do manual things, then you could use this token to call the API server.
B
So
this
will
prevent
it,
because
we
don't
want
the
default
service
account
mounted.
Then
we
had
the
use
case.
Basically,
the
most
use
case
is
when
you
get
root,
writes
on
the
host
node.
This
is
by
a
post
pit
or
via
privilege,
port
yeah.
This
we
also
have
here
in
the
first
thing,
what
we
had
and
then
good
overview
of
this
key
security
yeah.
So
this
infrastructure
we
handled
plus
that
we
handled
with
the
security
policies,
then
you
have
the
container.
So
this
allow
privileged
users.
B
You
could
also
run
more.
You
can
also
in
the
best
practices
here
default
or,
let's
say,
restricted,
there's
some
more.
You
can
require
run
as
non-root,
so
yeah.
The
pit
of
the
process
running
in
the
container
should
not
be
root.
For
example,
then
deny
privilege
escalation,
restrict
volume
times
types
for
example.
Nobody
should
run
host
mounts,
there's
a
lot
of
things
you
can
restrict,
but
basically
the
the
the
most
concern
is
the
privileged
bots.
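The restricted-profile settings just listed map to a pod securityContext roughly like this (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-example   # illustrative
spec:
  containers:
  - name: app
    image: nginx
    securityContext:
      runAsNonRoot: true                # process must not run as UID 0
      allowPrivilegeEscalation: false   # deny privilege escalation
      privileged: false
  volumes: []                           # in particular, no hostPath mounts
```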
B
What
you
also
want
to
do
for
sure
is
don't
run
vulnerable
containers
on
your
cluster.
Sometimes
you
cannot
prevent
it,
but
you
definitely
want
to
implement
some
vulnerability
scanning
of
the
containers
you
run,
but
this
can
be
done
easily
in
the
ci,
with
trivia
trivia,
what
we
discussed.
So
it's
also.
Maybe
we
need
a
session
and
then
yeah.
We
have
the
code
security,
which
is
also
not
part
of
our
showcases.
So
I
think
we
did
a
really
good
progress
with
our
cluster.
B
If
somebody
else
has
something
in
mind,
you
please
share
it,
so
you
could
for
sure
network
policies,
but
it's
like
a
not
advanced,
but
it's
a
topic
where
you
have
to
really
know
what's
in
your
cluster
and
which
parts
connecting
from
where
so,
you
could
basically
restrict
pot
from
namespace
one
to
connect
to
the
other
port
and
netspace2.
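The namespace-to-namespace restriction mentioned here is what a NetworkPolicy expresses; a minimal sketch that only allows ingress from pods in the same namespace (names and selectors are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only   # illustrative name
  namespace: namespace-one
spec:
  podSelector: {}       # applies to every pod in namespace-one
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}   # only pods from this namespace; others are denied
```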
A
So
this
is
a
crazy
thought.
Is
there
a
way
to
measure
runtime
data
so
like
network
traffic
or
something
else,
cpu
usage
and
use
that
data
to
define
a
policy
so
like
receiving
metrics
from
promisius
or
the
alert
manager
actually
triggering
a
threshold?
And
then
the
policy
kicks
in
and
prevents
something
bad
from
happening.
B
For
sure
you
can
you
can
do
metrics
yeah,
so
the
default
metrics
in
kubernetes,
if
I'm
not
wrong,
are
cpu
and
ram.
So
you
don't
have
more
default
metrics.
So
I
need
to
program
but
easily.
You
can
do
as
we
did
last
week,
but
you
have
to
set
it
up
more
secure.
You
can
use
a
rp
connection
from
a
set
of
server
or
like
a
python
program.
I
don't
know
which
calls
the
rp
server
and
patches
the
deployment
patches
policies
create
some
things.
B
B
C
There
are
products
out
for
the
case,
so
if
you
want
to
do
that,
so
they
are
also
laying
on
different
network
layers
to
do
on
reducing
the
traffic
and
see
auto
anonymities
in
your
network.
C
So
I
know
at
least
one
to
nova
neu
vector
they're,
using
it
they're
using
also
more
kernel
technology
technology,
so
that
they're
a
little
bit
independent
from
the
stuff
if
you're,
using
velcro,
that's
based
mostly
on
ebpf
or
using
other
tools
like
across
security,
has
also
some
toon
surprise.
But
I
don't
know
it,
but
that's
for
securing
network
traffic
on
demand
yeah
so
that
you
automatically
can't
see-
or
it's
also
possible
with
serium.
You
can
also
do
that.
B
It
often
depends
where
your
cluster
is
running
and
so
on.
So
are
you
how
your
setup
should
be
if
it's
a
cloud
managed
cluster?
If
it's
google
autopilot,
if
it's
cube,
adm
or
rancher,
you
have
to
see
what
you
are
running
and
then
check
the
solutions.
I
think
I
mean,
but
we
are
on
a
good
good
status.
Now
you
could
definitely
do
more.
Always
you
can
always
do
more
for
security.
You
could
harden
your
notes
like
they
proposed
to
you:
restrict
api
api
access,
restrict
network
access
to
nodes.
So
this
is
really
good.
B
Basically
a
basic
security
guideline
here
and
then,
basically,
you
kicks
in
the
admission
controllers.
You
can
program
your
own
admission
controller.
You
can
validate,
you
can
generate
requests,
so
everything
is
possible.
You
see
the
possibilities
here.
This
is
the
list
denial
resource
quota.
We
did
not
touch
resource
quotas,
so
there's
also
something
yeah.
There's
a
lot
of
things
you
can
do.
What
is
your
report?
There's?
Also
oncology
has
two
good
good
blog
posts.
B
B
A
I
think
I
was
touching
too
many
topics
in
my
thought,
but
I
was
probably
thinking
of
trying
out
cube
state,
metrics
and
touching
with
promisius
and
grafana,
and
also
use
that
for
like
combining
it
later
on.
So
probably
one
of
the
next
things
which
would
be
interesting
would
be
either
monitoring
or
like
diving
more
into
the
security
layer.
A
What
else
automating
things
yep,
I'm
totally,
I'm
totally
up
for
what
you
want
to
do.
So
just
the
first
one
who
says
I'm
preparing
something
for
next
week
wins
as
always.
B: I think, if Niklas has time, we should dig into his OpenID thing, because the authentication and authorization part — we only got the authorization part — is also really interesting for me. So if he has time and can finish off the OpenID thing, that would be cool. If not, I don't know, we can...
B
What
what
max-
and
I
did
so
we
could,
I
can
add,
caverno
policies
to
the
cluster
repository
of
max,
for
example,
I
don't
know
if
he
wants
it
and
then
we
can
automate
the
whole
stuff
with
terraform
and
max
gitlab
ci
code.
From
my
point
of
view,
sorry
nicholas,
I
interrupted
you.
C
Yeah
no
problem
yeah.
We
can
do
at
least
for
up
id
stuff
next
week
so,
but
I
think
it
will
take
only
30
minutes,
mostly
20
under
30,
to
do
that
and
then
probably
I
could
also
give
a
short
introduction
about
how
to
monitor
stuff
on
kubernetes
so
that
we
setting
up
the
stick
like
I
don't.
I
won't
go
with
cubesat
metrics.
C
We
will
see
probably
using
previous
adapter
there,
I'm
probably
deploying
the
promiscuous
operator.
It's
not
quite
pretty
much.
This
tube
stake,
something
like
that.
I
don't
know
by
my
hand
and
then
we
can
see
at
least
some
metrics
on
the
castle
that
we
created
and
probably
then
we
can
combine
a
little
bit
more
in
terms
of
that.
We
probably
then
also
integrate
tv,
no,
then
or
having
a
more
policy
framework
for
that.
C
So
probably
I
can
go
into
a
little
similar
topic
so
because
I
think
was
it
juvenile
or
was
it
another
tool?
Let
me
check
this
is
a
policy
agent?
No,
I'm.
B
A
A: Oh — so if we want to keep security and OpenID and everything around it, wrapping it up next week would be a cool thing, because for monitoring I think we will need at least one hour, and if it's in two weeks' time I'm probably more relaxed with preparing something, and we could take the whole hour and dive into monitoring and observability and do some more live hacking sessions. Next week we can, if you're up to it, do OpenID and...
C
Op
id
and
user
management
a
little
bit
more
advanced
case,
so
probably
I
can
probably
shortly
share
my
screen.
Give
me
one
second.
C
Link
if
philippians.
C
C
B
C
C
C: Yeah — probably then we have users, but then we also have a little bit more of a problem with account management: which user can go where, and so on. And here's a small tool for when you want to manage multiple users on your clusters, and also — what is more interesting — if you look at accounting, for example: you can create templates or spaces. They have an abstraction called a space, where you can dynamically create pre-templated namespaces for your apps and so on.
C
So
this
is
a
more
high-level
management
solution
for
really
having
a
multi-tenancy
in
your
cluster.
A
I
like
that,
I
buy
it
when
it
where
need
to.
I
put
the
russia
kicks
in.
C
A: Okay then, let's wrap up and say: we're going to look into OpenID and kiosk for multi-tenancy management next week, and in two weeks' time we do a stop on monitoring — maybe debugging, monitoring, breaking something again. Yep. And I would love to say thanks to Philip for preparing today's session and making it really insightful. I learned a lot today, I still need to practice it, and I encourage everyone listening or watching now to just go ahead and try this out.
A: We will share the blog post later on, yeah. Thanks for watching, and see you next week on YouTube. Bye!