From YouTube: Kubernetes SIG Auth 2020-03-04
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 2020-03-04
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/preview
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
A
OK, so hi everybody, and thanks for having me in SIG Auth. My name is Cristian Klein. I'm working at Elastisys, a small consultancy company in the cloud-native space, located in northern Sweden, and we're trying to specialise in compliance. I myself have been a researcher, teacher and consultant in cloud computing probably since almost before the term was coined. So I just wanted to go very quickly over what compliance is and why exactly this project is so important for us.
A
So there are very many industries that are under heavy regulations, such as financial technology and medical technology, and in order for these to gain the public trust to work with financial information and to work with personal information, they have to adhere to a certain set of regulations. And this is not just a paper tag that gets stashed into a desk and that's it; on the contrary, they're being audited regularly. And if the auditor determines that the company is compliant, then they can continue operating.
A
Otherwise, if they determine that they're not compliant, then basically they have to close shop, and, well, the developers there need to brush up their CVs. So now, there are various regulations, and there is one of them in particular, when you're talking about defence contracts, that is very much about physical security. So in that kind of regulation you might hear about weird stuff, such as network cables being physically visible and not hidden inside the walls, so that they can at any time be checked for whether they have been tampered with.
A
So it should come as no surprise that they really like to be able to point at the physical object in which all the credentials are being stored, and this is commonly done with a hardware security module. My favourite one is the YubiKey, which is, let's say, a bit more targeted at developers, but you have, of course, also higher-end HSMs; Gemalto or Thales are all kinds of providers for that. And the idea is that once the private keys are uploaded to the HSM, they cannot be extracted. It's usually also protected behind a PIN, so if you don't know the PIN, after three attempts you are locked out. And often they also have certain features, such as the number of signatures for which the private key has been used being counted and time-stamped, and so on.
A
So, all in all, your laptop has no credentials, and it communicates with the HSM using a standard such as PKCS#11. Certainly the person who invented that standard hasn't thought about pronounceability. Anyway, your laptop is then delegating all operations, such as signing, encryption and decryption, to your HSM.
A
Unfortunately, until recently there had been no support for TLS client authentication using HSMs in kubectl; until, of course, today. So let me show you, overall, how this contribution could be made useful. I envision two scenarios, two user stories. Let's call whoever controls the client certificate authority of the cluster the Kubernetes administrator, and there are two scenarios: either that person would create a certificate and a key, which would then be imported onto the HSM. So in this scenario you could do what is often called an off-card backup of your certificate and key.
A
So now I'm going to do what I like to call a semi-live demo. Let me maybe zoom in a little bit so I can see it easier. So the keystrokes are pre-recorded, but the commands that are happening behind the scenes are real, and so is the output of those commands. Just before this demo I started a minikube cluster, and if I, for example, look at what pods are running in this cluster, then, well, it works as usual.
A
However, the problem is, as with many kubectl configurations, that the private key is resting on my laptop, and so now, for example, if you would quickly copy-paste this string here, then you could potentially endanger my minikube cluster and hack into it. And, like I said, there are certain environments in which this is an unacceptable risk, and certain regulations that really want you to put credentials on a specific device, on a specific physical device.
A
So now I'm going to use the contribution that I'm about to discuss. I implemented a custom authentication provider, pkcs11. Let me show you the result in the kubectl configuration, to make it a little bit easier to discuss. So, basically, what I say is that, well, the auth provider is pkcs11. PKCS#11 implies that I need to have some kind of module that translates the signature and encryption/decryption operations into USB commands for the HSM, or into PCI commands for the HSM.
A
This is a module that is very hardware-specific and needs to come with the hardware. I also specify the slot ID: if I have several HSMs, then slot IDs are the term in the standard for specifying which HSM I'm addressing. And the object ID is the certificate that I want to access, in case I have, let's say, several certificates on it. And then, in this case, I put the PIN in the configuration; in future versions I plan on making the user type the PIN at the terminal.
A
So the way this works under the hood is that when the user types kubectl get pods, there is a certain phase during which the transport, the TLS setup, is being done, and here I have added some code in order for the TLS configuration to be slightly different. There I'm calling FindKeyPair against a library called crypto11. So this is a library that implements the PKCS#11 standard in the Go programming language. This in turn gets converted into a FindObjects call against the HSM-specific module.
A
So this is basically a C binding, a C call against the .so. This gets translated into USB commands to the YubiKey, and then I'm coming back up the path. This eventually returns a TLS certificate object, which includes a custom crypto signer, with a crypto.Signer interface that is, again, implemented by the crypto11 library. And then, at some point, the Kubernetes API server asks the client to prove that it indeed controls the client certificate.
A
Then the sign operation is sent to the crypto11 library, to the PKCS#11 .so module, and onwards as USB commands, and then the signature is propagated back. Now, I have pinged people around on Slack already about whether this is OK. So I should probably mention that we were contracted by a company to implement this, and one of the main requirements was that we need to implement it in a way that is upstreamable.
A
So it seems that it was very important for them that our contribution would be accepted in mainline Kubernetes, and not to have a separate fork for this. So I checked initially on Slack what the problems with this architecture could be, and the main issue that was raised was that kubectl (I mean, in this particular implementation, kubectl) would require cgo.
A
One could say this is by definition, because since the .so is a C library and the standard mandates doing C calls, you're basically forcing yourself to compile kubectl with cgo. Now, my understanding is that this is not okay, and so there are two ways of fixing this: either one could insert some kind of TLS certificate proxy
A
between the kubectl process and, to the right of it, a completely different process; or the other possibility would be to implement some kind of PKCS#11 proxy. I haven't found any on the internet, and I feel that implementing a complete one would be quite a lot of work, so I would rather favour the first approach. So now I have some open questions that I would very much appreciate your help in answering.
A
Let me just fly through the questions, and then maybe somebody can moderate the order in which they should be answered. So, first of all: how important is avoiding cgo in kubectl? I have noticed that there are some issues open on GitHub which are, let's say, contesting that position; especially on macOS, my understanding is that kubectl is compiled with cgo
A
in order for some DNS resolution to work properly. Then I was wondering: assuming the answer to that question is yes, it's important, what would be the preferred way of avoiding it? And then also: how generic should the implementation be? I mean, should this interface be generic enough to be consumable by some kind of plugins in the future, or can it be considered an internal API that is specific to something like the pkcs11 authentication plugin? And then, depending on what the answers to these are, I'll proceed accordingly.
B
So I can take a little bit of a stab here. I think it's a mixture of not wanting cgo, but, I think, primarily not wanting something as large as PKCS#11 in cgo. I think the Mac stuff is that the Go compiler for Mac binds against, like, unstable Mac APIs, and so it pretty quickly deprecates Mac support as it goes on; they just dropped support for macOS versions from, like, five years ago.
A
So, at a very high level, this depends a little bit on whether this would be an internal API or a public API. I think I would prefer, in the first iteration, to have it more like a private API, since there would really be just one consumer for it. But what I could imagine is, for example, that you would spawn this process to the right, and then over standard input and standard output it would use something like a multi-line JSON protocol in order to communicate back and forth.
B
I don't know if I understand what you mean by private API. Like, if kubectl is calling some external process in any particular way, that's an API. It doesn't really matter if you can't, like, obviously see it. The exec plugin stuff, for example, is a public API, because there's an implicit contract between the binary you're calling and the kubectl binary, and we have to support it, right? So we call it an API, we version it, and do all the things.
A
So anybody can now write and execute an exec plugin. Whereas I feel, at least, that if there is really only one consumer for this TLS-certificate-proxy-style protocol, then maybe it doesn't make sense to advertise an API that you then document, make stable and so on. Unless, of course, you think that this would be a good idea, in which case that's how we would do it.
B
On the PIN thing: couldn't the PIN part live to the right of the blue line, beside the certificate proxy, right? Like, I mean, I guess it depends on how you communicate with the process, but if you can have, like, I don't know, maybe a socket, so that the other end can detect if the socket is messed with in some way, then after the initial start it could hold on to the PIN.
A
That's actually a good suggestion; I have thought about it. So the process to the right could potentially, yeah, just start once and maybe stay in the background for a while, and kubectl would spawn it only if necessary. This resembles very much the way GPG currently does it: it has its own smartcard daemon that is, you know, kept in the background for a while, amongst other things in order to implement a PIN retention policy. So yeah, that sounds like a really good idea, I think.
D
It also parallels the discussions we've had, well, first of all with KMS plugins on the API server, as well as the external signer on the API server: basically, delegate whatever we can out of Kubernetes core, have us make the fewest decisions and lock-ins, and then let that be implemented by the plugin. Mm-hmm.
A
We weren't quite given things at that high a level of explanation, but my assumption is that the compliance requirement was really very much about physical security: they want to be able to point at where the credentials are being stored, so that you can actually physically inspect whether they have been extracted in one way or another. And then, of course, the way that this requirement has been translated into software is using the PKCS#11 standard. That being said, it's not the only one.
A
I'm not quite sure I have understood your question. So what I wanted to highlight is that in the PKCS#11 standard you have to specify which HSM you're talking to, and you have to specify which certificate you want to use on that HSM. Now, of course, there are several ways of discovering what the right certificate is: you can either address it numerically, or you could, for example, list all the certificates and then look at the subject; that would be another discovery mechanism.
G
When I log into GitHub, I only touch my YubiKey on initial login, and then I get a credential that's stored in my browser that I use to authenticate further requests to GitHub. Additionally, a common pattern is to require periodic reauthentication, so that I have to do that flow maybe once every 24 hours.
G
If you connect with kubectl to a server like Envoy or NGINX, which people are using as a front proxy for the Kubernetes API, they can be configured to support TLS session resumption. I don't think the kube-apiserver supports it today, but many common L7 proxies do support TLS session resumption. Okay.
G
That only helps on subsequent commands, though, right? That is true, but I think the key point is that it's something that is about as easy to steal as a certificate that lives on disk, or a key that lives on disk. So, ideally, if you want the property that you have to touch your YubiKey every time you want to authenticate as your identity, you need to be careful of secrets that carry the same identity, and make sure that those are protected to the same level as your private key.
G
Yeah, no, I'm just brainstorming a little bit. I don't think it's a common thing to have enabled, just because I think most people are not putting L7 proxies in front of their API servers, but it's something to consider. And I think that gets to the point: I think we need a really clear set of security requirements, and we just have no solution here. Mm-hmm.
B
I think my general takeaway from this KEP and this sort of functionality is that there's a certain set of failure modes that it introduces. It's basically the same thing that happened when encryption at rest moved from on-disk configuration to KMS: you went from, you know, on-disk encryption…
B
Sorry
I'm,
just
signing
to
an
external
signing
is
now
basically
the
same
failure
modes
like
you,
went
from
a
basically
unfailing
process
to
a
vague
guarantee
that
this
external
thing
will
keep
working
and
and
I
know.
We've
been
going
back
and
forth
a
lot
with
chemistry,
I
mean
the
nice
thing
is.
This
is
basically
a
brand
new
thing
and
it's
alpha
like
it'll
start
at
a
foot,
probably
what
we
come
up
with
to
solve
problems
all
relatively
applicable
KMS,
the
caching
stuff.
G
To provide, like, a baseline: do we need to make these controllers able to do the stuff that they need to do and give an initial administrator admin permission, or do we want to have maintained, curated roles? I just fundamentally… I guess I think there needs to be, like, a strong reason to make choices for an administrator that are very difficult for them to turn off. Sure.
G
To my question: so we have used it for both types of roles, and we have used it before for role bindings, and we have also tried to walk back some of the stuff we've done with role bindings. For us, and I'm speaking for GKE specifically, we have the same authentication stack as the rest of Google Cloud. Anybody can authenticate; we don't consider that a significant boundary to accessing Google APIs. We tend to use authorization for that. So this is particularly…
B
Not
like
my
question
today
is:
have
you
guys
considered
walking
back
the
anyone
can
not
dedicate
to
my
cluster
I'm,
just
gonna
say
it's
anyone
because
anyone
can
get
a
Gmail
good.
That's
kind
of
the
point
of
do
you
know
like
I
I,
don't
like
I.
Don't
understand
that
reasoning
right
because
that's
like
a
like
it's
just
weird
to
me
just
say
that
anyone
can.
G
Philosophically, I'm not sure that I agree with relying so heavily on authentication for this type of stuff. I think it has benefit: restricting authentication up front is defence in depth. But I would like the initial authorization that we set up for the users to be sane and to be minimal, such that we defer decisions to operators to make choices that suit their needs.
B
Yeah
I,
don't
think
I
really
follow
that,
though,
because
I
get
the
end
of
the
day,
this
kind
of
functionality
is
about
discovery.
Right,
like
discovery
is
supposed
to
be
about.
I
can
build
on
top
right
so
like
if
I'm
gonna
build
like
an
extension
that
allows
you
to
have
like
nice,
like
simple
service
or
service,
or
something
using
service
accounts
right.
B
I mean, I don't really think I particularly care, as a cluster admin, if you can verify service account tokens; like, that seems okay, right? It's public data: you're verifying a piece of private data using a public key, so I put this in the bucket of public data, right? Yes, the issue is it can tell you something about the cluster, right?
C
You've won two time boxes, because I don't think we're going to get this change in for 1.18, and I think it needs more discussion. Also, Jim is going to walk us through items from the policy working group, and so I want to make sure that you have time to discuss some of the issues on the agenda. So is it okay if we move on, and then pick this conversation up either offline or in the next meeting?
E
There we go. Yes, so there are two different things to discuss, both working-group projects. One is related to multi-tenancy. So the first document that Tim is just pulling up is a set of benchmarks. In multi-tenancy, of course, there are a lot of different work streams going on, including trying to define, you know, different constructs for providing multi-tenancy, but one of the topics…
E
One of the questions we get often in the working group is: how do I know if my cluster is properly configured for multi-tenancy? So the intent here was really to start defining a set of tests, validation tests; we could do both behavioural as well as configuration checks, and that's what this is outlining. So we are collecting comments, and we're also starting to implement some of these tests.
E
You know, things like network access across namespaces, or even resource access, and validating what level of multi-tenancy a particular cluster or set of configurations is trying to achieve. So that's this particular effort and work stream within the multi-tenancy working group, and how this will relate…
E
There are other work streams, like the virtual clusters proposal from Alibaba, which actually virtualizes part of the control plane itself, and then there is the hierarchical namespace controller, which Adrian and folks are working on, and which helps manage configurations across namespaces. So all of those are, again, tools and means to implement multi-tenancy. The effort here is measuring and recording some set of validation of multi-tenancy levels. So, please… yeah, I think this was just more about making folks aware of this effort.
E
Yes, so the CIS benchmarks: from our understanding and our usage of them, they seemed more focused towards, of course, making sure the security best practices are followed for configuration, so there are a number of different types of configuration recommendations, right. But they didn't seem to reflect multi-tenancy: even following all the CIS benchmarks, there's no guarantee of multi-tenancy; whether you've achieved it or not is sort of left open.
E
So our focus here was to just, you know, pick the tests which would try to measure multi-tenancy and verify multi-tenancy only, and we didn't want this to be something which is just security best practices. Now, of course, there are things in the CIS benchmark like "okay, you must enable RBAC"; yes, of course, and that would get validated, you know, of course, as…
E
…as we run some of these basic tests across namespaces. So we didn't see it, and maybe, you know, maybe it's worth a closer look and understanding whether there is a clean way of saying that, okay, there is some level of CIS benchmarks that we could require as a minimum starting point; then certainly that would make sense. And one possibility is that there are some tools out there which can be leveraged, or it's possible to write those tests as well, and validate that certain configurations have been applied.
E
We wanted to allow that flexibility, right, to say: if you're configuring your pod as non-root, even if it's manual, we can validate that and report on that. So we weren't necessarily requiring that PSPs have to be set up; it was more, you know, checking that the end result, of course, is that the pod has the correct configuration.
E
So there were nuances like that which we were discussing. We had the same discussion with network policies, because, you know, some network policies are a great layer for segmentation, but in some cases we've seen that teams are now doing layer-7 policies, perhaps at a service mesh or similar. So that was another one where we didn't want to necessarily require that you have to have network policies configured; it was more checking whether there was segmentation across, or isolation of, networks and ingress and egress traffic within these workloads.
E
Okay, yeah, so what we can do is go through and at least maybe identify the CIS benchmarks which would be applicable, and the ones on which we perhaps want to take a slightly different stance. We can even, you know, identify those and explain why maybe there's some more flexibility required than the currently documented CIS benchmark, I guess.
H
I do remember some problems with the CIS benchmark related to it being so strict that it was extremely difficult to administer such a cluster, and I also remember that it was very difficult to get those things changed. You know, I guess in this case we would probably be able to change recommendations, but I don't think… let me just take a look at the CIS benchmark.
E
That would be excellent, thank you; I'll reach out after, and we can do that. Okay, so, real quick, just the other effort that I wanted to highlight. Now, this is with the policy working group, so there, you know, of course, there are again lots of different work streams. One of the things we were discussing is: we're seeing there are more and more different ways of implementing and managing policies.
E
Some,
like
some
tools,
like
you,
know,
OPA,
gatekeeper
and
cavern.
Oh
they're,
doing
admission
control
level
policies,
Falco's
doing
runtime
policies,
there's
k
rail,
which
is
doing
also
some
policies
which
are
mostly
admission
control,
but
also
other
considerations.
So
the
idea
that
we
were
discussing
and
what
seemed
like
everybody
in
the
working
group
agreed
to
was
and
thought
would
be,
a
good
idea.
Moving
forward
is
at
least
there's
a
common
way
to
report
policy
violations,
independent
of
which
tool
is
used
to
do
the
checks,
the
audits
etc.
E
So
that's
what
this
you
know,
cost
CRD
proposal
is:
do
you
have
a
policy
violation
which
different
tools
either
through
adapters
or
natively
tools,
can
support
and
the
benefit
there.
Of
course,
is
the
cluster
admin
or
users
can
see
violations
in
a
consistent
manner,
and
can
you
know
also
do
other
reporting
and
visibility
based
on
that?
E
One other question that had come up is that we don't have a place for some of these types of, you know, proposals or work streams in the policy working group, so I think there was a good issue that we had raised to try and, you know (I'm not sure what the exact process is), get that; and I think what was recommended was that this should be under the SIG, in the kubernetes-sigs repo, like multi-tenancy.
E
One of the things Kyverno supports is the ability to either block, or just do audit, in which case it will generate a violation. So, based on that setting, an admin could then decide whether they want to outright just block a resource, or whether they want to allow it but report a violation. The other use case would be for any existing resources from before that policy was put into place. And then, like you mentioned, the dry run: if I have a cluster, I just want to run a scan and see what violations exist.