From YouTube: sig-auth bi-weekly meeting 20210210
A: So I want to make sure that, well, "make sure" is maybe a strong word, but we've talked about this topic a lot and we have concrete proposals now. That's good; PSP is marked as deprecated.

A: That's good, but I'd like to help us make some real progress and come to some conclusions. I'd like to make the assertion that we don't all agree on the problem we're trying to solve right now, and I don't think we can agree on a solution if we don't agree on the problem we're trying to solve. So that's where I would like to start.
B: So, from a "just having it defaulted to on" standpoint, either one of these can be defanged relatively easily. The container boundary policy proposal: if you simply don't have any bindings, then it doesn't enforce anything, whether it's turned on or not.
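The "no binding, no enforcement" default that B describes is easy to picture in code. Below is a minimal sketch assuming a hypothetical binding shape (these type and field names are illustrative, not taken from any KEP): the check looks up a binding for the pod's namespace and simply allows when none exists, so turning the plugin on in a cluster with zero bindings changes nothing.

```go
// Sketch of bind-to-enforce semantics: no binding for a namespace means
// no policy is applied there, even with the plugin enabled.
package main

import "fmt"

// Binding ties a named policy to a set of namespaces (illustrative shape).
type Binding struct {
	Policy     string
	Namespaces []string
}

// policyFor returns the policy bound to a namespace, or ok=false when no
// binding selects it, in which case the request is simply allowed.
func policyFor(ns string, bindings []Binding) (string, bool) {
	for _, b := range bindings {
		for _, n := range b.Namespaces {
			if n == ns {
				return b.Policy, true
			}
		}
	}
	return "", false
}

func main() {
	bindings := []Binding{{Policy: "baseline", Namespaces: []string{"team-a"}}}
	for _, ns := range []string{"team-a", "kube-system"} {
		if p, ok := policyFor(ns, bindings); ok {
			fmt.Printf("namespace %s: enforce policy %q\n", ns, p)
		} else {
			fmt.Printf("namespace %s: no binding, allow\n", ns)
		}
	}
}
```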
A: Let me briefly pause. Is anyone willing to volunteer to take notes?

B: So, to the point that you want enforcement from one policy engine on some namespaces and from a different policy engine on other namespaces: bind a policy that allows everything into the namespaces handled by engine B, so that they're only affected by the policies of engine A. Does that suffice for what you want, or do you want it to be even more disjoint than that?
C: So the namespace boundary is one of two dimensions, right? The other dimension that exists today, and that other people enforce on, is the user who is actually making the request. So the namespace dimension is half. I would actually suggest that something like a namespace label would be more consistent with how webhook admission is done today, but the dimension of which user is requesting a pod also seems to be one that we would need to take into account.
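C's suggestion leans on a mechanism webhook admission already has: a webhook can be scoped to namespaces by label via namespaceSelector. A sketch using the standard k8s.io/api types follows; the label key pod-policy.example.com/enforce and the webhook name are invented for illustration, and the ClientConfig a real configuration would need is omitted.

```go
// Sketch: scoping admission to namespaces by label, the way webhook
// admission already works today.
package main

import (
	"fmt"

	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	sideEffects := admissionv1.SideEffectClassNone
	webhook := admissionv1.ValidatingWebhook{
		Name: "pod-policy.example.com",
		// Only namespaces carrying this label are subject to the webhook;
		// unlabeled namespaces are untouched.
		NamespaceSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"pod-policy.example.com/enforce": "true"},
		},
		Rules: []admissionv1.RuleWithOperations{{
			Operations: []admissionv1.OperationType{admissionv1.Create, admissionv1.Update},
			Rule: admissionv1.Rule{
				APIGroups:   []string{""},
				APIVersions: []string{"v1"},
				Resources:   []string{"pods"},
			},
		}},
		SideEffects:             &sideEffects,
		AdmissionReviewVersions: []string{"v1"},
	}
	fmt.Printf("webhook %q applies only to namespaces labeled %v\n",
		webhook.Name, webhook.NamespaceSelector.MatchLabels)
}
```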
A: I actually consider the enforcement of policy based on the requesting user to be extremely problematic. That's a bug.
C: So, one, that isn't universally held, and two, this isn't asking for enforcement by user. This is asking for a switch on whether the admission plugin operates at all based on the user.

A: That is unfortunate.
C: Okay, I could see it that way, but yes, I think that is actually something valuable, right? It allows someone to create pods in a namespace that are quota'd to that particular namespace, that cannot be exec'd into, that cannot be modified, and that come from a trusted source, so that you can charge a particular namespace but create a controlled pod that a regular user could not.
A: Slicing by user, if we had better... let me rephrase that. If we had delegation mechanisms where a controller could carry through the identity of the user that originated the object, through the various levels of intermediate objects, or delegation mechanisms that would say "I can let you have permission to do this as me, or on my behalf, scoped down", if we had those things, then user enforcement might be more reasonable. But every person I see who encounters the feature in pod security policy, which allows because of the user, or bypasses because of the user in your terms, is confused by it and thinks they've discovered a security hole.
A: This is why I wanted to start by talking about goals and requirements, though. Sorry, I don't necessarily think we should say that some sort of delegation mechanism, or propagation of properties down to requests, is something we should rule out at this stage. If that's actually what we need to do to really solve the problem we're trying to solve here, then I think we should consider it.
B: By far the most common complaint that folks have had about either one of these proposals is that there's no way to mix privilege levels within a namespace, and my response to that has always been: yes, because in Kubernetes there is no practical, strong way to mix privilege levels within a namespace.
C: But the power that the pod wants to have is normally elevated compared to what a regular user has. So you want a regular user to be controlled in, say, the number of pods they can create, for the same reason you would put quota in a namespace that had a job, for instance, and you would quota pods. But you have this controller, which is creating very carefully constructed pods directly, like any normal workload controller would do, and it has a need to create pods that a service account does not have sufficient privileges to create.
C: Rather than trying to say "I want to have all the power of these APIs based on the user", I'm not looking for that. I'm looking for a way to coexist, and to say: you know what, this user isn't bound at all if he requests it, maybe via an annotation on the pod, so he has an option. I'm not asking for enforcement, or dual consideration, or trying to make this API honor multiple actors, but for a way to have multiple possible engines run, with maybe two different sets of capabilities.
F: A question about this notion of user enforcement, where there's a special user that is allowed to perform this action: that's validating whether state is okay after the fact, right? Because at the boundary you have the requesting user info.

F: But if you're looking at the state of the cluster, and you're just looking at the pods that exist, is there some way to verify that every running pod was created by a user who had the appropriate privileges?
A: Largely, and the only reason this is even close to possible today is that mutation of existing pods is very rare, because very few fields can be changed, pretty much just the image, and that's sort of considered an anti-pattern. So if something updates the pod, then this policy could check again that the current user trying to update the pod was allowed to do so. That's what PSP does.
A: I think Max's point, or at least my interpretation of it, correct me if I'm wrong, Max, is that it's useful to be able to say, for example for PCI compliance: we have some policies in place.

A: We need to be able to prove to an auditor that these policies are being enforced as we expect them to be enforced across the cluster, and if I don't have a clear way of saying "yes, this privileged pod was created by a privileged user", and an easy way to validate that in the current cluster, I think it's harder to do that. Of course you can use audit logs and other methods.
C: But even without that, in a case like this, we wouldn't be asking any auditing tool to do that. What it would come down to is a list of "hey, these pods all look suspicious", and then a manual check of: okay, is the image one we expect from this build? Does it match exactly the shape of this build pod? Yes, it does; okay, you're fine. But we wouldn't expect kube to have a policy that lets us select "this particular image, these particular arguments, with this combination of persistent volumes, yes, this is safe". That's excessive to try to enforce in any reasonable way.
A: Just to time-box the goals discussion and maybe get to evaluation: I think David's point is useful. I'm not sure I agree with the user dimension, but that's a useful perspective. I would like to very quickly throw a few other goals out. I think that whatever we come up with should be validating only. I think changing pods in order to make them comply with a policy is very difficult to reason about and has not generally worked.
H: I think it's really important to have things in-tree, partly because if it is the case on one specific commercial distribution that it is secure, that does not mean that it is secure on every other distribution for everybody else, and I think it's really important to keep that in mind.
A: There is some fuzziness there, though. There are certain fields where we can say kind of unilaterally: this is root on the cluster if you have it, or root on the node at least. There are some other ones where it's a significant, sorry, a weakening of the isolation boundary and an increase of the attack surface, but where we would still consider it a vulnerability if you could escape a container with just that permission. And then there are some that are even a step further than that, where the field is only further scoping down the attack surface and limiting permissions.
I: Yeah, it'd be good to design with, like, maybe a place for Windows to live in this new system, so we're not running into the same issues as in the past, where we had to fit Windows concepts into these pod security policies, but not necessarily limiting them to Windows, or not enforcing them for Windows.
A: One of the things that I'm acutely aware of, having had to deal with the evolution of PSP over the past several years, is that compatibility of policy enforcement across releases is really hard. Especially if, down the road, we figure out the Windows permission model and realize that by default, yeah, you can escalate to root on the node on a Windows node. Whatever policy we come up with here, whether it's the simple one or the baseline one or the PSP-like one, we would like to make it so that if you had applied a baseline policy to all your namespaces, it now keeps you from getting root on Windows nodes.
A: That would be a breaking change, and it would put us in a similar place to where PSP is today, where the best-practice policies are the ones from four years ago and you would actually want to tighten the screws down a lot more now. So, okay, if the permission model is not well defined right now, then there's not really a lot we can do to lock it down right now. But at the same time, saying we would make it better in the future, I hear that, and I think we're going to break people: they're going to upgrade, they're going to break, and they're going to come to us and be mad. So this being safe to enable by default is a good goal, and I'm wondering how we accomplish that goal over time for new things.
B: So then, within a particular distribution, they can decide that they're going to make, say, kubeadm clusters be secured by default by having a default binding of a baseline policy, and maybe change that in a future version. And so it's a way to soften that breaking change by making it so that it is breaking but opt-in-able: if you are doing just in-place cluster upgrades, you should be safe, but if you want the new hardening things, you'll need to take changes to get them. And if you're installing new clusters on newer versions, I hope you're doing validation before turning them loose, and you'd catch it as part of that validation.
C: I think I would be very surprised if I were a user, and I had selected a green policy, and then I later upgraded and some new field wasn't being considered by whatever stock resource I had, and suddenly a new pod got created and enforcement on the new field didn't happen, so my green policy didn't actually protect me anymore. And especially since these sorts of things are usually additive, I would actually, as a cluster admin, have to try to keep up with a newly named policy each time and create the bindings.
A: Oh, I think Windows is a good example for that. If, in the future, the Windows permission model is clarified, and we realize that, say, if you create a Windows pod today it is effectively privileged, I'm hand-waving, so maybe that's not the case, but say we realize in the future that the defaults of today are effectively insecure: would we not want to improve the baseline policy to exclude that case? Yeah, I would advocate changing the defaults in that case, and continuing to track the defaults with the policy, and saying that this is actually a case where our defaults are broken. I know that's proved challenging in the past, but if our defaults are equal to a privileged pod, then I think we can make that happen.
A: I think it's only when you want to place further restrictions on a new field, or change the restrictions on existing fields, that you need a version bump. That's exactly the case I was just describing, where we realize the defaults are bad, or there's a new field that says "actually, don't be privileged" on Windows or whatever, I don't want to pick on Linux. And so an existing policy that wasn't enforcing on that field would, according to compatibility policies, keep allowing things it previously allowed in. I was actually wondering, in terms of API surface: I really like the simplicity of the pod security standards, saying we've thought about all the fields, we've categorized them this way, and we have red, yellow, and green policies.
A: Those might change over time. I think having a way to capture "in 1.15 this is what they were, in 1.16 this is what they were, in 1.17 this is what they were", and then letting people select red, yellow, or green pinned to a version, would work. Someone can say "I want the green for 1.15, because I don't trust you guys not to break me on upgrades; I want to do my own validation, so I'm going to be 1.15 green, and I'll do my own validation to figure out when I want to move to 1.16 green." Or they can float and say "I want to be green", and that means green latest: whatever green is on the version I'm on. To me, that's a much easier thing to communicate than taking our documentation on all the fields in the pod spec, making that an API, and giving control over all of those things to end users. And this goes back to goals.
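A sketch of that pinning scheme, with placeholder check names rather than the real pod security standards: each release's snapshot of "green" is kept, a consumer can pin to one, and an empty pin floats to whatever the current release ships.

```go
// Sketch of version-pinned policy levels: "green@1.15" stays stable across
// upgrades, while an unpinned "green" picks up each release's additions.
package main

import "fmt"

// greenChecks maps a release to that release's snapshot of the green
// (most restricted) checks. Contents are placeholders.
var greenChecks = map[string][]string{
	"1.15": {"no-privileged", "no-host-namespaces"},
	"1.16": {"no-privileged", "no-host-namespaces", "no-host-ports"},
	"1.17": {"no-privileged", "no-host-namespaces", "no-host-ports", "restricted-seccomp"},
}

const latest = "1.17"

// resolve returns the checks for a pinned version, or for the current
// release when the pin is empty (the "float on latest" case).
func resolve(version string) []string {
	if version == "" {
		version = latest
	}
	return greenChecks[version]
}

func main() {
	fmt.Println("green@1.15:", resolve("1.15")) // stable across upgrades
	fmt.Println("green (latest):", resolve("")) // picks up new checks
}
```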
B: Then, if I understand correctly, that would be essentially the same as the PSP++ type of proposal, except that instead of shipping user-modifiable policy definition objects that got versioned, and that they could choose to use or not, it would be hard-coded, and they could choose to use it or not.
A: I don't think it does, if it's a library, if it's "here are the policies, here's what they mean". It's a library that can be reused, but you are bounded to this set of policies, as opposed to us exposing an API that is powerful enough to express these policies but also powerful enough to express more. That would accommodate people who wanted to write policy on dimensions that we didn't consider, or to express conditional policies: if the image is in this specifically allowed set, then it's okay for it to be a privileged pod, or whatever; come up with as complex a condition as you want.
C: This one in particular is one that we know has existed. It has clear use cases; we both helped develop those use cases. The look on Jordan's face, isn't this great? I mean, he helped me build it. It has active users, and it is one that we have recognized for other similar boundaries, like priority and fairness, where who you are matters, and so does what you're trying to touch.
C: It very much depends on how you try to describe the boundary; it doesn't have to be hard to understand. It could be as simple as: when you configure the admission plugin, give me a list of users. Never something that hits by default, nothing based on a label, right? There are many ways you could configure such a thing.
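The switch C is describing could be as small as this. A hypothetical config shape, not a real flag or API: requests from listed users skip policy evaluation entirely, and an empty list exempts no one, so nothing hits by default.

```go
// Sketch of an exempt-users switch on the admission plugin: listed users
// are not bound by the plugin at all; everyone else is evaluated normally.
package main

import "fmt"

// Config is an illustrative plugin configuration shape.
type Config struct {
	ExemptUsers []string // e.g. trusted controller service accounts
}

func exempt(cfg Config, username string) bool {
	for _, u := range cfg.ExemptUsers {
		if u == username {
			return true
		}
	}
	return false
}

func main() {
	cfg := Config{ExemptUsers: []string{"system:serviceaccount:cnf-operators:deployer"}}
	for _, user := range []string{
		"system:serviceaccount:cnf-operators:deployer", // bypasses the plugin
		"alice",                                        // evaluated normally
	} {
		if exempt(cfg, user) {
			fmt.Printf("%s: not bound, plugin does not operate\n", user)
		} else {
			fmt.Printf("%s: policy evaluated as usual\n", user)
		}
	}
}
```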
E: Yeah, I just, I haven't puzzled it all the way through in my head, but I'm thinking about scenarios that I know of. We have a customer environment where, by default, they want to use the restricted SCC, the most restricted out-of-the-box SCC that OpenShift offers, and they have had to create some three custom SCCs for specific use cases.
E: This is particular to networking functions and features, because in the networking space the CNI APIs don't give them everything they need yet, and they absolutely want to manage the application of these privileges based on: where is the CNF, who's deploying the CNF, what privileges does that user have on the cluster, what namespace are they running in. So I think it's very similar to what David is teeing up. It's just a very concrete scenario where I'm trying, in my head, to figure out: okay, if we're not doing all of that with the equivalent of PSPs, is OPA Gatekeeper a sufficient alternative, and how does it coexist? Because they want the most restricted by default, and they only want to allow privilege in very specific scenarios in specific namespaces, and sometimes those coexist with less privileged pods in the same namespace.
A: Yeah, I think so, and I think that we don't have sufficient controls in Kubernetes, I would say, to effectively have exceptional pods at a sub-namespace level without further configuration of policy. So, for instance, if you are saying a user can create these privileged pods, now you also need to worry about which users can exec into those pods, which users can create ephemeral containers on those pods, or update the images on those pods, and all the other pieces that come with that. So you need that additional security enforcement anyway, and at that point you're talking about adding on a new kind of third-party system to manage it, and at that point I would suggest that the third-party system should also just handle the full suite of things. And we definitely need good documentation of the policies, here are the details, so you can port them to whatever system you want.
C: There's a choice between saying "you just can't use this tool" and saying: you know what, we know what dimensions exist, we know what the likely ones are, and if you go through and punch this user through and allow him to do whatever, you're responsible for figuring out whether an exec can be done, whether an attach can be done, whether you're allowed to port-forward or modify this pod. It becomes your responsibility, but allowing that dimension to exist seems pretty valuable. To me, I mean, obviously I'm biased because I use it, but I'm looking at it saying: I have the ability, I could go through and patch the admission plugin for myself, right? It would be a ten-line patch and I'd be done with it, but that doesn't help everyone else who doesn't run a distro and can't just patch kube.
A: Yes, thank you. It needs to be composable, I would say, or to coexist; maybe the coexistence part we don't agree on, but there needs to be a way to at least bypass this, to flip it off at some level, even if that's cluster-wide or namespace-wide, to say "I'm going to deal with this with another system". And then I think we get into, I don't want to get into the nuances of the wording, but I would call these other things requirements. So that's the goal, that's the problem we're trying to solve, and then there are requirements on what that solution looks like: it needs to be able to evolve over time.

A: I would also split that into two pieces: we need to have a clear way to maintain it and improve it when needed, and then consumers need to have a clear way to have their compatibility guarantees met, or to opt in to the improved versions.
H: I think, as this conversation has been going, a lot of us have the same goals, and we're in more agreement than has always been entirely reflected in how we all describe this conversation. If we know that we are working toward the same north star, the same set of goals, then after that it's implementation details, and that, I think, is a very different conversation, and a good one.
A: Then there's "safe to enable in new and upgraded clusters"; I don't know if it's a goal or a requirement, you'll have to tell me, Tim. But I think that implies the ability to test a policy and see if it's going to break existing stuff before you enable it, though we might want to make that explicit.
A: And even if it's on by default, that's not "always on", so you still need to be able to go from the off state to the on state relatively easily. I think dry run is one approach to that.
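Dry run as a migration path can be sketched like this: the same evaluation runs in both modes, but in dry-run mode a violation is only reported, never denied, so an admin can watch for would-be breakage before flipping enforcement on. The violation check is a placeholder for a real policy.

```go
// Sketch of a dry-run path for moving from off to on: identical policy
// evaluation, but violations are logged instead of denied in dry-run mode.
package main

import "fmt"

type Pod struct {
	Name       string
	Privileged bool
}

// evaluate returns a violation message, or "" if the pod conforms.
func evaluate(p Pod) string {
	if p.Privileged {
		return "privileged container not allowed"
	}
	return ""
}

// admit enforces in normal mode but only reports in dry-run mode.
func admit(p Pod, dryRun bool) bool {
	if v := evaluate(p); v != "" {
		if dryRun {
			fmt.Printf("dry-run: pod %s would be denied: %s\n", p.Name, v)
			return true // allowed anyway
		}
		fmt.Printf("denied: pod %s: %s\n", p.Name, v)
		return false
	}
	return true
}

func main() {
	admit(Pod{Name: "legacy-agent", Privileged: true}, true)  // audit first
	admit(Pod{Name: "legacy-agent", Privileged: true}, false) // then enforce
}
```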
A: We haven't talked about non-goals at all. I don't know how deep we want to go on that, but something that I know a lot of the third-party controllers are working on is integration into CI systems, so I would say it's worth explicitly stating that we're not building a tool that can run outside of Kubernetes, unless we think we should, in which case we can add that.
A: Another problem that was called out before with pod security policy is the sort of unbounded nature of the API: it's not always clear whether a field or something should be added to the policy. This kind of gets back to what I was saying earlier about the fuzziness of the goal of preventing privilege escalation.
A: I know we had asked people to think about the philosophy of pod security policy; I can give my 20-second version of it. I thought about the dimensions of confidentiality, integrity, and availability, and I tend to separate out availability and say those are either just bugs that should be fixed, or they're quota issues, or they're scheduling policy.
A: Like, "this is a DoS risk". Confidentiality and integrity, I think, do apply: host access, volume access, escalated capabilities or privileges, sysctls, runAsUser. Those are the types of things that I was imagining focusing this on.
A: Sorry, what about network-level stuff, host ports? Being able to consume host ports seemed more like a resource on the nodes. The two dimensions are host access, so at the host level I could see it from a host perspective, but not from a resource-consumption perspective. So a yes-or-no could maybe belong in this, but not "only these ones, and only this number". Does that make sense?
A: I put together a spreadsheet, I'm not sure if folks have had a chance to look at it, where I was trying to think through these questions for every field in the pod spec.
C: Yeah, when I was thinking about it, I came up with: I wanted to have five or fewer policies. That was very effective in OpenShift, when we shipped five and said "choose from these".
C: I also like evolution over time where the intent is maintained, as opposed to keeping precisely the same behavior, so that things that should have been enforced but weren't get fixed. Intent-based evolution was very important, and we have generally followed that. And then being able to handle, I guess as a litmus test for that: how would you handle runtime class as it comes in? Because that has very clear implications.
C: If you use Kata Containers or gVisor, or if you have a Windows container runtime, that has pretty significant implications for how you build an API. And then coexistence: how do you turn this on in a cluster, and how do you turn it off in a cluster, in a controlled way where you can sort of migrate over time instead of a big bang? So that was what I came up with for my list.
K: I have a small question; I'm from Aqua Security, and I wasn't on the other calls. In Aqua we've implemented an admission controller using Rego, and recently we open-sourced the Rego library for the pod security standards. I'm wondering whether you plan to use Rego for the future PSP++, like you said, because that would allow users to run the same checks in CI/CD and also use the same policies in Kubernetes.
A: We haven't talked about building a Rego engine or OPA or anything like that into Kubernetes. I think that's probably off the table; it would be a pretty significant increase in complexity.
E: Really just a clarifying question, actually, Tim: when you were talking about host ports, the ability to ensure that there's no access to host ports is a big win with the majority of the OpenShift customers. I wasn't quite sure what you were trying to say when you mentioned host ports; were you saying we shouldn't be concerned about them? I wasn't clear.
A: It's not clear to me that blocking host ports prevents a privilege escalation, a confidentiality breach, or an integrity breach. I see it as, there are interactions with network policy, and so my question was around where we see the overlap of this with network policy.
A: I want to be respectful of everyone's time, so thank you, everyone; I found this super helpful. As far as next steps: I would like to write up and summarize what we talked about in terms of goals, requirements, and non-goals, what's in and what's not, and maybe we can carry on some of the discussion on that document. Then I'd like to schedule another one of these sessions two weeks out, at the same time, to dig into the implementation side of things. We are at time, so if there are any concerns about that plan, feel free to say so on Slack or the mailing list. All right, thank you.