From YouTube: Kubernetes SIG Auth 2022-01-19
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 2022-01-19
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/preview
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
A
All right, everyone, welcome to the January 19, 2022 meeting of SIG Auth. Let's kick it off. We have a few items on our agenda.
A
Can you all hear me okay? Yep, awesome. So today we have one announcement, which is 1.24 enhancements: enhancements freeze is 18:00 Pacific time on Thursday, February 3rd, and the PRR freeze is on Thursday, January 27th. So if you have a KEP that you're working on, please make sure you're making progress on it and pinging the folks you need for approval and review.
A
I don't see anything else, so we're going to go straight ahead to discussion topics. First off, there is a KEP for KMS observability.
B
No — it was something about the enhancements freeze: making sure people not only get KEPs merged, but also get things into the tracking spreadsheet.
C
So a few of us have been meeting every week on things that we can enhance for the KMS plugin. Some of the areas we have tried to look at are observability, recoverability, and performance. We've categorized that, and we have a living doc where we are working on it. The first thing we wanted to focus on was observability, and we wanted to keep the changes short, without any breaking changes.
C
So the first thing we're trying to do is add a UID to requests that go from the kube-apiserver to the KMS plugin, because today, if you have to do any kind of correlation, the only way to correlate errors or incoming requests is by looking at logs in the KMS plugin and then matching them against the API server logs based on timestamps.
C
What we are proposing as part of this KEP enhancement is to add a UID field to the proto API, so that for every encrypt/decrypt request we make to the KMS plugin, we generate a unique ID and add it to the API request. We are also planning to implement a wrapper in the API server so that any logs generated for these KMS envelope encryption calls carry the UID, and maybe also additional metadata around the name, namespace, and group/version/resource. That way, if I as a cluster admin want to look at logs, I can take the KMS logs and correlate them with my API server logs. This UID can also be passed from the KMS plugin to the external KMS store, so it can be used for auditing purposes. There were a couple of options that we considered.
C
One was basically using the audit IDs, which we can string along from the user's originating request all the way to the KMS. But, as I think we discussed during the last call, not all the requests made to the KMS plugin originate from a user; some happen during the API server cache warm-up.
C
So there is no way to correlate a user request there. For now, we've decided to just generate a UID in the API server for every request it makes to the KMS plugin, and start with correlation between the API server and the KMS plugin. In the future, we are exploring an option where we use the audit ID annotation and extend it so that this UID can be tied in as well.
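The correlation scheme described above can be sketched as follows. This is a minimal, illustrative Go example — the struct and function names are assumptions, not the actual kube-apiserver or KMS plugin code — showing a per-request UID generated from `crypto/rand` and logged on the API-server side so the same identifier can be matched in the plugin's logs:

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// EncryptRequest stands in for the proto message with the proposed new
// UID field; the real message shape may differ.
type EncryptRequest struct {
	Plaintext []byte
	UID       string
}

// newUID builds a v4-style UUID from Go's crypto/rand source.
func newUID() string {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		panic(err)
	}
	b[6] = (b[6] & 0x0f) | 0x40 // version 4
	b[8] = (b[8] & 0x3f) | 0x80 // RFC 4122 variant
	return fmt.Sprintf("%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:16])
}

// encrypt wraps the outgoing call to the KMS plugin: every request gets
// a fresh UID, and the API-server-side log line carries the same UID
// that the plugin (and the external KMS) would see.
func encrypt(plaintext []byte) EncryptRequest {
	req := EncryptRequest{Plaintext: plaintext, UID: newUID()}
	fmt.Printf("envelope encrypt uid=%s bytes=%d\n", req.UID, len(req.Plaintext))
	return req
}

func main() {
	req := encrypt([]byte("secret"))
	fmt.Println(len(req.UID)) // a v4 UUID string is 36 characters
}
```

A cluster admin could then grep both the API server log and the KMS plugin log for the same `uid=` value instead of matching timestamps.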
D
Just... this API is a beta API, but the encryption configuration is a GA v1 API, right? That's already kind of weird, because the thing that you configure is apparently done, while the thing over here that is being configured is not done. So that's weird. But also, in this particular case — and I commented on the KEP — for this change that we're proposing to make, it's unclear to me how it flows through any graduation criteria or any feature gate.
D
As described, this thing just starts off as GA, as in: we're adding a UID, hope you're fine with that. I don't really know what else we'd do in this case. I think back to the CSR duration KEP, and, you know, because of...
B
Yeah, there are a few reasons we have feature gate progressions. One is, if there are potential issues that might come up, the feature gate gives us a simple way to cut off the problem. The surface area of this proposal is pretty small, but it is adding data to audit output, and it is adding data to outgoing requests.
B
I could imagine a backend that didn't like new fields showing up in the gRPC request. That would be weird, given how gRPC and proto work, but it could happen, right? That's the sort of thing a feature gate would let someone who didn't care about the audit ID functionality turn off, and then come to us and say, hey...
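The escape hatch being described is the standard feature-gate pattern. A minimal sketch — the gate name and request shape here are hypothetical, not what any KEP actually defines:

```go
package main

import "fmt"

// FeatureGates is a pared-down stand-in for Kubernetes feature gates:
// a named boolean an operator can flip without a code change.
type FeatureGates map[string]bool

// KMSRequestUID is an illustrative gate name guarding whether the new
// UID field is populated on outgoing KMS requests.
const KMSRequestUID = "KMSRequestUID"

// buildRequest only includes the uid field when the gate is on, so an
// operator whose backend chokes on unknown fields can turn it off.
func buildRequest(gates FeatureGates, uid string) map[string]string {
	req := map[string]string{"plaintext": "..."}
	if gates[KMSRequestUID] {
		req["uid"] = uid
	}
	return req
}

func main() {
	withGate := buildRequest(FeatureGates{KMSRequestUID: true}, "abc-123")
	withoutGate := buildRequest(FeatureGates{KMSRequestUID: false}, "abc-123")
	fmt.Println(len(withGate), len(withoutGate)) // 2 1
}
```

The design question in the discussion is exactly whether a change this small needs such a gate, given that unknown proto fields are normally ignored by well-behaved receivers.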
B
So if you're trying to follow those conventions to the letter and are having trouble, I'm not surprised. Feel free to point that out and jump into the API review channel, or open issues or pull requests against that doc to add gRPC-specific exemptions or notes. That's fine.
D
At a high level, do folks have a particular approach? Certainly the safe approach is to add some feature gate and use that gate to, I guess, enable whether the UID gets passed through. I don't actually fully know how the proto looks on the wire; I presume that if the field is not set, the binary data actually on the wire does not change at all.
B
We probably don't need to go into that level of detail here. Like I said, the two things this is proposing don't seem like high-risk things, so I don't feel strongly either way. Just remember what's at stake: our ability to react quickly if we release something and there's a problem for someone.
B
I saw you tagged Clayton as the reviewer on that, since he was one of just a couple of original code reviewers on the storage transformation stuff. I would suggest reaching out to him on Slack or something to make sure he's aware of it — it might have gotten lost.
B
Yeah, and separately, it would probably be worth talking with him: it's good that the KMS discussions are happening and that people are interested in improving that area, and it would probably be worth talking with him about getting some more redundancy there.
D
Yeah, I can do that. Other than the graduation things, do folks have any thoughts on the KEP itself?
E
I haven't been following closely, so maybe this has already been answered, but how is the UID being generated? What kind of random number generator are you using underneath?
D
So we would just use the existing UUID package. Within the KEP, we reference it — or rather the KEP's author does; I shouldn't take credit for this KEP, I was just present.
D
It serves a vaguely similar, but not exactly the same, purpose as what we're trying to do here, and, if I remember correctly, it uses Go's crypto random number generator under the hood — which, if anyone cares, does get swapped out if you have the FIPS compile options. So if that's the concern, or if there are other concerns about the randomness — yeah.
E
D
Yeah, yes, that's the function right there, the one on screen right now. Certainly, if there are any concerns with it, we could use something else if there were a need.
E
Okay. I know the Linux random number generator finally got upgraded, I think about a month ago, but anyway — yeah, okay.
D
I
I
was
going
to
admit
so
this
level
of
detail
isn't
in
the
cap,
because
it's
really
just
about
implementation.
I
had
proposed
this
to
a
niche
as
a
as
a
as
a
way
we
could
implement
this.
D
And
then
you
know
you
can
have
tests
and
stuff
around
that,
but
my
thought
there
was
that
over
time,
as
we
add
new
transformers
or
otherwise
tweak
things
or
because
I
I
know
a
while
back
I've
seen
someone
have
a
kept
for
like
a
gzip
transformer
or
something
which
seemed
like
a
perfectly
valid
thing
to
maybe
want
to
do
for
your
lcd
data,
just
kind
of
making
sure
that
it
sort
of
exists.
F
I'd be interested in knowing what kind of code wrapper you needed around the wrapper that was already there. But, in concept, do I object to zipping? No, not really.
F
If
I,
if
I
had
zipping,
would
I
try
to
use
it
as
a
way
to
finally
store
crd
v1
yeah,
I
probably
would
but
beyond
beyond
griefing
jordan,
I
don't.
I
don't
have
plans.
D
Yeah, at a high level, this is just about making sure that, when things go bad, there's enough information in the logs. Logs are hard to assert over time as being useful, so I just try to have this built in a way that's unlikely to drift.
G
So we are updating the KEP correspondingly, because previously it used pod security policy, and now it has a security considerations chapter that basically copies what the pod security standards say about inline CSI volumes: that they should be used for ephemeral volumes.
G
And
risk
inline
csi
drive
drivers
should
use
third
third-party
admissions
and
so
on.
So
we
definitely
plan
to
document
that
we
will
document
our.
We
will
update
our
documentation
for
csi
driver
vendors
to
use
fmr
volumes
for
save
data
and
not
expose
insecure
parameters.
G
We
can
do
that,
but
I
am
100
sure
that
there
will
be
a
csl
driver
that
will
break
that,
because
people
want
people
really
want
to
have
in-line
volumes
in
ports
using
csv
drivers
to
access
persistent
storage.
They
do
it
now
with
nfs
volumes.
They
will
want
to
use
it
in
with
csi
drivers
too.
So,
whatever
we
document
ss
storage,
whatever
we
recommend,
I
am
pretty
sure
people
will
break
it.
B
So I spent some time talking about this with Michelle, also from SIG Storage, and I had actually gotten confused about there being two ways to use ephemeral volumes from within a pod.
B
So
we
had
talked
about
ephemeral
volumes
and
there
actually
is
an
ephemeral
field
inside
a
pod
which
drives
the
same
workflow
as
creating
a
pvc
and
then
getting
a
pv
bound
to
it.
Yes,
and
so.
G
Yes — that's the one we call generic ephemeral volumes: it creates PVs and PVCs. It gets you an emptyDir, basically: you get something empty, or it could be populated via a snapshot or whatever, but the use case is emptyDir on steroids.
B
A volume which could be pulled from a snapshot, right? Yeah. So for persistent storage we're saying you should use PVCs; for ephemeral volumes built on top of persistent drivers, you can use this `ephemeral` field, and it'll provision a PVC and then tear it down. And the inline CSI stuff is focused more on the secret-providing or credential-injecting volume drivers.
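The two mechanisms being contrasted can be sketched with minimal structs. The field split — `ephemeral` versus inline `csi` — mirrors the real pod API shape, but the structs below are simplified stand-ins rather than client-go types, and the driver name is illustrative:

```go
package main

import "fmt"

// Volume sketches the two ways a pod can ask for an ephemeral volume.
type Volume struct {
	Name      string
	Ephemeral *EphemeralSource // generic ephemeral: provisions a PVC, torn down with the pod
	CSI       *CSISource       // inline CSI: parameters go straight to the driver
}

// EphemeralSource: what you get is decided by the storage class, so a
// pod author cannot point it at an arbitrary existing share.
type EphemeralSource struct {
	StorageClassName string
}

// CSISource: volumeAttributes are arbitrary and driver-defined, which
// is exactly the safety question discussed above.
type CSISource struct {
	Driver           string
	VolumeAttributes map[string]string
}

func main() {
	generic := Volume{
		Name:      "scratch",
		Ephemeral: &EphemeralSource{StorageClassName: "fast"},
	}
	inline := Volume{
		Name: "home",
		CSI: &CSISource{
			Driver: "nfs.csi.k8s.io", // illustrative driver name
			VolumeAttributes: map[string]string{
				"server": "nfs.example.com", // the pod author picks the share directly
				"share":  "/home/alice",
			},
		},
	}
	fmt.Println(generic.Ephemeral != nil, inline.CSI.VolumeAttributes["share"])
}
```

The inline form is what lets a pod author name a specific NFS share — and is also why its safety depends entirely on what the driver does with those attributes.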
G
A problem with that is that people use NFS volumes inline in pods today. They won't have that possibility with generic ephemeral volumes backed by PVs and PVCs, because they need to specify the NFS share they want — they want to use a specific NFS mount, basically — and that's not possible with generic ephemeral volumes, because those create PVCs, and you can't say exactly what you want: you get something provisioned for you based on the storage class.
G
The use case is an organization that traditionally uses NFS. Everybody has a home directory with proper ownership and proper access bits, and a person in that organization runs some workloads in Kubernetes and some workloads on their own machines, on their desktops, and all they want is...
B
To me, it makes sense to say we should not have an opinion about this. If you're going to write a thing that is not inherently safe, then you need to put protections around it which understand the context that would make it safe. So in that case, that's the right type of restriction.
G
So we will update our documentation that faces CSI driver vendors — that is, the CSI drivers that implement ephemeral inline volumes.
G
Okay, so we will document that if a driver implements inline volumes, then it should be safe, and that if they want to do something unsafe, then they should provide a third-party admission plugin that enforces things and allows users to apply some policy. That'd be fine.
H
G
No, it's up to the CSI driver — and, okay, we can't enforce it, you know, permanently.
B
Yeah, the parameters that are passed to the driver are arbitrary: if you're using an inline volume in a pod spec, you can specify any parameter you want, and what parameters the CSI driver consumes determines how safe or unsafe it is. You could imagine a CSI driver consuming parameters that it passes blindly into kernel mount calls or something — that could be super disruptive or destructive. Or take the case of the NFS one.
B
One of the parameters it consumes is information about which NFS volume it wants to mount, and so — depending on other things, like UID/GID permissions, SELinux controls, or how the NFS side is configured on the nodes — that might let a pod author access arbitrary NFS data. So it's pretty specific to the driver whether it's safe or not, and, if it's unsafe, what other conditions need to exist to make it safe — whether that's other attributes of the pod, attributes on the namespace, or configuration.
H
I guess I had never thought about Kubernetes trying to control that. It seems like, if I want to write some arbitrary, unsafe software that runs with node credentials, I'll be able to do it no matter what mechanisms exist, and deploy it on my nodes — I'm the cluster administrator. It's not clear how much Kubernetes can do to keep me from shooting myself.
B
I
think
the
proposal
in
the
last
couple
meetings
or
last
couple
months
was
to
have
sort
of
a
binary.
Yes
or
no.
This
namespace
may
access
this
driver
type
of
policy
that
could
be
built
into
kubernetes,
and
I
I
I've
been
kind
of
opposed
to
that,
like
I
think,
there's
reasonable
alternatives
using
pvs
and
pvcs
or
using
the
generic
primordial
volumes,
and
if
people
don't
want
to
do
that,
they
still
want
to
do
inline
stuff,
I
suspect,
even
for
the
nfs
example
like.
B
Would
it
actually
be
acceptable
to
just
deny
use
of
the
nfs
driver
to
all
these
namespaces?
I
would
guess
not.
I
would
guess
you
would
still
want
to
allow
it
conditional
on
the
uid
and
group
id
aspects,
and
so
that
goes
beyond
what
a
built-in
admission
thing
would
do
as
soon
as
you
start
tying
in
some
second
piece
of
information.
F
Yeah, the question is whether a cluster admin always knows in advance what the software they install does. In my experience, they are not always super aware of all the minutiae, and having a way to say, you know, this...
B
I will say, having come into this area sideways via these discussions, I got super confused by the difference between the `ephemeral` field, which drives the PVC flow, and the `csi` field, which does inline volume specs. I know updating the documentation has been mentioned a few times, but it would be super useful to have an "if you want to provide this kind of storage, do this with your driver; if you want to provide that kind of storage, do this instead" type of flow chart, with the caveats and warnings — like, if you want to allow pod authors to do it, provide this kind of driver, but watch out for arbitrary parameters or kernel parameters. Something that doesn't just bury the guidance on the particular page about inline volumes, but actually lays out: as a CSI driver author, you can provide volumes via PVCs, and they can be consumed in these ways, or you can do it inline, and it can be consumed this way, and here are the considerations. That kind of overview, with the pros and cons of each, might be helpful.
G
So maybe, as... like, the recommendation in the pod security standards about inline volumes still holds: Kubernetes will not provide any policy, and we will just update the documentation and provide guidance about the security. Is that correct?
B
That matches my expectation, and I think we already had issues filed for the drivers we knew of that were making use of inline functionality with problematic exposure.
G
One driver, I think, dropped the support properly, but I'm pretty sure that, for the NFS CSI driver, they will either want to keep it there or they will fork it, because it's used. We can definitely document the security consequences in the driver, but I don't think we can just remove it easily.
A
So what's the action item here?
B
I
think,
as
sixth
as
part
of
six
storage
wanting
to
ga
the
inline
field,
like
making
sure
the
the
docs
are
clear
and
call
out
these
implications,
and
I
think
we've
already
engaged
with
the
drivers
that
are
known
there
were
a
couple
drivers
that
were
known
that
were
using
this.
A
Okay
pot
security
plans
for
v1
for
v124,
ga
or
not
g.a,.
B
Yeah,
so
I
tim
put
this
on
here,
I'm
not
sure,
are
you
on
tim.
B
That
means
that
customers
or
your
users
will
need
to
migrate
their
clusters
off
of
psp
in
124
or
by
124,
so
so,
in
other
words
before
they
upgraded
to
125.
B
So
it
would
be
really
nice
if
pod
security
was
ga,
so
they
could
migrate
from
pod
security
policy
to
a
ga
feature.
A
couple
other
considerations
there
yeah
so
as
david
mentions
psp
is
beta.
So
it's
not
the
end
of
the
world.
If
we're
telling
them,
you
have
to
migrate
from
one
beta
thing
to
another.
B
If
pod
security
instead
went
to
ga
in
125,
then
probably
a
lot
of
users
are
not
going
to
actually
do
this
migration
until
125
is
out
just
kind
of
realistically
based
on
people's
upgrade
timelines,
and
so,
if
it
went
to
ga
125,
they
at
least
have
some
assurance
that
the
thing
that
they're
migrating
to
is
not
going
to
change
or
go
away
in
the
future.
B
And
yeah,
so
that's
consideration
and
then
the
other
consideration
is
just
you
know.
Psp
went
to
beta
in
123.,
123
hasn't
been
out
for
that
long.
B
So yeah, I think that's the current situation. Anything you want to add, Jordan? I linked to the tracking project, just calling out some of the outstanding items; there aren't a lot. There was one cleanup/usability item: if you submit pods to a restricted namespace, you'll sometimes get a forbidden error from both the baseline policy and the restricted policy that basically says similar things twice, which is ugly, so it'd be nice to clean that up. And David brought up an issue: our e2e tests should configure the namespaces they create so that a cluster enforcing the restricted policy by default will still pass e2e tests. That got started at the very end of last release and didn't quite make it, so I think that is a good requirement for GA.
B
I
would
want
to
see
all
of
our
ede
tests
explicitly
set
the
pod
security
level.
They
need
to
pass
and
ideally
scope
themselves
to
baseline
or
restrict
it
where
possible.
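Declaring a test namespace's level is done with the standard Pod Security admission namespace labels. The label keys below are the real `pod-security.kubernetes.io` keys; the helper itself is just a sketch of how a test framework might build them:

```go
package main

import "fmt"

// podSecurityLabels returns the namespace labels that pin all three
// Pod Security admission modes to one level, e.g. "baseline" or
// "restricted", so a test declares exactly what it needs to pass.
func podSecurityLabels(level string) map[string]string {
	return map[string]string{
		"pod-security.kubernetes.io/enforce": level,
		"pod-security.kubernetes.io/warn":    level,
		"pod-security.kubernetes.io/audit":   level,
	}
}

func main() {
	// A test that only needs baseline pods would pass "baseline" here,
	// so a cluster defaulting to restricted enforcement doesn't fail it.
	for k, v := range podSecurityLabels("restricted") {
		fmt.Println(k, "=", v)
	}
}
```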
B
At the point where the field goes GA, we can update pod security to only pay attention to OS-specific fields, which is nice — it'll make the restricted policy cleaner. If you're creating Windows pods and you think it's silly to have to specify restricted capabilities and Linux-specific things in your pod spec just to pass the restricted policy, you could say "this is a Windows pod," and pod security could relax and only require you to opt into the restricted Windows-related things.
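The relaxation being described might look roughly like this. This is a sketch only — the check lists are illustrative, not the actual Pod Security standard — but `spec.os.name: windows` is the real field the speaker is referring to:

```go
package main

import "fmt"

// restrictedChecks sketches how the restricted profile could skip
// Linux-only checks once a pod declares spec.os.name. The check names
// here are illustrative.
func restrictedChecks(osName string) []string {
	common := []string{"hostNetwork", "hostPath"}
	linuxOnly := []string{"capabilities", "seccompProfile", "seLinuxOptions", "runAsNonRoot"}
	if osName == "windows" {
		return common // Linux-specific requirements no longer apply
	}
	return append(common, linuxOnly...)
}

func main() {
	fmt.Println(len(restrictedChecks("linux")), len(restrictedChecks("windows"))) // 6 2
}
```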
B
Yeah, all the feedback I've seen has been from people who were writing blog posts about the feature, which was not nothing, but it also wasn't wide user signal. Realistically, I think 1.23 is just now rolling out to sort of managed production environments, so requiring the migration from PSP to pod security in 1.24 to be from one beta thing to another beta thing is not the end of the world.
F
Yeah, I know we are still working through how we handle the migration plan — how we get clusters that did not previously enforce to enforce, what that looks like in terms of auditing and user opt-in, and how it plays with the other admission plugins we currently have. That's all still being worked through, and, you know, it's not that we're necessarily saying you must wait for us.
F
But
you
know
openshift
is
fairly
knowledgeable
about
like
how
this
stuff
worked
have
years
of
experience
and
forcing
security
on
pods
and,
like
we
haven't
gotten
our
feedback
back,
I
haven't
seen
other
people's
feedback
come
back
yet
either
and
so
planning
for
a
ga.
Without
that
feedback
worries
me
yeah
other
than
a
desire
to
encourage
people
to
migrate
from
a
beta
thing
to
a
ga
thing.
D
Could
I
ask
a
maybe
dumb
questions
so
I
know
psp
is
slated
for
removal
in
125
right.
Could
we,
instead
of
removing
it,
disable
it
by
default
or
something
and
just
give
you
a
like
if
the
concern
like
the
concern
I'm
hearing
from
david
is
we
don't
have
enough
feedback
and
our
runway
is
relatively
short
so,
like
we
now
need
to
make
a
decision,
and
I
guess
I'm
asking:
can
we
make
our
runway
a
little
bit
longer.
D
The
the
thing
I'm
trying
to
ask
is:
can
we
can
we
decouple
in
some
way
the
the
the
psp
runway
with
the
psa
runway?
I
guess-
and
I
think
you
know
david
is
suggesting-
would
do
that
which
is
basically
it's
okay,
that
you
have
to
migrate
from
a
beta
thing
to
another
beta
thing,
I
I
I
worry
that
we
ga
something
as
important
as
pot
security
too
soon
and
then
realize
we've
done
something
bad
too
late.
B
I will say that the current discussions in SIG Architecture are around whether we should really be enabling beta things by default in production. Right now that's focused just on REST APIs — new REST API endpoints — but it would not surprise me if the same question were asked about other beta things in the future: should we really be opting all users into beta things in production by default? And so, in that light...
B
If
we
think
it's
good
enough
to
turn
on
by
default,
I
would
expect
a
pretty
crisp
list
of
things
that
would
keep
us
from
going
to
ga
and
if
we
like,
is
it
just
user
reports
that
we're
waiting
on
and,
if
so
in
the
future?
If
we
are
not
enabling
beta
things
by
default,
how
how
are
we
gonna
get
that?
F
I think it is possible to get feedback without having something on by default. If we look at the people who respond to a PRR survey, for instance, 30 percent of them turn on an alpha feature in production — so they can find it if they want to. And we have not been idle: by turning it on, we found the e2e issue, because we were trying it early, pushing things through and seeing what broke.
F
So
I
think
if
this
had
not
been
enabled
by
default,
I
believe
that
our
feedback
would
still
be
coming
about
now
and
and
that
feedback
at
least
the
the
hardest
parts
for
us
are
around
the
migration
of
turning
on
forcing
by
default,
and
I
it
could
be
that
it
is
worse
for
openshift
than
for
anybody
else
right.
F
It
could
also
be
that
it's
it's
hard
to
turn
on
and
maybe
if
we
get
feedback
from
two
three
people
who've
tried
to
turn
this
on
across
a
wide
number
of
clusters,
they'll
be
able
to
say
like
if
it
was
just
this
little
piece
that
was
different.
I'd
be
able
to
do
this
more
easily.
I
don't
know
whether
it
won't,
but.
B
So I guess it seems like there are two aspects there. One is someone just coming to the feature cold: they aren't doing any enforcement in their clusters today and they want to start using this. We haven't gotten a lot of that feedback — I mean, we've done benchmarks and our test coverage is actually excellent, but we haven't gotten user feedback, especially...
F
The current issue we're actually facing is having dual enforcement mechanisms: you have previous enforcement mechanism A and new enforcement mechanism B — which is not pod security, it's something different — and how do you have both of these turned on at the same time and make sure they behave in some sort of coherent way?
B
We should actively seek that out during this cycle, so that if there are changes that would be helpful or need to be made, we could work on them while in beta and try to deliver things in 1.24. Otherwise, we'll be in a position where changes would have to happen in 1.25, and I don't think we'd be ready at that point.
F
I can't speak to all cluster cases — I'm not trying to. The migration, the very first stage — if you start a new cluster, how do you turn this on — was fairly straightforward. Even the stage of "I have a cluster that's already running workloads; how can I enforce what I currently have" is fairly straightforward. The difficulties come in when you have a different enforcement mechanism today and you want to support both enforcement mechanisms tomorrow: how do you reconcile the two? Pod security policy is one example, and I believe there are other secondary enforcement mechanisms out there.
B
Go ahead. For 1.24, the e2e tests, and user feedback on migration and policy coexistence, are the two most concrete things I'm hearing that we need to work on and resolve. Is there anything else? I know we're out of time; if people have other specific things they would like to see, please note them here, or reach out, or jump on the list. We want to make sure we're using the time in 1.24 to resolve stuff.
A
I guess, how does the GA timeline for pod security impact the PSP removal? I think we kind of jumped around it, but...
B
Because
psp
is
beta,
I
think
pods
migrating
from
psp
beta
to
pod
security
beta
is
reasonable.
I
don't
think
that
anyone
should
be
waiting
for
pod
security
to
reach
ga
before
moving
from
psp
to
it.
A
Okay,
so
so
this
is
the
independent
of
pod
security,
ga
right.