From YouTube: SIG-Auth Bi-Weekly Meeting for 20220427
A
Hey everyone, this is the SIG-Auth meeting for April 27, 2022. We've got a pretty full agenda, so let's get started. Tim, are you on the call?
B
Well, Jordan, you're probably the next person I'd ask. Yeah, there's not a lot to say beyond the title: the desire is to take Pod Security to GA in 1.25, barring any new information. I think the main reason we didn't take it GA in 1.24 was mostly around getting experience reports from people transitioning from PSP to Pod Security, and I think OpenShift has at least gotten that underway, so they're aware of what's involved, and GKE is in a similar spot. So we have documentation and processes sort of set up.
B
If you have comments, here would be great, or in Slack, or the mailing list, or the enhancements issue.
A
David, I was going to ask: I think Kristoff had mentioned that Standa had been doing some of the work around the PSA transition for OpenShift, yeah.
C
In short, Standa has a design, and it's public, about how we're going to transform SCCs into these particular labels. There were some changes required along the way, so new clusters and upgraded clusters are going to end up with slightly different rules for how the labels are likely assigned, because of changing defaults over time.
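For context, upstream pod security admission is driven by namespace labels, which is what that SCC transformation targets. A minimal client-go sketch of applying those labels is below; the namespace name and enforcement level are placeholders, and this is only the upstream label mechanism, not the OpenShift design being described.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig; a controller would use in-cluster config instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Patch the pod-security admission labels onto a namespace.
	// "demo" and "restricted" are placeholders, not values from the meeting.
	patch := []byte(`{"metadata":{"labels":{
		"pod-security.kubernetes.io/enforce":"restricted",
		"pod-security.kubernetes.io/enforce-version":"latest"}}}`)
	ns, err := client.CoreV1().Namespaces().Patch(
		context.TODO(), "demo", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("labels now:", ns.Labels)
}
```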
A
That'd be great, yeah.
A
All right, cool. As an aside, does anyone know of any API like SCC that folks are migrating off of? I don't know exactly how the internals, like OPA and such, express their policy; I know it's Rego, but what are the more common use cases? Yeah, to be clear, we aren't migrating.
E
There are still things that we could do to make the transition from PSP a little easier. I think the time has passed for doing any of that in-tree, but there's opportunity to automate some of the tasks with an out-of-tree utility.
E
Specifically, we allow all localhost AppArmor policies. That makes sense for seccomp, since seccomp localhost profiles need to be in a dedicated folder that gives pods access to them; but in the case of AppArmor, any loaded policy is accepted, which includes policies that are loaded for system daemons. So there's no way to limit that to policies that are explicitly granted to pods in the cluster.

E
I would say that's a deficiency in how we've specified the restricted, or I guess maybe also the baseline, Pod Security Standards.
E
One option is to just say that if you want to allow a profile for the cluster, it needs to have a certain prefix. The other option would be a deeper change: refining the semantics of how localhost profiles actually work. AppArmor is still beta; there's some discussion about bringing it to GA, and I've raised this on that KEP as something to think about in the transition of AppArmor to GA.
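A minimal sketch of the prefix idea follows, assuming a hypothetical allowed prefix and using the existing per-container AppArmor pod annotation key; the function names and prefix are illustrative and are not part of the Pod Security Standards.

```go
package apparmorcheck

import "strings"

// Existing AppArmor annotation prefix on pods (one annotation per container).
const annotationPrefix = "container.apparmor.security.beta.kubernetes.io/"

// allowedLocalhostPrefix is hypothetical: only localhost profiles whose name
// starts with this prefix would be accepted by the sketched policy check.
const allowedLocalhostPrefix = "k8s-"

// profileAllowed reports whether a single annotation value would pass the
// sketched check: runtime/default, or a localhost profile carrying the prefix.
func profileAllowed(value string) bool {
	if value == "runtime/default" {
		return true
	}
	if name, ok := strings.CutPrefix(value, "localhost/"); ok {
		return strings.HasPrefix(name, allowedLocalhostPrefix)
	}
	return false
}

// podProfilesAllowed checks every AppArmor annotation in a pod's annotations map.
func podProfilesAllowed(annotations map[string]string) bool {
	for key, value := range annotations {
		if strings.HasPrefix(key, annotationPrefix) && !profileAllowed(value) {
			return false
		}
	}
	return true
}
```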
E
Yeah, I know I've mentioned it on the AppArmor-to-GA discussion. I'm not sure if there's a separate issue for it, but I can follow up on that.
B
Okay. The backwards- and forwards-compatible approach we've come up with for pod security seems like it would let us make changes to the latest versions of these Pod Security Standards if they were necessary in the future; would you agree with that?
B
I can't remember, but the plan was, at the point when that field goes GA, to let pods say "I'm a Windows pod and I am explicitly running Windows containers."
B
I don't think that blocks GA, and it actually encourages me in terms of being able to maintain this feature over time: if new fields show up in the pod spec, we have a way to sort of fold them into this policy when it makes sense.
B
In general, I think the declarative, well, not declarative, non-persistent type of endpoints are a little weird, so I would want to make really sure we wanted to add another one. I don't have a lot of context about exposing information from the authentication layer back to the user being a problem.
A
You can already learn the username associated with a token by trying to perform some arbitrary action that RBAC will deny; it will say "you can't do this," but it won't tell you anything more than the username.
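That behavior can be seen with a minimal client-go sketch; the kubeconfig path, namespace, and resource are arbitrary, and the point is only that the Forbidden error string from the API server includes the authenticated username and nothing else about the identity.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Attempt something the credential is (presumably) not allowed to do.
	// A Forbidden response reads roughly:
	//   secrets is forbidden: User "system:serviceaccount:ns:sa" cannot list resource "secrets" ...
	// i.e. it discloses the username but nothing more.
	_, err = client.CoreV1().Secrets("kube-system").List(context.TODO(), metav1.ListOptions{})
	fmt.Println(err)
}
```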
A
Okay, hopefully we can track down my task. All right, I will move on to the next item. This came up in the triage meeting on Monday, and I just wanted to ask folks' opinion. The gist of this issue is that the Kubernetes controller manager performs health checks against the kube API server.
A
In this case they just disabled the KMS check, which is obviously not going to work out too well for them, but the gist of the question I had was: would we want to expand the configuration of the kube-controller-manager to support filtering which health checks are or are not performed?
A
And sort of related to that: is that the best approach, or is there some better approach? To me this seems like a relatively generic issue. For example, if you have a load balancer in front of the API server, you can decide which health checks you want to perform by configuring your load balancer, but it seems kind of annoying to have to copy-paste basically that same configuration to every component.
C
I don't remember this one specifically, but I do have memories of why people have tried to do this in the past. There were configurations where a load balancer is not used: the kube-apiserver starts up and the kube-controller-manager starts up, and the kube-apiserver, if you don't check to see whether it's ready, can serve incomplete discovery data. That incomplete discovery data had an impact on the way garbage collection functioned, as well as on which controllers would be enabled or disabled, at least in the very distant past.
C
Interestingly, at least that state was valid at one point in time, versus if you never check, you can be in a state the API server knew was never valid, right? Imagine you hit discovery before CRDs have been registered; doing that can cause you to have incomplete information in your GC graph for blockOwnerDeletion, right? Was it bad? It's probably not terrible, you'll fix yourself eventually, but you know, what breaks in the meantime?
B
If discovery is the only thing we care about, are there specific health checks we could look at that say "the things involved in discovery are done setting up"? Or are there specific things at startup that we could make more resilient, so that the things that depend on discovery would wait until discovery was ready and then start up once it was? I don't know; it's at least worth asking the question: is there a way we could improve this so we don't need the knob?
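For reference, the API server's /readyz endpoint already lets a caller exclude individual checks and ask for a per-check report; a minimal sketch of probing it through client-go is below, where the excluded check name is just an example.

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Ask /readyz for a verbose per-check report while excluding one check;
	// this is the kind of filtering a component could do without a new knob.
	body, err := client.Discovery().RESTClient().
		Get().
		AbsPath("/readyz").
		Param("verbose", "true").
		Param("exclude", "etcd"). // example check name
		DoRaw(context.TODO())
	fmt.Println(string(body), err)
}
```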
C
It's also worth remembering that I only have my narrow window of why this was originally added; I don't know what else depends on it now. Discovery is something I know this got added for, and we've actually improved other aspects since then. I mean, you rewrote GC discovery, and maybe remember reviewing that monster.
B
Yes. That check used to just try to complete a successful discovery call, and then that issue had it switched to calling healthz, without a lot of explanation.
G
Basically, this is a proposed addition to the ExecCredential spec. Exec credential plugins kind of need caching so that they can be at all performant, but caches, you know, can be subject to clock skew; they can cache something that then gets invalidated for any number of reasons, and then, for example, your webhook token authenticator does some sort of check and realizes it's not valid anymore.
G
Today kubectl can retry the exec credential plugin, but if the plugin doesn't have any way of knowing that it's being retried, it might just give you the same invalid cached credential. So this proposal is that there should be a boolean field within the ExecCredential spec to let the plugin know that it's being retried as the result of an unauthorized response.

G
The open questions are: does this feel like something that people would be on board with adding, and if so, would it need a KEP, and would it need to be feature gated?
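To make the shape of the proposal concrete, here is a rough sketch of how a plugin might consume such a field. The retryAfterUnauthorized name is purely hypothetical (it is not part of the client.authentication.k8s.io API today); the plugin reads its input from the KUBERNETES_EXEC_INFO environment variable that kubectl sets for exec plugins, and the cache helpers are stand-ins.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"time"
)

// execInfo mirrors just the bits of the ExecCredential input we care about.
// RetryAfterUnauthorized is the hypothetical boolean discussed above.
type execInfo struct {
	Spec struct {
		Interactive            bool `json:"interactive"`
		RetryAfterUnauthorized bool `json:"retryAfterUnauthorized"` // hypothetical field
	} `json:"spec"`
}

func main() {
	var info execInfo
	if raw := os.Getenv("KUBERNETES_EXEC_INFO"); raw != "" {
		_ = json.Unmarshal([]byte(raw), &info)
	}

	token := loadCachedToken()
	if info.Spec.RetryAfterUnauthorized || token == "" {
		// The previous credential came back unauthorized, so bypass the cache.
		token = fetchFreshToken()
	}

	// Emit the ExecCredential status kubectl expects on stdout.
	out := map[string]any{
		"apiVersion": "client.authentication.k8s.io/v1",
		"kind":       "ExecCredential",
		"status": map[string]any{
			"token":               token,
			"expirationTimestamp": time.Now().Add(time.Hour).Format(time.RFC3339),
		},
	}
	json.NewEncoder(os.Stdout).Encode(out)
}

// loadCachedToken and fetchFreshToken stand in for a real plugin's cache and
// identity-provider calls.
func loadCachedToken() string { return os.Getenv("FAKE_CACHED_TOKEN") }
func fetchFreshToken() string { fmt.Fprintln(os.Stderr, "refreshing"); return "new-token" }
```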
B
There was the thing we had in the v1alpha1 version, where we were passing down some information about the request, or the last response, or something; I think it was the status code of the previous response.
B
So you could see if you had gotten a 403 or a 401 or something, I forget, and the plugin could use that information to make decisions. That was only in the alpha; it wasn't exactly clear how plugins were expected to use it. I think we were maybe envisioning building something on top of that.
B
It's almost like a Kerberos kind of challenge: you're going to need information from the previous invocation plumbed down to you to do a thing, and that never really materialized. I think giving plugins the information they need to make smart decisions is reasonable, especially for cases like we have right now in-tree, like the service account tokens, where you can say "I get a token and it's good for an hour."
B
So, in the absence of other information, don't call me again for an hour; or, this should be good for an hour, but it's revocable, so it might start failing, and it'd be nice to not keep failing for an hour in the meantime. I think it's probably worth a design.
B
A boolean might be too simplistic; you might need more information, like some timestamps, or maybe even the status code of the last request. I think more information might be needed, or at least it's worth asking whether more information would be needed. So a small design, I think, is useful, but I don't know what other people think.
D
It would be awesome if kubectl had some built-in or recommended way to do this. It could be built-in caching functionality or a recommended caching pattern or something, because, at least based on my experience with the GKE plugin, I think every credential plugin is kind of rolling their own caching layer right now, because it turns out that they're all super slow.
D
Yeah, yeah. But I think the problem is we don't know where this would live. On Linux we know what the standard is, right? There's XDG, which is super complicated: read these environment variables, and that environment variable and this other environment variable will tell you where you're supposed to put your cache files.
D
On Windows? Who knows, right? Will it work if the user has relocated their home profile to a network share? I don't know what all kinds of weird stuff people do, so we thought putting it next to the kubeconfig is pretty safe. But yeah, I'm sure we are also affected by this problem that Margot is describing. My two cents: it would be great if there were more support for this.
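A minimal sketch of the trade-off being described is below, assuming a hypothetical plugin that falls back to a directory next to the kubeconfig when the platform cache directory is unavailable; none of this is existing kubectl or plugin behavior.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cacheDir picks where a hypothetical credential plugin would keep its cached
// tokens: the platform user cache directory when the OS reports one (XDG on
// Linux, the appropriate folder on macOS/Windows), otherwise a hidden
// directory next to the kubeconfig file.
func cacheDir(kubeconfigPath string) string {
	if dir, err := os.UserCacheDir(); err == nil && dir != "" {
		return filepath.Join(dir, "example-auth-plugin")
	}
	return filepath.Join(filepath.Dir(kubeconfigPath), ".example-auth-plugin-cache")
}

func main() {
	kubeconfig := os.Getenv("KUBECONFIG")
	if kubeconfig == "" {
		home, _ := os.UserHomeDir()
		kubeconfig = filepath.Join(home, ".kube", "config")
	}
	fmt.Println(cacheDir(kubeconfig))
}
```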
B
Both of those seem like related, but probably different, problems, and I think this is useful independent of whether more caching capabilities are added on top. If we're talking about the whole lifecycle, not just "how do I get a credential," but "how do I get a credential even once it's been invalidated," or "how do I get a credential without making everyone roll their own caching layer," those are good questions to ask, to build on top of this feature. I think I'd probably ask them sort of separately; I mean, if there's one solution that easily solves both, that's fine, but I wouldn't hold this up on it.
B
Especially for the caching, I would expect there would be more of a handshake, because a plugin that is counting on kubectl to do some built-in caching stuff would maybe behave differently if it knew it was talking to an old kubectl that wasn't going to be able to cache anything. So there I think you would need something more like: kubectl indicates to the plugin that it can cache, and then the plugin skips its own internal caching. Whereas this is more just additive information that I think we could probably send somewhat unconditionally.
B
I don't know; we could consider how confident we are in the approach and what the risks are. If there are any risks, we probably want to make it opt-in, but you can opt in when you set up the invocation of your auth plugin: if your auth plugin knows how to respond to "force refresh," or "this thing was stale," or whatever inputs, then you could indicate that when you configure your auth plugin.
B
We did similar things with the stdin handling: plugins that expected to be able to consume stdin could indicate that in their configuration, like "I require stdin."
B
So if anything else in the kubectl invocation used stdin, don't even call me, just fail, because my auth plugin requires stdin; or, I'd like it, but if you can't give it to me that's fine, invoke me anyway. So we have patterns for indicating capabilities of the auth plugin in the configuration.
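That capability indication lives in the exec plugin's kubeconfig entry; a minimal sketch using client-go's clientcmd API types looks roughly like the following, where the command and values are placeholders.

```go
package main

import (
	"fmt"

	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	// Hypothetical exec plugin entry; interactiveMode is the existing
	// "do I need stdin?" capability declaration mentioned above.
	exec := &clientcmdapi.ExecConfig{
		APIVersion:      "client.authentication.k8s.io/v1",
		Command:         "example-auth-plugin", // placeholder
		Args:            []string{"get-token"},
		InteractiveMode: clientcmdapi.IfAvailableExecInteractiveMode, // or Always / Never
	}

	authInfo := clientcmdapi.NewAuthInfo()
	authInfo.Exec = exec

	cfg := clientcmdapi.NewConfig()
	cfg.AuthInfos["example-user"] = authInfo
	fmt.Println(cfg.AuthInfos["example-user"].Exec.InteractiveMode)
}
```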
B
I'd probably start with just the issue in the enhancements repo, and sort of the motivation paragraph of a KEP; there's a lot of stuff in a full KEP doc. I would start with just the motivation and the three-sentence version, to start getting feedback and attention and agreement that here's the crisp problem and here's what we want to do, and then we can fill out details.
A
As an aside to the last one, I was going to ask a related question. For folks here, I know you have to maintain one of these things: do any of y'all have a desire to plumb the log level through from kubectl to the plugin?
A
I'm sorry, I missed it, plumb through what? The log level. So if you do kubectl get pods with --v=100, and the reason that get pods isn't working is that your plugin is broken, you don't actually get anything out of running it verbosely, because the plugin doesn't know that you're trying to run it verbosely.
D
I think that would be helpful in our specific, the GKE-specific, case: we're shelling out to gcloud, so I guess we could pass it on, like bump gcloud's verbosity level, so you can understand exactly why it is taking a minute to get your credential.
A
So we have these debug wrappers wrapped around all that stuff that do timing, if you have it at a high level.
B
Okay. I'm super sensitive about adding logging to things that are handling credentials and accidentally making them start spitting out credentials. But if you're wondering "where am I spending all my time?", and it's not clear at high verbosity that this call to the API server took a minute but really 50 seconds of that was invoking this auth plugin process and waiting for the result, we should at least include that sort of timing.
A
So yeah, that could be an enhancement; it looks like a minor enhancement to the existing exec plugin stuff. I can see being wary about accidentally causing credentials to be logged. Granted, your plugin should not do that at any log level, so just refuse to log your creds, but you know, I can see that going awry.
C
This is related to kubeadm, right? Or is it? I thought I saw kubeadm; I normally skip over the kubeadm ones, figuring they tend to review their own stuff.
F
Yes. I don't know who remembers it, but we had a discussion about handing kube-rbac-proxy over from my private repository to SIG Auth. So David took the time to review the code with me, and we had kind of two lists, one pre-acceptance and one post-acceptance. I think it was really cool to do that; we found a lot of topics that need some attention, and David also wanted a second reviewer.
C
For a couple of points in particular: the actual transport configuration that's set up by the kube-rbac-proxy was not something I was expecting offhand, and then the actual single-host reverse proxy. I could use some assistance looking at those two areas. Yeah, Jordan, I see you looking; I took the other ten thousand lines, it's like 10 lines, yeah. But a single-host reverse proxy, like a daemon running a single-host reverse proxy with a custom transport configuration set up, oh wonderful, in ways that do not match kube, so...
A
Yeah, you know, this repo is old, so I don't know what the state of the world was when it was initially written, but over the years the auth code has become much more consumable.
C
And the API server code as well: setting up transports, handling server creation, automatic reload of serving certs and keys. I have several of those things on the pre-acceptance list, just to understand what we have and have something uniform. But for those questions about the proxy and the transport themselves, I am looking for someone who is willing to help look at that.
A
I can't promise that I will have time, but I am willing to look at it. I've spent enough time in the not-too-distant past working on reverse proxies to have some context for where they go wrong.
F
I would need to take a look at the issues. All the links within the issues are links to review comments, basically.
F
Yeah, so the points are definitely legit. I wasn't there when it was created, but as far as I understood, it was more like some kind of prototype that was created, had huge success, and kind of outgrew the prototyping weekend where it was developed. So it needs a little bit of love and care; I completely agree with that.
C
I don't know what change is needed; there's a significant checklist. I bet it's going to take a few weeks for Kristoff to get through it anyway, but before we say "yep, it's a kubernetes-sigs project," I think we need to know what that involves.
B
If we both look with one eye, then when that eye starts bleeding we'll both have one good eye left. Yeah, proxy code: every time I look at anything touching proxy code, my brain goes into slow motion.
A
Oh, as an aside, I see there needs to be test coverage, first off. This one is for you: please test with both HTTP/1 and HTTP/2, each by themselves, because the way the transports work in the standard library is completely different for those, and I have found CVEs from proxies that only worked on HTTP/1, so just be careful.
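A minimal sketch of what "test both by themselves" can look like with the standard library's httptest package is below; the proxy handler here is a trivial stand-in, not kube-rbac-proxy's, and the test names are illustrative.

```go
package proxytest

import (
	"net/http"
	"net/http/httptest"
	"net/http/httputil"
	"net/url"
	"testing"
)

// newProxy builds a trivial single-host reverse proxy; a real test would wire
// up the actual proxy under review instead.
func newProxy(t *testing.T, backend string) http.Handler {
	u, err := url.Parse(backend)
	if err != nil {
		t.Fatal(err)
	}
	return httputil.NewSingleHostReverseProxy(u)
}

// run exercises the proxy over a TLS test server, optionally forcing HTTP/2,
// so each protocol's transport path gets covered on its own.
func run(t *testing.T, enableHTTP2 bool) {
	backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	}))
	defer backend.Close()

	front := httptest.NewUnstartedServer(newProxy(t, backend.URL))
	front.EnableHTTP2 = enableHTTP2
	front.StartTLS()
	defer front.Close()

	resp, err := front.Client().Get(front.URL)
	if err != nil {
		t.Fatal(err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		t.Fatalf("got %d with HTTP/2=%v", resp.StatusCode, enableHTTP2)
	}
}

func TestProxyHTTP1(t *testing.T) { run(t, false) }
func TestProxyHTTP2(t *testing.T) { run(t, true) }
```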
B
This probably should have gone up in announcements, but 1.24 is planning to cut next week, and the draft schedule for 1.25 is up. Let me drop a link in; I'll put it up in the announcements for 1.25.
A
That's better than I thought; I thought enhancements freeze would be a week earlier, like week three instead of week four. So that's nice.
B
Anyway, keep an eye on that, and as always, if you want to get stuff into 1.25, the earlier you have designs and things up, the better the chances.
A
I might just give everyone their five minutes back; it has been a long day, unless anyone desperately wants to look at test flakes and CI.