From YouTube: Kubernetes SIG Auth 2020-02-19
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 2020-02-19
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/preview
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
B
We moved most things out of unauthenticated-by-default access rules; version and health were left. The health-related checks were left because of load balancers, which typically run unauthenticated, and those endpoints don't actually expose much information beyond just an okay / not-okay status code. So they don't really leak anything useful.
B
The version endpoint: the argument here is that it allows fingerprinting, including the Go version, and knowing exactly which minor version tells you which exploits to use. I am pretty ambivalent about this. I think most exploits could just be iterated through, so I don't feel particularly strongly, but I wanted to bring it up here and see what other people's thoughts are.
B
I guess there's sort of a spectrum of responses. On one end: we think this is fine, and even if you proposed this and were willing to do all the work, we would say no. On the other end: we really want to do this, we want to staff this, and we're not there. The thing about tightening stuff that might break people is that it's really hard to prove safe. So even if someone was willing to do the work (I mean, the work is not that much)...
F
The argument beyond just the version targeting: I think there's a risk of extra things being unauthenticated that could be abused in unexpected ways. The expected use of this is that you get a version back, so there's an expected attack use case of, okay, I can target my exploit really tightly against this version. But there is also the unexpected.
F
Oh, there's some vulnerability in the code that happens to be serving this particular path, or it happens to leak something that we didn't expect it to leak. I'm more worried about that class of risk than the version leak, and I think that class is very hard to quantify and describe because it's unknown risk. But I'd be more worried about that, and the benefit of this is reducing that risk, more than preventing specific version targeting. Yeah.
F
Though I think the state is: we definitely know people are going to be using this. We will break someone, and the question is whether we feel the case is strong enough to say, yes, you have to do the work to turn it back on. As well as, you know, we need at some point someone who actually wants to push this through the whole process.
B
Right. So, given that it is likely to break some people, and no one here (and no one on the issue so far who is responsible for this component) is convinced by the information-leak and exploit-enumeration argument, I would probably recommend documenting how to remove this for clusters that care about this particular endpoint anyway, and going forward recommending we not add new endpoints to unauthenticated access.
B
That is the most complex version of this request. Another variant would be the ability to not mount tokens to particular containers. So if only some of the containers in your pod needed service account tokens, you could give only some of those containers API access and avoid giving API credentials to the other containers. That would still be an API change, but less complex. It would just be a matter of mounting and admission at that point.
G
Can you... well, yeah: you can stop the default mount, and you can create a secret with a known name that will get filled in by the service account token controller, and then you can mount that token, with a known key, so it's going to show up inside your pod, or inside a particular container. That way the container won't start until the value is present. I won't say it is easy, but I think you can do it.
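A minimal sketch of the workaround described above, using the legacy secret-based service account token mechanism. All names here are illustrative; the annotation and secret type are the standard ones the service account token controller watches.

```yaml
# 1. A secret with a known name; the token controller populates the
#    `token` key because of the service-account.name annotation.
apiVersion: v1
kind: Secret
metadata:
  name: my-sa-token
  annotations:
    kubernetes.io/service-account.name: my-sa
type: kubernetes.io/service-account-token
---
# 2. A pod that disables the default automount and mounts the secret
#    only into the container that needs API access. That container
#    will not start until the secret's token value exists.
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  serviceAccountName: my-sa
  automountServiceAccountToken: false
  containers:
  - name: needs-api
    image: example/app
    volumeMounts:
    - name: token
      mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      readOnly: true
  - name: no-api          # this container gets no API credentials
    image: example/sidecar
  volumes:
  - name: token
    secret:
      secretName: my-sa-token
```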
B
Yeah, I don't think sideloading different service account credentials into different containers is a primary use case. The one where you give credentials to some containers and not others seems more useful to me, but you could do that with a projected token. So you could turn off all automatic mounting for your pod and then specifically mount the projected token into the container you wanted.
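A sketch of the projected-token variant just described, again with illustrative names: automounting is off for the whole pod, and a projected `serviceAccountToken` volume is mounted only into the container that needs it.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  serviceAccountName: my-sa
  automountServiceAccountToken: false
  containers:
  - name: needs-api
    image: example/app
    volumeMounts:
    - name: sa-token
      mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      readOnly: true
  - name: no-api          # no token volume, so no API credentials
    image: example/sidecar
  volumes:
  - name: sa-token
    projected:
      sources:
      - serviceAccountToken:
          path: token
          expirationSeconds: 3600
```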
B
I mean, I kind of like the "don't do anything automatically" approach if you're managing it to this level. Rather than remembering to put a dummy mount on every new container, I would probably disable the automount and then just grant access selectively. I'd rather a container fail because it didn't have what it needed, instead of accidentally getting more than it should. Yeah.
B
Mike, were you going to copy in the sandbox boundary doc? I'm looking for it right now. All right, and then we can describe what David talked about: there are ways to accomplish this. It's not clean or elegant, but if this is a super important use case for you and it's worth doing not-clean, not-elegant things, here's how you could accomplish it.
A
I put this on there; I had just seen it and it kind of just got left there. I was curious if we wanted this. The general change, I think, was not to add the KMS plugin check directly, that is, the KMS stuff, into livez (or whichever one it was), so it doesn't get added to it.
H
Perfect, yes, or it was plugged in. So I was thinking about this, and I can't really think of a scenario where restarting kube-apiserver would solve issues. If there is, for example, a misconfiguration (for example, kube-apiserver is not talking to the right socket), restarting is not going to solve the issue.
D
Say a call to decrypt a DEK takes 20 milliseconds and there are 15,000 secrets. Then the initial startup of the kube-apiserver takes about 8 minutes to churn through all of those, because the list is followed by serial decryption of all the DEKs. So in that scenario, I think we're looking for mitigations to make the API server not crash-loop in that kind of pathological performance scenario.
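As a back-of-the-envelope check of the numbers quoted above (20 ms per decrypt, 15,000 secrets), the serial decryption alone works out to about 5 minutes; the 8-minute figure mentioned presumably includes the rest of the list and startup work:

```python
# Rough estimate of serial DEK decryption time at kube-apiserver startup,
# using the figures from the discussion above.
decrypt_ms = 20        # time to decrypt one DEK, in milliseconds
num_secrets = 15_000   # number of secrets whose DEKs must be decrypted

total_seconds = decrypt_ms * num_secrets / 1000
print(f"serial decryption alone: {total_seconds:.0f} s "
      f"(~{total_seconds / 60:.0f} min)")
# Serial decryption alone is ~300 s (5 min); the quoted 8 minutes
# presumably also covers listing and other startup overhead.
```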
D
Health checks: it looks like we in fact do exclude this health check in the endpoints, and we do in GKE. So I think, from your explanation of what livez and healthz are, or what livez and readyz are: should it just match what we expect and what we use? Is it an opinionated version that we say we'll maintain? If that's the case, and we're excluding it in our readiness probe and in our endpoints, we should probably just copy that change over to the livez handler.
A
The one I built didn't use a TPM; I never finished the PKCS#11 integration, but it used the key locally. It was given the key encryption key on startup and then just used it. One of the planned extensions was PKCS#11, so it still didn't do RPC, or wasn't planning on doing RPC, and that is a choice. I do generally agree with you guys that I would expect that to be the uncommon choice.
J
So I'm just trying to think about the case of an RPC, right, and say that it's outside of the... I mean, this is a storage-level issue, right. But take the AWS case in particular: if a customer, say, revokes the grant on the key, on a managed service or something where we actually want the API server to stop being able to decrypt things, then what are the implications?
H
Yeah, that was sort of the consensus internally as well: it doesn't seem... Basically, what we do today is we check the health of the KMS plugin on creation of the cluster. In other words, we want to make sure that the contract of creating a cluster with encryption on is fulfilled on create.
H
Could we check... like, I assume that the way we want to allow an opt-in is implementing the gRPC health protocol, yeah? Okay, we could check for both: if the plugin implements healthz (sorry, gRPC health), then we can use that, and then fall back to what we have today.
J
So in the AWS-specific case, we're calling the AWS KMS API. What if it's rate limited, right? Like, I get a TPS issue where KMS isn't down, I'm just making too many KMS calls. Then what? Do I prevent that new API server from coming online? Or is this separate from the question you just asked? Yeah.
G
And then the discussion of: do we really want to stop serving everything when you can't read secrets? As I understand what was discussed here, it would still be part of readyz, so it would give the appearance to all external consumers that it stopped serving. The benefit that you get is that you don't pay a startup cost.
G
It takes a long time, and in that case, if you have a very large cluster, the situation really is: this cluster's fine, it's priming its information, but you could still functionally use this cluster without any difficulty. You can read and write any object you want. But if you have a bug and you end up with 750,000 secrets, and you wait for the caches to sync, then you don't become ready for a very, very long time.
H
Yeah, this cache warm-up issue is an interesting one, and I was thinking of maybe approaching this problem from a different point of view. This should probably be a separate discussion, but I'm entertaining the idea of preserving the cache in encrypted form across restarts of kube-apiserver. Yes, I think that's the only way I know how to make that startup issue go away.
E
Sure. So this is something that we discussed (I don't remember quite how long ago, but a while ago), and the suggestion was this idea of having sort of standardized policies that we publish, ones that are implementation-agnostic. So we might publish kind of official pod security policy recommendations.
E
This would, let's say, solve some similar problems. Some of the questions this addresses are around things like Helm packages and third-party packages for Kubernetes, and how those should work with pod security policy. The idea is that, by having these standardized policies, you could just document which bucket the package fits in.
E
Yeah, so this is sort of the most security-relevant set of restrictions. As an example, we've decided not to put read-only root filesystem in restricted. I'm open to discussion around that, but I'm not convinced that the compatibility you sacrifice by including it is worth the security benefit.
E
The goal is that these are useful, and usable enough to be widely adopted, so I'm thinking of restricted as what we wish the defaults were. Then the last one is the default policy, and the way I'm defining this is: if you create a pod that has the absolute minimum specified in the pod spec (so basically, I think that looks like a pod with one container that has a name and an image; that's sort of your bare-minimum pod), that pod should be allowed under the default policy.
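The bare-minimum pod just described, one container with only a name and an image, looks something like this (names illustrative):

```yaml
# The "absolute minimum" pod spec referred to above. Under the proposed
# default policy, this pod should be admitted as-is.
apiVersion: v1
kind: Pod
metadata:
  name: minimal
spec:
  containers:
  - name: app
    image: example/app
```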
E
But if you tweak the security settings to essentially weaken the security of that pod, or weaken the isolation, then that should be denied. So default tracks with the current default values of pod features, and restricted tracks with the current recommended best practices. Those two both need some concept of versioning, since they are subject to change over time.
E
Privileged always just means anything goes, so it doesn't need to be versioned, right. So that's kind of how we're defining the three buckets, and then the rest of the document goes into the specifics of exactly what the spec should look like: which fields are restricted and what the allowed values should be. I'm not sure we need to walk through all of that right now, but feel free to take a look at the linked document.
E
One approach we could take (by the way, the plan is to publish this as a docs page on kubernetes.io) is that, as we add additional fields or constraints to the spec, we include a version annotation on them. So, you know, "this field is in the spec as of Kubernetes 1.19", or "this value is restricted as of Kubernetes 1.20", and then leave how to handle versioning of the different profiles up to the specific implementation.
G
Host network is one we hit a lot in OpenShift. The other one that we hit a lot had to do with host mounts. There was something about volumes, where some volumes were used, but you didn't want to allow someone to be root, or to set any SELinux label and be able to mount anything, right. So.
E
The users who are typically managing the pods, the deployments that need elevated privileges above default, are already fully privileged users. A lot of these pods end up being things like your monitoring infrastructure or security agents, things that are typically administered by the cluster administrator or security team who already has elevated access, so the security benefit of having some sort of in-between profile is marginal.
E
Some other issues around this: I've seen a lot of confusion about the difference between a pod security policy and a pod security context, from users who sort of think that you need a pod security policy that exactly matches what the workload is. I think we haven't exactly helped the situation by shipping a PSP for every system workload, but yeah. So the workload should be running with least privileges, but that doesn't necessarily mean that the policy on that user or namespace needs to be the least-privilege policy.
E
And then the third thing I would add here is that I think it's hard to come up with a general policy between privileged and default. I think it ends up looking fairly workload-specific, and it would be hard to define a policy that is actually more secure than privileged but elevated above default. So, for instance, with host mounts: if you allow host mounts, as far as I'm concerned, that's the same as privileged. (Well, right, it isn't, because SELinux labels don't work out that way.)
G
You have to take special action to force yourself to do something. So SELinux labels mean that you don't necessarily just get everything by virtue of being able to mount a path. The other distinction I've seen commonly is the difference between host network and privileged: the fact that I'm willing to allow a user to bind to the host network doesn't necessarily mean that I want to allow them to mount any host path.
G
But if I'm looking at a Helm chart and trying to decide "is this a thing I want to install", its saying "privileged" makes me a lot less likely to install it than its saying "host network", right. Whether it should, or whether I truly understand what it means to be able to bind to these ports, maybe it goes one way or the other, but I think as a user looking at it: this thing wants to be privileged, I'm not going to let that happen; this thing wants host network, oh, okay.
G
So I don't think it's about whether you are able to express a restriction. It's about a user looking at this in terms of "this package requires X, Y, and Z". I think host network stands out; host mount stands out a lot less, right. Once you can mount the filesystem, yeah, most people probably associate that as about the same as privileged, even though there is a distinction. But I do see host network as one that we, I think, commonly distinguish.
E
So one policy that I've kind of thought about, and it sounds consistent with what you're saying, is that maybe there's a sort of network-admin level of privilege that lies between privileged and default, that we could potentially standardize a policy around. I think that would allow host network, host ports, maybe CAP_NET_ADMIN. Yeah, I'd have to think about what else.
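As a sketch of what a pod under such a hypothetical network-admin tier might request: this tier is only being floated in the discussion above, but the fields themselves are standard pod-spec fields, and names are illustrative.

```yaml
# Illustrative pod that a hypothetical "network admin" profile would
# allow but "default" would deny: host networking, a host port, and
# the NET_ADMIN capability, without full privileged mode.
apiVersion: v1
kind: Pod
metadata:
  name: net-agent
spec:
  hostNetwork: true
  containers:
  - name: agent
    image: example/net-agent
    ports:
    - containerPort: 9100
      hostPort: 9100
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]
```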
A
Would this be easier if... so, if we did define just the three tiers that you have in the doc, the bulk of Helm charts would probably end up being privileged. I mean, yes, that might mean that the Helm charts are just low quality, but it might just be that, okay, we don't have enough buckets and everything ended up in the privileged policy. I don't know.
E
Profiles for Windows pods are under discussion. I don't understand Windows security enough to have a good answer here, but we're trying to figure out what the best practices would look like, and I'm not even sure you can exactly have privileged Windows pods in the first place. And then the last piece we've talked about in the past is sandboxed pods.
E
I think this sort of falls into the category of: we don't have enough of a standardized definition of what a sandboxed pod looks like for it to fit into this model. So the stance that I'm taking is basically: you can still have your three security buckets, and then you might have a separate policy governing the runtime class and whether a pod runs in a sandbox, and then you fit those policies together however you see fit.
A
We're out of time, but I did want to ask about the intersection of policies. Like, I'm just the person installing the Helm chart; do I understand what's going to happen if I have two policies? This thing is not in a sandbox, but... I guess if you're restricted and not sandboxed, maybe you're okay, but if you're something higher-privileged, yet in a sandbox, what does that mean, right?
E
Yeah, I mean, I think the way I would think about that is: oh, this thing needs privileged; I don't really want to give it full privileged access, so maybe I'll give it privileged but running in a Kata container. Depending on why it needs privileged, that might make it not work, but I think once you get to that level, I'm not sure how much we can say and standardize. We'll see. I guess one option is we could say... I guess this mostly just becomes relevant for privileged again.
G
I think it could also be just a forcing function for whatever the author of the Helm chart, or operator, or whatever it is they're providing, does. Right: do I really need to be privileged, or does host network work? Do I really need to be privileged, or can I just run as any UID without doing a host mount? I think it could be a forcing function to get them to improve, so that they would no longer be listing privileged.