From YouTube: Kubernetes SIG Auth 20190821
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 20190821
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/preview
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
B
The idea here is: we can't necessarily control the policy sprawl, but can we use some methods to understand the policies better and give guarantees? On top of that, there's a larger discussion, which I'll link to, that goes across SIG Security and the CNCF. We've kind of narrowed down what makes sense for it, and we're looking for feedback from different members of the community. Then another effort we keep tabs on is the Open Policy Agent. That's big, and I think it's been good.
B
We hear more and more about it every day. So one interesting thing that might be of relevance to this SIG in particular was Gatekeeper, which is a project of the Open Policy Agent, specifically for running on Kubernetes. They did some proofs of concept of what it would mean to replace Pod Security Policies with something like Gatekeeper, so they added some constraints to the library to cover some of the aspects you'd want to look at.
B
What would the policies look like to get a similar sort of coverage to what we have in Kubernetes at the moment? I think those are the main things that would be of interest this summer. Of course, there's also the multi-tenancy group, which might have ongoing work I keep tabs on, but for right now those are the main things. Does anyone have questions I can answer?
C
Yeah, so one thing that I've been thinking about, and chatting with the Gatekeeper team about a little bit, is the workflow around actually rolling out policies and also requesting exceptions. I think in a large organization, it's unrealistic to simply push out a new enforcing policy across the whole organization.
C
And so we've been talking a little about this idea of what it would look like to either request a temporary exception, while a team works to fix a policy or to fix their code to meet a policy, or a permanent exception, for something like a system DaemonSet. Has the working group talked about the workflow or the rollout for that?
B
I don't think we have touched on that too much, except insofar as we're looking at: do we understand what effects policies will actually have? So one aspect we're looking at is: do we know what this really means? I know that people recently added dry run; dry run, or audit-only mode, is sort of the easy way, from a technical perspective, to deal with that. But I think that's definitely interesting.
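The dry-run mode mentioned above can be sketched as a Gatekeeper constraint. This is an illustrative example, not from the meeting: the `K8sRequiredLabels` template is one of the standard library examples, and the constraint name and label are made up; `enforcementAction: dryrun` is the field that makes a constraint audit-only instead of enforcing.

```yaml
# Hypothetical constraint requiring a "team" label on namespaces.
# With enforcementAction: dryrun, violations are only recorded in the
# constraint's status (for auditing), not rejected at admission time.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-team
spec:
  enforcementAction: dryrun   # audit-only; use "deny" (the default) to enforce
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["team"]
```

Flipping `enforcementAction` from `dryrun` to `deny` is the "easy way" to stage a rollout: audit first, enforce once the violation list is empty.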
A
With the enforcement you could know what policies exist that violate the meta-policy, and go and remove them. But for something like authorization, I think it's difficult to turn on something like "only this team can access resources in this namespace." That policy is breaking if it isn't a meta-policy that applies to our RBAC, our RBAC Role creation, or our RoleBinding creation.
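As a concrete sketch of the kind of policy being discussed, "only this team can access resources in this namespace" is expressed in RBAC as a Role plus a RoleBinding; the namespace, group, and resource names below are made up for illustration.

```yaml
# Hypothetical: grant the "team-a" group access within its namespace.
# Keeping the namespace team-only then depends on no other bindings
# being created there, which is the meta-policy problem discussed above.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-a-access
  namespace: team-a-ns
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-binding
  namespace: team-a-ns
subjects:
  - kind: Group
    name: team-a
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-a-access
  apiGroup: rbac.authorization.k8s.io
```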
B
I think we didn't necessarily have a plan, so much as: okay, we've heard a lot about this, about people wondering what the future is. So I think step one was: well, what would it even look like to use something like this? Is this even possible, or are there things that would need to be done in one place or the other, and what is the experience like? So what we, and the constraint library, were looking at first of all was, okay...
C
I don't think that a Gatekeeper constraint will ever be a total drop-in replacement for Pod Security Policy, because the semantics of how constraints are applied are a little different. Pod Security Policy is sort of deny-by-default: nothing is allowed except what a policy you match permits. Gatekeeper constraints are sort of the inverse of that: by default, everything is allowed. And I don't know if the PSP you match is applied completely, or if it's sort of layered on, but in any case you can only tighten restrictions, I believe.
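The semantic difference described here can be sketched side by side. These manifests are illustrative, not from the meeting: the PSP is an allow-list (a pod is admitted only if a policy it can use permits all of its settings), while the Gatekeeper constraint is a deny rule layered on top of allow-by-default; `K8sPSPPrivilegedContainer` is assumed here as one of the PSP-flavored templates from the constraint library mentioned above.

```yaml
# Deny-by-default: a PSP enumerates what IS allowed.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false          # privileged containers are not permitted
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes: ["configMap", "secret", "emptyDir"]
---
# Allow-by-default: a constraint adds a rule that REJECTS matching
# objects; anything no constraint rejects is admitted.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer
metadata:
  name: deny-privileged
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
```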
B
The motivation is that PSPs are in a very weird in-between state, where they need more work to get to GA, but have some usability quirks that make the future of where we should go with them unclear. At the same time, OPA and things like it are coming up in the ecosystem that supply a lot of the same functionality. I think that was kind of the motivation. I can find the issue in Kubernetes.
A
I think what Tim described earlier, in his presentation a couple of meetings ago, was: a bad enough authorization model, inability to roll out, and a bad API; those are the three takeaways I had from it. Unfortunately, at this time PSP has a ton of users and it's currently in perma-beta, and we have identified some pretty major issues with the current PSP. It's hard to evolve in backwards-incompatible ways, and it doesn't seem at this time like something that we would really want to take to GA.
C
What we've kind of found is that most users have some specific cases that they need policies for, and so you end up having to mix the built-in mechanisms, like Pod Security Policy, with some kind of third-party add-on like OPA. And then you end up having to manage these two policies in different systems that don't necessarily compose well together, and you need different tooling around rollouts and management of the two different policy mechanisms.
B
Sorry, carrying on: the other thing, of course, is that the more people who engage and try out these different methods, or bring forward what they're finding, that always helps out. And the other thing is, if you have priorities, or things that you want to become higher priority as part of the roadmap, definitely let us know so that we can prioritize appropriately.
C
Before we go into issues, I just wanted to mention that discussions on the security process working group are still ongoing. I haven't had a chance to follow up too much on that these past two weeks, but I'm hoping to push more on it. I think the big open question around that working group right now is whether we want to fold some of it into this group, or if it makes sense to have it standalone. So stay tuned, and if you have thoughts, feel free to join in the discussion on that. Alright.
A
So I think the idea here is that we have a bunch of people who bake CA certificates into their Docker images in order to connect to outside services, public services. We don't have very good facilities for managing shared-state certificates, as ConfigMaps mostly. I think one of the big problems is we don't have cross-namespace ConfigMaps or global ConfigMaps, and we don't have a lot of tooling for managing these bundles.
C
Yeah, this is partly motivated because building the CA certs into the Docker images seems like an anti-pattern. Especially if you have different base images running in your cluster, you end up with potentially different sets of trusted CA certs. So the idea is to manage that centrally and have a consistent set of CA certs across all pods in the cluster.
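A minimal sketch of the pattern being proposed, with all names and the certificate content invented for illustration: keep one trusted bundle in a ConfigMap and mount it over the image's certificate directory, instead of baking certs into every image. Today this still has the limitation discussed above, since ConfigMaps are namespace-scoped and the bundle must be copied into each namespace.

```yaml
# Hypothetical CA bundle managed centrally as a ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: trusted-ca-bundle
data:
  ca-certificates.crt: |
    -----BEGIN CERTIFICATE-----
    ...   # PEM-encoded CA certificates go here
    -----END CERTIFICATE-----
---
# A pod mounting the bundle over the image's default cert directory,
# so every container sees the same trusted set regardless of base image.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: example/app:latest
      volumeMounts:
        - name: ca-certs
          mountPath: /etc/ssl/certs
          readOnly: true
  volumes:
    - name: ca-certs
      configMap:
        name: trusted-ca-bundle
```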
A
It's the low-hanging fruit of a bunch of other stuff. I would like to see monitoring such as this: the same metrics for individual authenticators and for overall latency. I don't think that there's too much of a drawback to having these metrics, other than that they're hard to remove. So we should be sure about them when we decide to add them.