From YouTube: Kubernetes SIG Auth 2019-10-16
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 2019-10-16
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/preview
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
D: So it's already got some history in trying to get it out. Then, at some point, I joined to try to arrange the whole thing and an initial implementation. Later on, there was an API review meeting, somewhere in July I believe, if I recall correctly, where we decided that we needed to elaborate the use cases further, reevaluate the whole thing, and work out how we select resources for auditing, and so on.
D: There was also another document online that was publicly shared. It was kind of a design draft, a first attempt to wrap up all the discussions that we had so far and see how to proceed, with some suggestions for the design and so on. Again, it contains some interesting, quality comments inside.
D: Some weeks later, this evolved into a KEP, a Kubernetes Enhancement Proposal, accompanied by a supplementary document that has more details than the KEP, in order to clarify the whole thing. During this period there have been quite interesting ideas on how to approach the whole thing; we had back-and-forth conversations and implementations, mainly around how to select resources.
D: What would be the most convenient for the end users, and how to avoid some mistakes from the past. On the other hand, we had some discussions, and questions left open, around who does what and how exactly to separate things, especially in the much richer role scenarios that we are currently working with. Initially, things started around a normal hosted cluster, but then we also added managed clusters to the scene, and so on, which makes things more complicated.
D: Also, the community contributed interesting use cases and roles where you also have separation between, say, an administrator and an operator within the cluster, and that kind of stuff. So it's a pretty interesting topic, abundant in scenarios, which of course makes it a kind of complex thing to tackle.
D: Apart from the fact that we are improving, we kind of seem to progress, and then at some point we scratch everything and start once again, which does not draw us any closer to releasing something and getting the whole thing out into the world. So I would like to just ask you: do you have any ideas how to proceed here in some more constructive way?
G: I'm not saying that at all, but I do think it's worth asking, before we kind of reboot the discussion: why do we think we ended up here, and can we maybe try to avoid that? So maybe it is asking more fundamental questions; maybe it is trying to pick smaller pieces that we don't envision needing to grow in really complex ways.
G: I think some of the initial attempts came up with an API that could start simple, but we could see it needing to get a lot more complicated in the future, and that kind of started these endless attempts to predict the future and all the complex ways it would need to grow. So maybe that's the kind of thing we could do differently: try to tackle pieces that wouldn't need to grow in complexity in the future, where we could imagine them combining with other simple pieces to address future use cases.
H: Yeah, I definitely agree, and my hope is that that's sort of what we're doing now. If we need to meet more often than every two weeks and make room for other SIG agenda items, then we can schedule additional meetings, but yeah, I agree that to move this forward it's definitely going to require a bit more than bi-weekly comment responses on GitHub.
H: So, going back to Jordan's question of how did we get here: when we were thinking about use cases for this dynamic audit policy, I think we sort of started with, okay, we have this audit policy; what are the use cases for manipulating this audit policy? I think that was the wrong starting point, and that's kind of what led to getting stuck here. So I just wanted to back up and talk about: what is the point of an audit policy at all?
H: I would argue that, for now, let's leave multi-tenant audit logging out of the picture; perhaps we can solve that problem separately, in a different way. Are there any thoughts on that framing of the problem, or are there other purposes for an audit policy that we should consider?
H: And maybe just to clarify why I think that this is a good starting point for this discussion: this is where we are today, where we have this sort of top-level policy that just sends everything at a certain filter level to the backend. And so the question is: why do we want to build this into Kubernetes, as opposed to continuing to just say, do the filtering on the server side, in the audit logging backend?
H: For those properties, the main one is that it should be secure. We don't want an attacker to be able to cover their tracks by manipulating the audit policy, and we sort of get that today through the static audit policy, because it's a static file on the master that can't be manipulated through the API.
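For reference, the static policy being described here is the file the API server loads at startup via `--audit-policy-file`; because it only lives on the master's disk, it cannot be edited through the Kubernetes API. A minimal sketch of such a file (the specific rules are illustrative, not from the meeting):

```yaml
# Minimal Kubernetes audit Policy, loaded statically by the API server
# with --audit-policy-file; it cannot be changed through the API itself.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Record secret access with metadata only, never request/response bodies.
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets"]
  # Drop very noisy, low-value requests entirely.
  - level: None
    resources:
      - group: ""
        resources: ["events"]
  # Everything else: log request metadata plus request/response bodies.
  - level: RequestResponse
```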
H: If your attacker has access to that file, that's bad news already. And then the other piece, which sort of gets into the original motivation for splitting up this policy into these audit classes, is that we'd like this to be extensible. So when thinking about the optimization piece, or the sensitivity of CRDs, the people who are going to know which of those requests are just noise or expected are going to be the ones deploying those objects and workloads, not the cluster admin.
D: There are different perceptions of what is important and what is noise between two different roles: the same resource could be perceived by the administrator as something important, and as noise by the operator, or the other way around. And the other thing is, I was thinking about your proposal here: if we use the resources themselves to update these things, there won't be a central place where you can look into entries on what was the actual case. What was your policy?
H: Yeah, so I think we're kind of skipping ahead a little bit towards the solution, which I'd like to talk about, but I also want to nail down the problems we're trying to solve a little more first. Sorry, what was your first question? Oh, that what counts as noise is defined differently in different contexts.
G: Or by the consumer. So an RBAC consumer of audit logs is a good example: it really wants to know every single request that a particular user made, so that it can make a policy that covers that, or a role that covers that. And so creation of Events is probably something that an admin considers noise, right, because it happens all the time, they're just informative, they get thrown away after an hour, and it doesn't matter, so they're probably going to filter that out.
G: I think [unclear] is definitely a thing, and you could say, well, you just need to put a filtering backend in front of that backend, which is possible for some people, I guess, but also a fairly large burden. So I wonder if some of these cases could be solved by having the acceptable level of noise chosen.
H: Yeah, so I guess how I was thinking about this was: suppose that the audit event has some data on it that says, this is the level of sensitivity and this is the level of noisiness, and then the per-backend policy could just look like: what level of sensitivity am I allowed, and what level of noise can I tolerate? You can almost think of the noise piece as being like the log levels that we have today, the verbosity level.
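A minimal sketch of that idea, assuming hypothetical `Sensitivity`/`Noisiness` fields on the event and per-backend thresholds. None of these names exist in the Kubernetes audit API; this only models the two threshold checks described above:

```go
package main

import "fmt"

// AuditEvent carries the hypothetical labels proposed in the discussion.
type AuditEvent struct {
	Verb        string
	Sensitivity int // higher = more sensitive (e.g. secret bodies)
	Noisiness   int // higher = chattier, like a log verbosity level
}

// BackendPolicy is a hypothetical per-sink policy: what the sink is
// trusted to see, and how much volume it is willing to take.
type BackendPolicy struct {
	MaxSensitivity int // highest sensitivity this backend may receive
	MaxNoisiness   int // highest noise level this backend tolerates
}

// Accept applies both threshold checks: drop events the backend is not
// trusted with, and drop events noisier than it wants.
func (p BackendPolicy) Accept(e AuditEvent) bool {
	return e.Sensitivity <= p.MaxSensitivity && e.Noisiness <= p.MaxNoisiness
}

func main() {
	sink := BackendPolicy{MaxSensitivity: 1, MaxNoisiness: 2}
	events := []AuditEvent{
		{Verb: "get-secret", Sensitivity: 3, Noisiness: 0},   // too sensitive
		{Verb: "create-event", Sensitivity: 0, Noisiness: 3}, // too noisy
		{Verb: "create-pod", Sensitivity: 1, Noisiness: 1},   // passes
	}
	for _, e := range events {
		fmt.Printf("%s accepted=%v\n", e.Verb, sink.Accept(e))
	}
}
```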
G: Because I could definitely see a problem if someone installing a webhook thought that they had a full picture of activity in the cluster. It's really different if a hook gets an audit event that says: this user made this request and, due to cluster policy, I can't show you the body of the request. That at least gets sent to the webhook, so they know that the request occurred and they know that they weren't sent the contents. Yeah, that's different.
H: What I don't want to be able to do is say: I'm going to create a privileged pod that takes over the node that it runs on, and I'm going to label this as sensitive, so you won't actually see the body of that pod. So sensitivity, I think, shouldn't always be in the control of the user creating or owning the object, right?
G: If you don't let it be changed, or you have controls around how it can be changed, then you wouldn't have to worry about: I set up a CRD and I said it was sensitive, and then somebody came along later and said, oh no, it's not sensitive; or, going the other way around, an attacker masking their tracks. I don't know.
G: Yeah, I think putting it at the level of the resource author. So for the built-in things, either making a determination, saying Secrets are sensitive and other things are not, or tying it to the things that are encrypted at rest, or maybe having a separate control, I don't know. For CRDs, I could see it being on the CRD object, I guess.
C: I generally get the feeling that we're kind of doing a lot of speculation without customers, let's say users, the people who originally proposed this; we're kind of building this on behalf of people who would use it. If we were to come up with a set of proposals, who are we going to take them to to validate them and say: this is what we think; would this actually meet what you're trying to do?
I: That's basically all we do. We just take as many logs as we can, stuff them into a giant log analysis pipeline, and then keep them, depending on what the data is. If it's something, say for a corporate network, that we might want to do an investigation on later, then we keep them for quite a while.
F: Right, so yeah, we use them for debugging our clusters, and yes, we want to see everything in every namespace. I'm probably not going to keep the content of most of it, but I would see the flow. But I've also heard of people who would like to say, in their namespace: hey, I wonder what happened to me in my namespace. I don't personally know those people, and I have never been that person.
H: That's getting a little more into the multi-tenant audit logging case that I mentioned at the beginning, which I'd like to avoid thinking about. I guess that's where you say: okay, well, what if we have these different audit sinks and we don't trust them with all the data? Yeah.
D: The point is really that, with the proposal, you can have the cluster providers carrying their own thing, completely secure and isolated, with the static policy, and you can also have the users of the clusters, who could be administrators or something else, impose their own policy as per their organization's compliance rules.
G: That starts to get really complex and would be nice to defer, because there are so many different use cases; coming up with one true API to describe them all is difficult, as we've seen. So if you have, say, a debug use case, you really don't care about 99% of this API: you're going to do something like grab everything, or grab everything from one namespace, or grab everything from one user, and that's all you need.
G: But quantifying that, or seeing how difficult it would be to cover some of the simpler use cases: I think you could get some mileage on small to medium clusters. Most clusters are not gigantic; most of them are small to medium, and so knowing what you could do with the tools we have today would be helpful.
J: These are going to be really quick, and we can dive in next meeting. I'll do the credential provider extraction one first: I think this is probably known to the people here; this is just a KEP, so we can discuss it after people have had a chance to review it. And then the next announcement was: I'm working on adding OSS-Fuzz coverage to some of our serializers. There's already a setup for YAML, sigs.k8s.io/yaml and yaml.v2, and those run under OSS-Fuzz.
J: The targets are in the test/fuzz directory. I'm going to add a lot of documentation on how to write your own targets, but if you do write targets in that directory, they'll get run a lot of times, continuously; that's the eventual goal. So let me know if you're interested in learning more, and I will go into more detail, maybe next meeting.
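As a sketch of what such a target looks like: OSS-Fuzz Go integrations have used the go-fuzz entry-point convention, a `Fuzz(data []byte) int` function the fuzzer calls with mutated inputs. Stdlib `encoding/json` stands in here for the YAML serializers mentioned; the actual Kubernetes targets differ:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Fuzz follows the go-fuzz entry-point convention: it is called
// repeatedly with mutated inputs. Returning 1 marks the input as
// interesting (it parsed), 0 as uninteresting. The fuzzer is hunting
// for panics and crashes inside the parser itself.
func Fuzz(data []byte) int {
	var v interface{}
	if err := json.Unmarshal(data, &v); err != nil {
		return 0
	}
	// Round-trip check: anything that unmarshals should marshal back.
	if _, err := json.Marshal(v); err != nil {
		panic(err)
	}
	return 1
}

func main() {
	fmt.Println(Fuzz([]byte(`{"a": 1}`))) // valid input: prints 1
	fmt.Println(Fuzz([]byte(`{`)))        // truncated input: prints 0
}
```

A seed corpus of real inputs (like the YAML test data mentioned below) gives the fuzzer good starting points to mutate from.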
J: OSS-Fuzz is a project that runs continuous fuzzing on a very large cluster; I think there are about 25,000 machines. It's part of Chromium security, and they accept open source projects. So if we write targets, they will throw random data at them, and they find a lot of bugs, and it's free, so we should do it. I think that we've had a number of incidents that might have been helped by a little better fuzzing: maybe the JSONPath one, and maybe the recent YAML one. Not sure; I'm not sure till we try it.
J: Yes, so I'm looking forward to people writing targets. It's not easy now, because there's no documentation, but I'll add that. And then, for building a nice corpus: luckily there's already a really nice one in the sigs.k8s.io/yaml test data that I'm going to use for some of the initial targets. And then I have a doc with some potential targets, which I will also publish to give a starting point to people that are interested in writing targets.