From YouTube: Kubernetes SIG Auth 2020-04-29
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 2020-04-29
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/preview
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
B
About this a little bit; yep, I think we're pretty good at this point. We've gotten a lot of feedback on this PR, but I'd like another set of eyes. I think I'm the only one who can figure out who's reviewed it. So if anyone else wants to take a look and make sure that we're covering all the bases, in terms of things that need to be touched in the correct order, that'd be great.
B
Yeah, I think the other reason to bring this up is just that there's not currently a guide on how to rotate the CA for your cluster. So I think this is a very valuable guide to have, but we want to make sure that it's comprehensive before we put it up there, so that people don't end up with a nominally comprehensive guide.
E
Anyway, but yeah, wow, it only took 151 lines. All right, I will put this on my list to read through. I also concede it's fairly difficult. There are going to be overall questions, like: do we want to document how you can rotate individual, distinct chains of trust? Because, you know, not all trust in the cluster is necessarily uniform.
C
I can't remember if I had this in the last session or not, but this is calling out the v1 API differences. These are things we had talked about when we were adding the multi-signer support: better validation of the status certificates, and then a couple of invariants around not being able to deny and then approve, or approve and then deny, because watch-based signers racing to deny after an approval is dangerous and misleading. So there are a few invariants like that it discusses adding; there's been a fair amount of attention on it.
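The approve/deny invariant being discussed can be sketched as a small validation model. This is an illustrative sketch only, not the actual kube-apiserver validation code; the condition names mirror the CSR API, but the function is made up for this example:

```python
# Illustrative model of the CSR status invariant discussed above:
# once a request is Approved or Denied, the opposite condition may
# never be added later, so a signer cannot race to deny an approval.
APPROVED = "Approved"
DENIED = "Denied"

def condition_update_allowed(existing, incoming):
    """Allow a status update only if it never produces a CSR that is
    both Approved and Denied, in either order."""
    combined = set(existing) | set(incoming)
    return not (APPROVED in combined and DENIED in combined)
```

With this check in place, a watch-based signer that tries to add `Denied` after an approval is rejected rather than silently flipping the request's state.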
D
So I made the change; I got the initial change already merged for this to be in the new format, and then I started adding sections to comply with the overall intent of the new KEP format. I've had some feedback. Mike, I haven't looked at yours from today; I didn't see it yet.
D
So if folks are interested, please do look at it. There's one API change, there are updates to the information to try to make it more current to what we actually have, and there's discussion on environment variables. I don't feel particularly strongly; I would just like to do whatever we think is the right and safe thing.
D
Part of it is, I tried to go in; there were some pieces that weren't necessarily super clear, and I just tried to add more words and more robustness to try to handle all the cases. And I think it was Andrew's concerns about me dropping the TLS bits that might be useful, which I responded to; particularly that something not being GA does not comply.
A
So yeah, I think we're just brainstorming. Like, not letting the kubeconfig be world-writable is an example; making sure we exec binaries that are not controlled by other users is an example; or how we traverse to binaries not controlled by unprivileged users. There's a lot of validation that we could do before we exec a binary that we don't do, kind of like SSH refusing to use a private key whose permissions are not set correctly.
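The SSH analogy can be sketched concretely. The function names and the exact policy below are hypothetical, a minimal model of the kind of pre-exec validation being brainstormed, not anything kubectl actually does today:

```python
import os
import stat

def mode_is_safe(mode):
    """A file mode is safe when neither group nor other can write,
    in the spirit of SSH rejecting an over-permissive private key."""
    return not (mode & (stat.S_IWGRP | stat.S_IWOTH))

def refuse_unsafe_kubeconfig(path):
    """Refuse, before use, a kubeconfig (or a credential-plugin
    binary about to be exec'd) that other users could tamper with."""
    if not mode_is_safe(os.stat(path).st_mode):
        raise PermissionError(f"{path} is writable by group/other")
```

The same check could walk each directory component of the binary's path, which is the "how we traverse to the binaries" concern above.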
A
I don't know what all the changes are, but after we brainstorm, we decide which ones are worth it, or whether there are options that wouldn't break anybody; which seems unlikely, because we are trying to restrict behavior, so it is unlikely that it couldn't possibly break anybody. But the ones where it is likely that it won't break anybody are the ones that we should probably consider. But yeah, I think we need to brainstorm a little bit before we move forward.
H
Yeah, I mean, certainly workload portability was the primary goal of conformance originally. There's certainly an element of: if we projected kubeconfigs into pods, that would be different. I mean, it's definitely a part of the ecosystem, but it hasn't traditionally met the bar of what we would have called conformance. That doesn't mean that conformance couldn't expand, but I would take it to the conformance working group and ask.
H
Conformance is not required to accept those tests into conformance for GA, in the subtle case where we might decide that it might not be acceptable for all conformant distributions. I don't think I've parsed that last statement; you're required to prepare tests that could be suitable for conformance?
H
The conformance group could decide not to mark that as conformance, because it's unacceptable for conformant distributions, right? Basically, test coverage is required, and suitability for inclusion into conformance is required, but entrance into conformance is not required for you to be GA. Although there's a certain sense of: if the API is such that it would be intended to be the kind of thing conformance covers, and we decided not to include it...
H
...that would probably be a misstep; like, we missed something in the KEP process. An example is Pod Security Policy: had that gone to GA, that would have been one where a conformant distribution was not necessarily required to turn it on, and we may not have agreed to change the definition of conformance to require that you use out-of-the-box Pod Security Policy, but the behavior of the API should have been covered by conformance. So there's just a ton of nuance in it.
D
There's a new API in the authentication Kubernetes group; I think we call this the type declaration bit, a union, like a union discriminator. I'm trying to remember exactly what it is, but the general idea is: you can specify some kind of dynamic authentication config, and based on the type, one of the other three fields gets to be invoked; so, for example, one for CA bundles, one for OIDC.
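The union-with-discriminator idea can be sketched as a tiny validator. The member field names below are invented for illustration; the actual proposal defines its own fields:

```python
# Hypothetical discriminated union: `type` names exactly one member,
# and only that member may be populated.
MEMBERS = {"caBundle", "oidc", "webhook"}

def validate_union(config):
    """Return True when exactly the member named by `type` is set
    and every other member is left unset."""
    discriminator = config.get("type")
    if discriminator not in MEMBERS:
        return False
    populated = {m for m in MEMBERS if config.get(m) is not None}
    return populated == {discriminator}
```

This is the usual API-machinery convention for unions: the discriminator makes it unambiguous which member is in effect, even if a client accidentally sets two.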
D
It's similar to what you can specify on the CLI, except it uses the format for how client configs are done for webhooks in admission. I don't know if you would just be able to define the type here, but I didn't want to get too much into that. My desire there is: we have the distinction between service references and URLs for admission webhooks, and I think I would like to have the same here.
D
That's the general high level. There have been discussions, but I haven't made any changes to the API since the discussion started, just because I want to let people have a chance to look at it. I think the primary thing I would want to change is that I did not include any language around protection of the API, and I would definitely want something similar to the RBAC role aggregation check.
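The kind of protection being referenced is an escalation check. A simplified model of the idea (not the actual RBAC implementation): a requester may only create a rule granting permissions the requester already holds.

```python
def may_grant(requester_permissions, rule_permissions):
    """Escalation check: permit creating a rule only when its
    permissions are a subset of what the requester already holds."""
    return set(rule_permissions) <= set(requester_permissions)
```

Applied to a new authentication-config API, the analogous idea is that writing the config should require some explicit, narrow grant rather than falling out of broad wildcard access.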
A
It's like, theoretically, everything here could be done with CRDs and a front proxy. I think it would behave and be handled a little bit differently, but nothing here is incompatible with a CRD-based approach, and I think working out the usability problems or development problems with the front proxy approach would just be beneficial in the long run. Does that make sense? Yeah.
D
It comes down to a variety of things: seamlessness, certainly, and protections. There are protections you can do for native types that you can't do on CRDs, right? Because a person can just remove any webhook that you have, like an admission webhook that's trying to protect the CRD from malicious behavior. This is a cluster-admin API; there's no other way to look at it. And to a certain degree, right, with the proxy approach, you could say that you could do that for RBAC...
D
You
could
do
that
for
admission
right,
but
it
it
pushes
the
burden
too
hard.
On
the
other
end,
right,
like
you,
have
to
basically
make
this
perfect
proxy.
That
has
like
an
enormous
security
surface
and
you
get
to
own
the
whole
thing
for
some
reason.
Like
I,
like
I
I,
don't
really
see
this
meaningfully
different
than
like
mission
led
books
right.
D
If you just want some CA bundles to be supported, you can run anything at all and have the verification happen there. Unless a significantly different network architecture is at play, I get to trust the things I trust, which is: your API server, the identity provider, whatever you want to trust; certainly, obviously, in many cases.
A
D
D
Well,
supporting
resting
right,
that's
what
I'm
trying
to
get
right!
I'm,
not
saying!
Let's
build
like
web
hooks
like
I,
don't
know
like
let's
not
build
like
our
back
v2
or
like,
let's
not
build
like
a
lot
directly
into
the
API.
So
nothing
like
that
I'm
saying
guarantee
a
seamless
integration
in
any
right
like
if
I
want
to
have
an
opinionated
aspect
that
I
use
across
providers.
I
can't
really
do
that
today,
right
I,
don't
want
to
use
an
impersonation
proxy
I,
don't
want
to
tell
people
hey
if
you're
a
user.
G
Yeah, one of the comments was about, you know, why would you give... since this thing would be cluster-admin permission only, why would you give users cluster admin? I just want to call out that GKE, for example, does make a distinction between sort of operators and admins, where, you know, today we don't have this available to our customers, even if they are cluster admin.
G
Yeah, and I think Mike had called that out as well. So there's a couple of ways to do that: we either build it into the API and provide a way, sort of not through RBAC, to limit how this API can be used. So you could say, I don't know, ordering or restrictions, where a cluster operator might be able to say, no, I'm not going to let you throw in random authentication webhooks by accident, but...
D
The conversation I would want a GKE customer to have with you guys is: hey, I want to use this feature; are you willing to let me use it? No? Okay, can you go do this work for me? Great. If this thing GAs at some point, I would want it on by default, but I want all the knobs that make operators comfortable, and if they decide they're not comfortable, they can find it and turn it off, and then you can have a conversation with your end users.
G
We definitely had this conversation, because we get this request a lot. A lot of times people say, you know, can you just open up the authentication CLI flags to us? And we're like, whoa. But I think Mike's point is: that is the thing that people want, and it is the thing that we want as maintainers of this; a fully featured RESTful API, it might be, I don't know. But what they really want is just: I want my Kubernetes to work with my authentication stack, and there's just a question of...
H
...what the surface area of Kubernetes is when you cross the boundary between workloads and infrastructure. It's just not a crisp line, and that's fine; the reality is that plenty of people get huge amounts of value from Kubernetes without that line being there. But this argument is just: we don't have a principle that guides how we configure admission webhooks, how we configure authorization webhooks, how we configure audit and our audit sinks. We don't have a way to separate infrastructure running on the cluster. We don't have a way to restrict...
H
...who can... can you gate or logically separate out the powers of cluster admin? I'm not necessarily advocating that we have to go do that, but it does kind of come back to: most of these issues come down to the fact that for anything infrastructure-related, there's no guiding principle. We have a guiding principle for every workload API, even if it sometimes isn't a great one, like: use the namespace. Service accounts are tied to a namespace, secrets are tied to a namespace, and the kubelet will restrict these things.
H
Everybody's going to have different needs. Mo and I probably want to go do things with self-hosting and running clusters, allowing configurability that GKE or EKS or an on-premise provider might not offer. If we could come up with a way to have those principles, would that make us happier? Or is it just too much work to go have those principles, and so we'll spend the time arguing on this stuff without a clear guiding direction?
D
Yeah, I guess I would ask how this is meaningfully different from admission webhooks, for sure, but even, to a certain degree, from RBAC, right? You can create an RBAC rule that lets system:authenticated do anything, and now authentication is basically no longer functioning, because authentication is just saying that you're not nobody, not system:anonymous, and you can do anything. Okay, so I guess I'd be very curious to know what we are trying to defend against.
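The system:authenticated example can be made concrete with a linter-style check. This is an illustrative sketch of the misconfiguration being described, not a real tool; the function and its inputs are simplified stand-ins for an RBAC binding:

```python
# Granting wildcard access to every (un)authenticated user makes
# authentication effectively meaningless, as discussed above.
BROAD_GROUPS = {"system:authenticated", "system:unauthenticated"}

def binding_defeats_authn(subjects, verbs, resources):
    """Flag a binding that gives wildcard verbs on wildcard
    resources to a catch-all group: authn then gates nothing."""
    broad = any(s in BROAD_GROUPS for s in subjects)
    wildcard = "*" in verbs and "*" in resources
    return broad and wildcard
```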
G
Now I can see it on my... I'm using the Zoom web app for the first time; that's not going great, by the way. So, to try to make it concrete: let's say I am a GKE user at some large company, and I'm telling all these smaller teams to go do stuff in their GKE clusters, and I'm in charge of maintaining security policy in all those GKE clusters.
G
I,
as
a
gke
customer
in
charge
of
maintaining
security
policy
would
be
scared
if
users
could
randomly
start
authenticating
using
their
Instagram
accounts,
and
so
so
I
want
gke
to
give
me
a
way
to
prevent
that
from
happening.
Now,
all
that
is
doable
and
we
could
figure
it
out.
It's
just
right
now.
It's
easy
because
it's
just
not
configurable.
D
But
I
would
I
would
ask
how
is
that
neat
like
so
the
person
you're
defending
against
at
least
on
those
clusters
has
to
have
cluster
admin
right?
That's
that's
what
we
were
asking
about.
How
is
that
different
from
that
person
setting
up
an
impersonation
proxy
on
those
clusters
with
cluster
admin
rights
and
then
just
telling
people
hey
yeah,
you
can
you
can
use
this
thing,
I
set
up
and
you
can
log
in
with
Instagram
and
at.
D
But I don't think we need to be so cautious with audit annotations, right? Being able to say, yeah, this person authenticated via OIDC, and heck, maybe here's the issuer URL and when their credentials expire; those types of things are just metadata. I don't particularly find it to be secret, but from an auditor perspective, I would want to know. Before we removed basic auth, right, if you were using basic auth to authenticate to the clusters, I'd probably want to know and be like, hey...
H
...the way that it acts as a controller: it has access to all secrets, and it can become every controller manager in the system, until we get that closed. I mean, that's what I mean. My principle is that we're at the limits: every one of us is trying to improve, in some dimension, the operational usability of Kubernetes for features that involve things that aren't clearly workload...
H
...APIs. Every API we've discussed here is an administrative or infrastructure API, not a workload API, and we don't really have a hard set of rules or guidelines to say what our roadmap is for that stuff. We all want things, but we don't have a clear "we agreed to do these things in this way, from these principles" that each KEP can base off of.
H
I don't know, guys. I mean, I'd love to see subdivision of namespaces and better isolation within a set of core resources, but it's a really complex design, and without a set of principles there's no point in even trying to go further on it. Being able to hide a namespace from the set of namespaces that cluster admins can edit: without a proper set of principles, there's no point in even doing that design, because then we just get into this kind of discussion of, well, does this make sense for a workload?
H
...program flags, and we just call it a day; loosely deployed controllers as pods. And, you know, at some point, some of this is: if you're running as a service, you can have a hard boundary, and everybody gets to define what hard boundary they're comfortable with; and if you're not running as a service, you're going to have to give up some of that. I think I'd like to see us having better hard boundaries between applications on a cluster, and I think that overlaps a little bit with what we're talking about.
A
For the x.509 authenticator, speaking of specifics here: I think the x.509 authenticator makes deploying a front proxy challenging for anybody, infrastructure providers or just end users; more challenging than it was, and it was potentially already pretty challenging, because now we have to plumb any new client CA bundles into the serving stack, which is maybe non-trivial. For the webhook mode, I'm particularly concerned with...
A
Funneling,
like
secret
material
back
back
at
some
user,
controlled
end
point
Ford,
like
GK,
you
use
first
party
credentials
to
talk
to
kubernetes
api
servers
and,
like
the
model
there,
that
you
have
like
cloud
platforms,
go
up
to
access
token
talk
to
a
kubernetes
api
server.
I
don't.
We
would
need
to
be
very
sure
that
those
aren't
being
forwarded
down
into
a
cluster.
When,
for
example,
like
a
web
hook,
we
did
a
web
hook
error
on
our
static
web
hook
up
indicator.
A
I
config
there
is
like
back
to
your
points
about
having
a
conversation
with
your
infrastructure
used
by
customers
having
a
conversation
with
their
infrastructure
providers.
I
do
think
that
there
is
a
model
here
that
we
can
potentially
get
to
where
that
conversation
doesn't
even
need
to
occur,
because
infrastructures
cannot
stop
you
from
deploying
a
front
proxy
I
do
think
it's
worth
pursuing
because
it
is
the
it
is
the
like
one
way
that
authentication
extension
can
be
achieved.
A
There's the GKE, or Kubernetes, OIDC proxy; there was one presented in the SIG Auth meeting that we recommended; I think Gravitational has one for certificates. So there is an ecosystem, and it is what we've recommended when people have asked about this type of extension in the past. I think the developer experience can be improved, and I think that's probably worth it, because it has all the benefits of this proposal, and we need to work to limit the drawbacks.
I
Can you hear me now? Yeah? Okay, sorry, you probably didn't hear my earlier comment either. I'll leave my comments about the authentication proposal on the PR, so we can have that discussion there. Yeah, so we're going to pick up the conversation from last week. We wrote up two one-pagers elaborating on the proposals that we discussed last week; I'm not sure how we want to go about discussing these here.
J
At least from my perspective, the big one is just getting this configured on existing clusters. I know we've talked about other ways of doing that, like maybe a config file that gets reloaded without the API server restarting.
J
You
do
have
to
turn
on
all
the
events
in
that
case,
you're
always
sending
out
all
the
events
to
that
piece
which
it
seems
like
that's
gonna,
be
a
hard
sell
for
the
provisioners
I
just
knew
and
going
around
trying
to
get
dynamic.
You
know
the
Alpha
stuff
configure
for
people
in
the
in
the
provisioners
is
pretty
hard
to
just
get
them
to
turn
a
flag
on
a
couple
of
flags.
This
was
pretty
challenging,
so
I
think
you
know
adding
a
whole
thing
and
then
always
sending
all
the
events.
Never
did.
J
...any given namespace, but then the policy would be applied from that pod out to whatever sinks it's sending to. On the other side, the other version, you would have a static policy, kind of, in the node; it could have dynamic sinks, and they would just be emitting whatever events pass that static policy. I guess the idea here is: we got kind of hung up on the policy stuff, and having a static policy also solves some of the performance issues you ran into.
I
Just to clarify: in the other proposal, you also have a static policy that's applied before the events are sent to the proxy. I think... Mo, I thought we were doing that. Okay, yeah. So I think the only difference, really, between these two proposals is whether the dynamic audit proxy is a separate process or a separate pod, or whether it is running in the API server process.
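The piece both proposals share, a static policy applied before events reach any dynamic sink, can be sketched like this. The level names follow the Kubernetes audit policy API, but the filtering function itself is a simplified illustration (real audit policy truncates per-rule rather than filtering by one global level):

```python
# Audit levels in increasing verbosity, as in the audit policy API.
LEVELS = ["None", "Metadata", "Request", "RequestResponse"]

def filter_for_sink(events, max_level):
    """Apply a static policy before forwarding to a dynamic sink:
    drop level-None events and any event richer than `max_level`."""
    cutoff = LEVELS.index(max_level)
    return [e for e in events
            if 0 < LEVELS.index(e["level"]) <= cutoff]
```

Because this filtering happens once, up front, the API server (or proxy) never ships high-volume RequestResponse payloads to sinks that only asked for metadata, which is the performance concern raised above.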
J
...you know, who did what, when; looking for anomalies; and you can also store that data. Sort of drop-in plugins was kind of our main use case. And then Christine joined today, but I know she's running Falco or something, where they're looking... you know, you want to take the audit log, write some rules, and scan the logs for potentially malicious activity going on out there. So, to have something like that...
J
...you basically need to have a cluster that was already configured in a manner that was putting all these logs out to a pod, all the time. I just think that might be a hard sell to the provisioners, based on what I've already gone through trying to get it enabled; let's see if there's a way to kind of make that easier.

I
I'm a little concerned; this kind of gets back to Clayton's earlier point, actually, about how we are defining the boundaries of the infrastructure here. I'm a little worried that this sounds kind of like: the cluster provisioners aren't doing this right now, but in the future we want it, so let's make it a mandatory core feature.
J
That'll
be
mandatory
at
all,
I
just
mean
it's
really
hard
to
even
get
them
to
turn
on
a
flag
right,
more
or
less,
to
have
the
conversation
that
you're
gonna
be
sending
all
of
your
audit
events
to
separate
pod
somewhere,
all
the
time
just
to
enable
you
know,
sort
of
dynamic
syncs
out
of
it.
I
even
floated
the
idea
kind
of
internally
to
you
know,
people
that
we
would
have
a
high
amount
of
influence.
A
We have a DLP pipeline; I think it stands for data loss prevention. It looks for employee names that may be in there; say there was debugging, or a support person used the console to talk to the API server. We want to make sure that the privacy of employees is maintained when these audit events reach the customer, and we'd like to replace those with something like "google" at Google, or at google.com, or something; and also to make sure that we are not...
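A redaction step like the one described (replacing employee identities before audit events reach a customer) could look roughly like this. The regex and the placeholder token are made up for illustration; a real DLP pipeline is far more involved than a single substitution:

```python
import re

# Hypothetical pattern for internal identities in audit event text.
INTERNAL_USER = re.compile(r"[A-Za-z0-9._-]+@google\.com")

def redact_event(text):
    """Replace internal identities with a fixed placeholder so that
    employee privacy survives in customer-visible audit events."""
    return INTERNAL_USER.sub("redacted@google.com", text)
```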
F
Yes, so I wanted to bring up a discussion that we had earlier in the Policy working group related to these proposals. I think the question was: are either of these also looking at allowing different audit event sources to be configured, other than the API server? The example there would be policy engines, which want to report on compliance with policy rules, things like that.
A
So,
with
one
with
one
minute
left
with
Hall
senior
I
was
not
aware,
it
is
dark.
I
would
take
read
working
from
that
Patrick
and
Tim.
I
I think, in the interim, let's continue the discussion in the comments and try to come prepared for more discussion next meeting. In any case, none of these changes are going to make it into 1.19, so we have time to figure this out, but I would like to get a conclusion or a decision on what it is this cycle, so that we can maybe start...