From YouTube: Kubernetes SIG Auth 20170308
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 20170308
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/view#
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
A
We don't actually have any flake issues attributed to our SIG at the moment, so that's great. If you become aware of one, be sure you tag the SIG in it or mention the SIG, and we'll get it looked at right away. For bugs, there is only one that's potentially in the milestone right now: the OpenID Connect bug. Eric, do you want to talk to that real quick? Yeah.
B
It's just a bug on our side with verifying the metadata that's returned during discovery. We only hit it because somebody reported hitting that corner case, and the fix doesn't have any functional changes, because we don't actually use the field that we're verifying. The author is just getting set up and having a little trouble with GitHub, that's all.
A
A summary on RBAC: I've posted in the Slack channel a link to the 1.6 documentation, and that's also linked from the agenda. Feedback on this, like I said, is welcome: you can give feedback in the Slack channel, open issues in the docs repo, or, if you want, open a PR yourself.
A
So,
let's
just
be
aware
of
that,
if
you
want
to
open
a
PR,
the
different
deployment
methods,
cube,
Adam
and
cube
up
are
enabling
are
back
by
default.
Cops
is
proposing
to
enable
by
default
they
do
their
one.
Six
development
work
after
cube,
actually
cuts
the
one
six
release,
so
that's
a
link
to
their
roadmap,
and
it
looks
like
they're
interested
in
enabling
about
a
twelve,
which
is
great.
A
We will have some work to do for the 1.6 release notes to get people pointed to the documentation if they're using RBAC. One of the things in the linked documentation is a migration strategy: if you're upgrading from a previous release, how you get from an insecure or really permissive policy to a more controlled RBAC policy. There are a couple of strategies discussed there. Any time you tighten permissions it can be disruptive, so we want to give people pointers to the steps that will work best for them. If they don't care about scoping permissions, there's one step they can run to get back to fully permissive behavior. If they do care about scoping, we want to walk them through that.
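For reference, the one-step permissive fallback looks roughly like the following. This is a hedged sketch along the lines of what the 1.6 RBAC docs describe; the binding name is illustrative, and since it grants cluster-admin very broadly it only makes sense as a temporary migration step.

```sh
# Hedged sketch: opt back into fully permissive access while migrating.
# Grants cluster-admin to common legacy identities; temporary use only.
kubectl create clusterrolebinding permissive-binding \
  --clusterrole=cluster-admin \
  --user=admin \
  --user=kubelet \
  --group=system:serviceaccounts
```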
A
I thought we would spend the rest of the time talking about whatever people want to talk about; that seems to be mostly 1.7-ish stuff. If people have longer-term stuff they want to talk about, we can certainly add that to the end of the list. There are a lot of things we could go do, so knowing what people are planning to do is helpful, so we don't duplicate effort, and so we know the areas where we might need some help.
A
If you're looking for something to work on and aren't sure what to work on: the first three things I threw in the list are about taking the work that we're doing and helping other parts of the project, and people using Kubernetes, actually understand it and make use of it. Before we get to the next set of features or improvements we want to do, down at the bottom, I really do think we need to pay attention to how we document stuff: making sure people are aware of what needs to be done to harden a Kubernetes install, and what the different threat models are. We have pretty good admin-focused documentation, but essentially no user-focused documentation. This release is kind of the first release where it's been reasonable for an admin to set up a cluster and give users permissions in individual namespaces, and so we need documentation for those users that don't have the ability to go administer the cluster themselves.
A
Okay. As we go through these, if you are interested in helping or participating, speak up or add your name beside the item. Ideally we would carve out some of these areas and let groups of, you know, two, three, four, five, whoever wants to, obviously, just get a few people working on each of these areas and kind of split up, divide and conquer.
A
It would be great if we have people who are users of the various deployment methods who want to help contribute to those from an auth perspective. I don't know if we have kops users in the SIG, but if you are one, and you're familiar with kops and with the auth stuff, it would be great if you could take some of the best practices, the hardening steps, and at least inform, and ideally contribute, some of those to the different deployers.
A
So again, if you're a user of, or interested in working with, a particular deployment, throw your name in there and we'll try to organize that and make sure we're not duplicating effort. The last category I had was the add-on stuff. In 1.6 we've defined RBAC roles for all of the core controllers and all the control plane components, so the API server, the nodes, the controller manager, the scheduler, the proxy, and then a couple of the fundamental add-ons like Heapster and DNS.
A
The first step is to figure out what we would like add-ons to look like from an authentication and authorization perspective. That'll involve working with the add-on developers and the add-on manager, kind of working together to figure out how to accomplish that, and then putting together documentation, sort of a "so you want to write an add-on" document that says: one of the things you need to think about is what permissions your add-on needs, and how you get those permissions.
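As a hedged illustration of the kind of guidance such a document might give (every name below is made up, and the rules are deliberately minimal), an add-on would ship its own service account plus a role scoped to only what it actually reads or writes:

```yaml
# Hypothetical add-on identity: a dedicated service account bound to a
# narrowly scoped role, instead of running with broad credentials.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-addon
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1  # the RBAC group/version in the 1.6 era
metadata:
  name: example-addon
rules:
- apiGroups: [""]
  resources: ["pods", "nodes"]
  verbs: ["get", "list", "watch"]  # read-only: only what this add-on needs
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: example-addon
subjects:
- kind: ServiceAccount
  name: example-addon
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: example-addon
  apiGroup: rbac.authorization.k8s.io
```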
A
There are a few things that I think are must-haves for 1.7, feature-wise, and everyone probably has their own idea of what a must-have is. But we have to remember: if we go add new features and we don't document them and help people use them, then it doesn't matter that they exist, because they're not being used.
A
Again, before we move on to specific auth features: is there any other outward-facing engagement aspect that you think we're overlooking, as far as getting the right information to the right people so that they can make use of what we're building or have already built? All right, well, if you think of any, throw it in the list or bring it to the group.
A
Okay, let's move on to specific plans. Really this was just trying to capture work that is already in progress or that people are already planning to do. I know we all have a long list of things we wish existed, but I think first I want to find out what people are already working on or planning to work on, then what bandwidth people have to take on additional feature requests, and then prioritize among those. So I'll just go really quickly through the first items here.
A
There are a couple of improvements that have been requested. One is the ability to use a particular pod security policy within a particular namespace, so that you could let someone run privileged pods, but only in one namespace. Then there's some work similar to what we did with RBAC, where we define a few sane default policies, say one that allows privileged stuff and one that allows only restricted stuff and restricted volumes in a pod, as part of making it more usable, so that users don't have to start from scratch.
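One plausible shape for the per-namespace case, sketched under the assumption that policy use is gated by an authorization check (the policy and namespace names are illustrative): grant the `use` verb on one PodSecurityPolicy via a RoleBinding that only exists in the namespace where privileged pods are allowed.

```yaml
# Hypothetical sketch: privileged pods allowed only in namespace "build".
# Assumes a PodSecurityPolicy named "privileged" already exists.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: use-privileged-psp
  namespace: build
rules:
- apiGroups: ["extensions"]        # PSP lived under extensions at the time
  resources: ["podsecuritypolicies"]
  resourceNames: ["privileged"]
  verbs: ["use"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: use-privileged-psp
  namespace: build
subjects:
- kind: Group
  name: system:serviceaccounts:build   # all service accounts in "build"
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: use-privileged-psp
  apiGroup: rbac.authorization.k8s.io
```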
C
Another question would be whether we even want to consider enabling that by default. Like the node authorizer: it's mostly a problem today because you can take over any node trivially if you're a pod. So what are the overlapping concerns? How does the security model of pod security policy need to be strengthened, defaulted, etc.? Whether we would consider doing an out-of-the-box secure policy, and whether everybody would complain if that was the default, which they would, yeah.
D
I think a lot of this comes back to working with the deployments, right? If you just start up and run Kubernetes raw, I'm not sure which mode I would expect it to start in. But if you run a kubeadm deployment, or a kube-up or a kops deployment, I'd probably expect it to start locked down, I think.
C
You know, it can escalate very quickly, and so it's kind of general hygiene, defense in depth. We at least discussed putting in a not overly complicated but reasonably secure option that someone might turn on to encrypt the secrets themselves in etcd. The proposal is fairly far along on the mechanics and some of how we might do this in storage.

An implementation of the storage layer is actually mostly done; it's not a very complex setup. The harder things are around key rotation, and how you actually then expand that security model. So one of the pieces of the proposal was to take this in baby steps: have the capability to do the simple encryption, manage the keys yourself, and then build from there over time.
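For flavor, a minimal sketch of what that opt-in, manage-your-own-keys step could look like as an API server config file (this mirrors the shape the encryption-provider proposal was heading towards; treat the exact kind and fields as illustrative):

```yaml
# Hedged sketch: encrypt secrets at rest in etcd with an operator-managed key.
kind: EncryptionConfig
apiVersion: v1
resources:
- resources: ["secrets"]
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded 32-byte key>   # operator-managed; rotation is manual
  - identity: {}   # fallback so data written before encryption stays readable
```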
E
Can I add something? Yeah, we chatted about this a little bit, and I think what we said was that we both kind of thought the evolution is not to continuously add more and better encryption features to Kubernetes core secrets, but, after this initial round of improvements in terms of encryption at rest, to begin relying more on external secret stores to provide more features. And I have got a roadmap plan for how we can do that, which I can share, yeah.
C
Good point. Breaking out the pieces so that they're not a big ball of mud from a security perspective is the thing that makes us more secure, and that fits with the other things: teasing apart the concerns kind of fits into the long-term question of how we can do this better and how we can be more flexible.
D
This will be cluster-wide? Okay, all right, I'll look for an issue and file one if I don't see one. Okay. As for the second step, I'm not sure how much I'm going to be able to do on it, so it's definitely help wanted if anyone else has interest. It should be very straightforward, especially if we get pod security policy enabled by default; the only complication is thinking about how to roll it out without breaking existing clusters, mostly around upgrades. Yep.
A
Yeah, I would encourage us to think about that as we are building out these things. One of the things we sort of lucked into with the union authorization stuff is that you can run your legacy policy in union with the RBAC stuff afterwards, and so you could log denials from RBAC but then fall back to your legacy policy. So we shortened...
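Concretely, that migration posture is just a union of authorizers on the API server; a minimal sketch (the policy file path is a placeholder):

```sh
# Hedged sketch: evaluate RBAC first, fall back to the legacy ABAC policy.
# RBAC denials show up in the API server logs while nothing breaks yet.
kube-apiserver \
  --authorization-mode=RBAC,ABAC \
  --authorization-policy-file=/path/to/legacy-abac-policy.jsonl
```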
A
One thing I didn't see in here that I kind of wanted to talk about (maybe this is part of the secrets stuff) is the idea of service account tokens that are generated but not persisted. Right now, service account tokens get persisted, then pulled down by the kubelet and mounted into pods. Setting aside what currently exists, it would be nice for correctly authorized parties to be able to ask for a service account token and have the cluster make one, with some record of that token persisted so that you could revoke it or see that it had been requested, but with the secret itself never persisted or retrievable. So it's a read-once type of operation. Eric, I don't know if you had that in your head for some of the secrets proposal stuff you were looking at, or whether we should set up a separate one.
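No such API exists at the time of this meeting; purely as a hedged sketch of the shape being described (the resource below mirrors the TokenRequest API that Kubernetes eventually shipped), the read-once flow might look like:

```yaml
# Hypothetical at the time: POST a request to a service account's
# "token" subresource. A record of issuance can be persisted and
# revoked; the token itself is returned once and never stored.
apiVersion: authentication.k8s.io/v1
kind: TokenRequest
spec:
  audiences: ["https://kubernetes.default.svc"]
  expirationSeconds: 3600   # short-lived, limiting the damage of a leak
```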
A
It's not urgent, it's just... yeah, we can talk about the general stuff and then see if that falls out of it. Eric Chang, you said probably not. The CRL stuff: is there someone who is familiar with CRL mechanisms who would like to work on this? This is something that I wish we had already addressed and would like to see addressed, but it's kind of perpetually not at the top of my list. Yeah.
A
That's unfortunate, I guess. I guess the more we get this client cert rotation in place, the shorter our client cert durations need to be. Some of the core ones, like the control plane and the bootstrap superuser, still might be difficult to rotate every four hours or something, but yeah. Okay.
B
That
was
our
general
attitude
to
we're
really
interested
in
seeing
couplet
server,
sir
good
strapping
in
one
seven.
So
it
sounds
like
that's
already
being
worked
on
based
off
of
conversations.
We
had
actually
big
issues
with
doing
the
couplet
API
authentication
in
like
auto-scaling
groups,
because
it's
hard
to
uniquely
generate
starts
without
knowing
beforehand
what
the
IP
is
going
to
be.
A
The certificate approver will only approve serving certs for nodes whose subject alt names match the IPs that the node is claiming. I think the pieces can start to fit together into sort of a coherent picture, but yeah, you kind of need several pieces. And Jacob, you're actually working on the server certs stuff; I know I had the client...
D
So actually, when I was first developing this particular rotation bit, I worked on the server side, and I have something that's fairly close right now; I have a branch that needs to be cleaned up a little bit more. Okay, and then, in conjunction with that, there's the kubelet client certificate rotation that I'm going to be doing.
A
So this is the CSR approval, the auto-approval. Right now it is based on a flag passed to the controller manager, but I think we should switch that to be an authorization check. We would do the same detection we're doing now to say: is this a node client cert request, does it have the characteristics that we expect for a node client cert, and if so, is the user that submitted this CSR authorized to ask for a new client cert? So we would change that from a hard-coded check against a flag group to an actual authorization check, and then add a similar detection to say: does this look like a node server cert request, and is the user that requested it authorized to do that? I plan to work on that, and I forgot to add it.
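A hedged sketch of what the authorization-check version could look like (the subresource naming follows the convention that later landed for node client certificates; all names here are illustrative):

```yaml
# Hypothetical sketch: auto-approval gated by RBAC instead of a
# controller-manager flag group. A CSR that looks like a node client
# request is approved only if its requester can "create" this subresource.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: approve-node-client-csrs
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/nodeclient"]
  verbs: ["create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: approve-node-client-csrs
subjects:
- kind: Group
  name: system:bootstrappers   # illustrative group for nodes joining the cluster
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-client-csrs
  apiGroup: rbac.authorization.k8s.io
```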
C
Yeah, I think there's a little bit of overlap with that one from the secret segmentation under Eric's secrets roadmap, but I think there's an orthogonal one. We discussed this on a previous call, but as we get more and more integration-type scenarios, we really only have two buckets for integrators to use: they can integrate with the whole cluster and basically see everything, or they can integrate with one namespace at a time, which means fan-out queries for every single namespace that they want to integrate with. As people run larger and larger clusters, and as people want to segment who can do what, it will be very difficult to effectively build integrations. Also, as we move towards self-hosting, you want to run almost everything on the cluster, except you don't want it accessing the self-hosted controllers, because then it can get access to the service accounts, and then it can root your cluster, etc.
C
I think it's worth mentioning at least the admission control extension. That will be in SIG API Machinery, but for people who want to build policy additions that aren't covered here, the hope would be that the API machinery is able to assist people who want little custom policies that aren't captured in the list of things that we would go build.
G
I just wanted to assess kind of what the priority of this type of thing is. I put my name down as a person that would be interested in helping to implement parts of this, albeit not being very familiar with the SIG Auth type of code. The immediate use case that we had for this is thinking forward towards multi-tenant clusters and whatnot. You can block different namespaces from accessing other namespaces' DNS, but there's still an information leak: you can still probe DNS and find out other people's services. When we first proposed this to SIG Network, it seemed to kind of bounce back, like, well, what if we had a way of controlling access like this generically? It got brought up on a mailing list somewhere, and then, Clayton, I guess you had already been pondering this for a while, so this issue is open. I'm definitely interested in helping to work on this; I'm not sure if the napkin design is just that, or if it's like J.K. Rowling scribbling out some great stuff, but it's going to take a while, yeah.
C
The DNS stuff was definitely the very specific one that makes it a little bit harder, because, yes, I think the use cases this was targeted towards were a little bit more on the API side. DNS is definitely an API, but because it's on a different type of server, it's a little bit more complex, so I didn't want to say the DNS stuff depended on this.

There's some overlap in terms of how people think about it: if you're going to do deep segmentation, and you're not exposing the API to end users, it's less of an issue. This is a little bit more: if you have N users using kubectl directly, how do you start getting some subdivision beyond just the namespace?
G
Right, and I guess that part of it plays into the multi-tenant part that we wanted too. So, for example, say you have one particular team that's responsible for service A and another responsible for service B, and they need to communicate with each other, but with no other services. That's where I thought the project, or organization, or whatever the incubating name was, would be helpful, because you would be able to wrap together different namespaces.
A
I mean, one of the things we talked about specifically around the network stuff, the network and DNS issues, is requiring things to declare what services they are using, and that gives a point at which to apply policy. So if policy says pods in this namespace can only consume services in that namespace, and you try to record that you consume some service in some other namespace, policy can prevent you from recording that. And that gives the DNS server or the kube-proxy specific, persisted objects that say this pod should only be able to talk to these services, or ask DNS about these namespaces, because we've persisted the services that we depend on. Until we have a persisted declaration of what things we are consuming, I think applying policy at a DNS server or a kube-proxy just sort of out of thin air is going to be really tough, because it then has to authenticate essentially all requests to it and figure out who you are, and DNS requests don't normally carry authentication. So I...
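Purely hypothetical, since no such resource exists: a persisted declaration of consumed services might look something like the sketch below, giving the DNS server and kube-proxy a concrete object to enforce instead of authenticating queries out of thin air.

```yaml
# Made-up resource, for illustration only: a namespace declares which
# services its pods consume; policy can reject the declaration itself,
# and DNS/proxy can answer only for what was declared.
kind: ServiceConsumption
apiVersion: example.k8s.io/v1alpha1
metadata:
  name: frontend-deps
  namespace: team-a
spec:
  consumes:
  - service: db
    namespace: team-a
  - service: api
    namespace: team-b   # cross-namespace use becomes an explicit, auditable grant
```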
C
I think this is a challenge too with the way kube does DNS. We actually took a different approach in OpenShift, because the service proxy effectively has all the same info that you need for DNS, and because we had gone down the path of doing multi-tenant network isolation out of the box, where every namespace is hard-isolated from every other namespace.

DNS ends up resolving to the service proxy anyway, and they share the same data. The service proxy, or a local node agent, is the one who has the best info from a security perspective about where a DNS packet comes from. If you know where that packet comes from, and you can track it back to the node, then the query set you need to figure out the "who" is somewhat reduced: take the pod IP, ask what pods are on this node, okay, that's that pod, it's in this namespace, follow the links. The way that we implement kube-dns today makes this a bit more difficult, though certainly not impossible, and I think that's where it starts bouncing back to the networking team. So I think this is an unfortunate case where I think you could go build this today, but it's pretty hard to build in kube today, to get over the various humps, because it's not a clear use case for anybody who's not doing namespace isolation aggressively. Yeah.
C
It is, and the network egress policy is not quite that, but that has to be imposed coming out of the network namespace, so that is a spot where you could do it for sure. Like, if you had a virtual IP for every DNS server, every namespace's DNS server, you know, maybe you could figure out a way to pump it back to that or something.
C
I mean, I think they're both related, but if you wanted to get to that point, I would say: don't wait for the bulk access part. The reason I worry about that is that bulk namespace access control is a bunch of, you know, interesting storage-level and code-abstraction problems, as well as just plain old questions of how you can scale and what the pragmatic trade-offs are.

So, given the rate at which an average design proposal goes through Kubernetes, and given the number of thorny and/or argumentative and/or complex problems where nobody really feels comfortable saying they know the exact solution, I think this is about a hundred times more difficult than a normal issue, design-wise. Not to say that we can't or shouldn't go do it.
A
The network egress policy, pushing forward on the design for that with an eye towards how it would connect to authorization checks and to declarations of what services you consume, I think can be done in parallel. The bulk namespace stuff I see as a more convenient way to manage policy across a set of namespaces, but you could accomplish the same thing less conveniently by, you know, just setting the same policy in each of these five namespaces.
G
We've kind of just brought up the problem quite a lot, and I think it just got pushed out because there were some rumblings of this type of organization, or thing that combined together multiple namespaces, and that's why I think it got pushed, like: well, let's just wait for that type of thing. But yeah, I guess you could get quite far quite easily if you just say: okay, every namespace can only see DNS entries within its own namespace.
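The closest thing expressible with network policy is an egress allowlist; here is a hedged sketch using the egress shape that landed later in networking.k8s.io/v1 (the namespace label is the well-known one and is an assumption here). Note this constrains where packets can go, not which DNS entries resolve; hiding entries would still need support in the DNS server itself.

```yaml
# Hedged sketch: pods in "team-a" may talk only to their own namespace
# and to cluster DNS; all other egress is denied.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: same-namespace-plus-dns
  namespace: team-a
spec:
  podSelector: {}           # every pod in team-a
  policyTypes: ["Egress"]
  egress:
  - to:
    - podSelector: {}       # any pod in the same namespace
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system   # assumes the standard label
    ports:
    - protocol: UDP
      port: 53
```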