From YouTube: Kubernetes SIG Auth 2021-09-15
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 2021-09-15
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/preview
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
A: Hi everyone, welcome to the September 15, 2021 iteration of SIG Auth. Thank you all for joining us. First, for the PRs of note, it looks like we have a new KEP for the Secrets Store CSI Driver. Okay, cool.
B: Yeah, this one we had as an issue in our backlog after we moved it as a SIG Auth subproject, to add a retroactive KEP. So we just went through the process and added a KEP, and then also tried to answer some of the questions in the PRR as we are looking to go stable.
A: Awesome. Who would be good reviewers for this KEP? I'm happy to review. Did you have anyone in mind?
B: Yes, I tagged you and Rita, because I think you were the initial folks who helped move the project under SIG Auth, and I also tagged Mo on this one. So.
A: Great, well, let's review it offline, and yeah, thank you for making sure that these changes are documented. Thank you, okay. So, let's start off with designs of note. Rob, do you want to... yeah.
C: Yeah, so we've got a lot going on in Gateway API. I sent an email out to the mailing list as well and got a bit of feedback there, but the high-level idea here is that we're trying to enable cross-namespace references safely.
C: This kind of need for, or desire to, enable cross-namespace references has also come up with the Bucket API, so I just linked that KEP there, and there's discussion somewhere down at the bottom where I think Tim references that, hey, this could be very relevant to what Gateway API is doing with ReferencePolicy, and Tim had suggested, hey.
A: Yeah, what's the best way for us to get an understanding of how it works? Do...
C: Your screen? Yeah, no, this is fine; what you have is fine. So in this example, really, a ReferencePolicy has a from and a to; that's all a ReferencePolicy is. The from includes a group, kind, and namespace, and the to includes a group and kind. So the ReferencePolicy exists in the target namespace, and it allows the target to say: I'm going to accept references from these kinds in these namespaces. And I recognize that for a more generic audience, resource may be a better fit than kind, but beyond that distinction, this is a fairly generic way of saying: I accept references from resources of this kind in this namespace. That's the high-level idea here. It's a fairly simple concept, I think, and it's very useful for Gateway API, but potentially useful elsewhere.
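A minimal Go sketch of the from/to shape being described here; the type and field names are illustrative, not the exact Gateway API v1alpha2 definitions:

```go
// Illustrative sketch of the ReferencePolicy shape described above.
// Field and type names are assumptions; see the Gateway API project
// for the real definitions.
package main

import "fmt"

// ReferencePolicyFrom identifies the kind of resource, in a particular
// namespace, that is allowed to reference targets in this namespace.
type ReferencePolicyFrom struct {
	Group     string
	Kind      string
	Namespace string
}

// ReferencePolicyTo identifies the kind of resource that may be referenced.
// It carries no namespace: the policy itself lives in the target namespace.
type ReferencePolicyTo struct {
	Group string
	Kind  string
}

// ReferencePolicySpec is just a list of allowed from/to pairs.
type ReferencePolicySpec struct {
	From []ReferencePolicyFrom
	To   []ReferencePolicyTo
}

func main() {
	// Example: allow HTTPRoutes in the "prod-web" namespace to reference
	// Services in the namespace where this policy is created.
	policy := ReferencePolicySpec{
		From: []ReferencePolicyFrom{{
			Group:     "gateway.networking.k8s.io",
			Kind:      "HTTPRoute",
			Namespace: "prod-web",
		}},
		To: []ReferencePolicyTo{{Group: "", Kind: "Service"}},
	}
	fmt.Printf("%+v\n", policy)
}
```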
A: Yeah, I guess this reminds me of... a similar model that I can think of, that I've seen used before, is to use the same authorization language, which in Kubernetes open source would be RBAC, but have something like resource principals. So I think the thing that's missing from RBAC is the ability to specify a resource as the subject of a policy. Yeah.
D: As far as the general-purpose nature of it, is there a concern that if you do this, you'll get bogged down later, or does something generic come up? There have certainly been... I've seen a couple of proposals like this over the years in various ecosystems around Kube. So it's not as if... it's kind of been punted; no one's actually gone through with it or extensively used it.
C: So I think this is really just... we've gotten feedback from API reviewers that this feels like it is not a problem unique to Gateway API. And specific to this Bucket API example here, this is one where they're trying to find another way to do cross-namespace references, and I think Tim is saying, basically: do we have to, you know, create this whole process all over again, or can we share a concept? Can we share a resource? Can we, you know... I don't know.
D: Yeah, I mean, I only bring that up in the context of: if you had to introduce a new, more general-purpose ReferencePolicy, there's nothing that prevents your controllers from also honoring that, so it's less painful than other types of... If we put in an object, maybe something will come along later to replace it; since there hasn't been a huge number of people who hit this problem, maybe the pattern right now is more interesting than the generic capability.

That would probably be my gut; I mean, I know people who want things like this in different contexts, but I'm not sure they want exactly what's there, so I couldn't even support a generic one with more than maybe a couple of examples, versus like 10 or 20 people who would all be banging on the door trying to get that.
E: I had a question about... so there's a discussion of encoding this in RBAC. What is the barrier to encoding it? I'm not necessarily saying that's a great idea, but it seems like you could just have... just like today users are referenced with system:serviceaccount, you could have something like system, colon, resource, colon, namespace, colon, and type, and just encode it directly in RBAC.
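To make the suggestion concrete, here is a purely hypothetical sketch of such an encoding, modeled on the existing system:serviceaccount:&lt;namespace&gt;:&lt;name&gt; convention; Kubernetes does not define this format today:

```go
// Hypothetical encoding of a "resource principal" as an RBAC subject name,
// by analogy with "system:serviceaccount:<namespace>:<name>". This is an
// assumption for illustration only, not an existing Kubernetes convention.
package main

import "fmt"

// resourcePrincipal builds a hypothetical username for a namespaced resource.
func resourcePrincipal(group, resource, namespace, name string) string {
	return fmt.Sprintf("system:resource:%s:%s:%s:%s", group, resource, namespace, name)
}

func main() {
	// e.g. an HTTPRoute named "web" in namespace "prod-web"
	fmt.Println(resourcePrincipal("gateway.networking.k8s.io", "httproutes", "prod-web", "web"))
}
```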
D: I don't know. So, at least on that angle, allowing other types of kinds in the subject of RBAC doesn't seem... I'm sure there are subtle things, but it's not... "If you don't understand something, you can't grant access to it" is a little bit different from "I don't know how to use this." But that's also a bigger change in a v1 API, so that would probably have a little bit higher of a bar. Is there a way for us to experiment?
A: I think the problem is being able to issue a SubjectAccessReview to match that. So I think resource principals are generally useful, and being able to check authorization on them is, I think, where the problem is. I don't know... the things that I think about for that are like a GCS bucket being able to read a key, or a storage bucket being able to read a key in KMS, or something like that.
A: I think that is a useful thing to be able to declare in an authorization policy. As far as experimenting with that in Kubernetes, I don't know if there's a good way, because you need to be able to issue a SubjectAccessReview.

A: What... why not? Or...
D: I'm not opposed to the resource principal either; it's a little further afield than the ReferencePolicy, so it might take longer. That's the one knock against it, sure.
D: Yes, we could do that within a username, and you would be a bad person for suggesting it, but no, I mean, that is actually the most concrete way that you can test it right now. And then you would have to be able to impersonate that username to do the check through SubjectAccessReview.
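As a sketch of what that check could look like, the snippet below creates a SubjectAccessReview for the hypothetical resource-principal username from the earlier example. The client-go calls are standard; the username format and the resource attributes are assumptions made up for illustration:

```go
// Sketch: asking the API server whether a (hypothetical) resource-principal
// username is authorized, via a SubjectAccessReview.
package main

import (
	"context"
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes running in-cluster
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	sar := &authorizationv1.SubjectAccessReview{
		Spec: authorizationv1.SubjectAccessReviewSpec{
			// Hypothetical resource-principal username from the previous sketch.
			User: "system:resource:gateway.networking.k8s.io:httproutes:prod-web:web",
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Namespace: "prod-infra",
				Verb:      "get",
				Group:     "",
				Resource:  "services",
				Name:      "backend",
			},
		},
	}
	resp, err := client.AuthorizationV1().SubjectAccessReviews().Create(
		context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("allowed:", resp.Status.Allowed)
}
```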
F: Okay, but stepping back for a second: ignoring RBAC or the actual implementation, is doing cross-namespace stuff in general a good idea? Like, I like that my namespaces can't talk to other namespaces. That's the point.
H: Hi, hey, so I'm from the SIG Storage group, working on object storage access, and object storage is one kind of storage which is always over the network. In our work we need the ability to share object storage buckets across namespaces, so we've been asked to look into the ReferencePolicy design that you're working on. That's why I'm here today. So that's one use case where we need cross-namespace access or referencing. Yeah.
C: Yeah, and I'd say it's also a very common request in load-balancing infrastructure, or whatever you would classify that as, to define your load balancers and configure them in one, you know, sealed-off space, a separate namespace, but then still be able to reference your applications running in their regular application namespaces; especially if that can be done safely, especially if there's some kind of handshake mechanism involved.
F: So, Clayton, on the kcp stuff that you guys have been working on, with that workspace concept: doesn't some of that help with this? Right, like if you can group namespaces together, and then the grouping is what contains the gateway thing, or whatever you want to share, your storage thing. It feels like you wouldn't necessarily need a handshake or something so fancy if you could have a different grouping concept outside of namespaces. Well...
D: There's the angle of: you accidentally give more permissions than you intend to a trusted party, like confused-deputy sorts of scenarios, and the flip side is that someone else confuses you by giving you access to a whole bunch of stuff you didn't want. So the kind of handshake is a pretty fundamental thing, actually. I was going to continue that thought that you had, Mo, as well as the previous one, which is: what's the better model of a handshake? Is it domain-specific or generic?
D: If it's in RBAC, you get a certain set of things you can represent as policy, but you lose the ability to do anything domain-specific over time. And I think this was the spec/status question, which is: a ReferencePolicy that is specific to Gateway can carry additional useful policy information over time; a ReferencePolicy that is generic, or resource principals, cannot and would never be able to.
H: Assuming ReferencePolicy is in place, we're thinking of building a new resource on top of ReferencePolicy to do what you just mentioned, so yeah. If there's a better way, if ReferencePolicy can be extended to do that for us, that's even better.
D: There is a... this goes all the way back to data modeling, but I've been thinking about this a lot recently: a generic object is great when almost everybody's problem is generic, so Ingress, and it's terrible when the problems only look similar and you want to add a bunch of stuff. So maybe that's the question, specifically for object storage and for Gateway: is this truly generic?

If it truly is generic and there are enough people over time, we might get more examples of it, such that we would be like: oh, the generic object is obvious. But the generic object will come with a lot of constraints, so yeah.

Yeah, trying to think if there are any other examples in Kube that have come up recently, like creating namespaces and giving people access to them. You know, if you have a system that allows people to create namespaces on a cluster... and Mo knows this kind of thing; Mo, I think, is some of what we're both thinking.

There is... if you allow people to go and create namespaces, then, by virtue of automatically getting access in the namespace, you can also add people to that namespace, and then somebody goes and tries to do the "hey, we want to list all of the things you can do." That opens up, you know, name-squatting and typo-squatting types of challenges.

I don't think it's a fundamental problem for Gateway in this context, but for the more general problem it might be. The more you have untrusted entities working together, the less well a lot of these APIs work, because then you need additional APIs that let you understand the list of things that you can use; because Kube is kind of one trust domain, and namespaces are only a weak separator.
C: So this is still in a proposed, unimplemented... well, v1alpha1 is implemented, but this version of the API is unreleased as of today. So the proposal is that implementations, controllers, will check this as they are basically building out their proxy rules, building out Envoy, nginx, whatever configuration they need to.
A: What happens if the ReferencePolicy is changed?
C: Yeah, it's going to be like everything else, where it's just one more resource that you need to watch. So you watch routes, in this case, and you watch ReferencePolicy, and if any part of that changes, you need to rewrite your rules, basically.
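A minimal sketch of the reconcile pattern being described: the controller treats ReferencePolicy as just one more watched input and recomputes its configuration whenever routes or policies change. All type and field names below are illustrative, not the real Gateway API ones, and the watch plumbing is omitted:

```go
// Illustrative reconcile loop: cross-namespace references are only honored
// when a matching policy exists in the target namespace.
package main

import "fmt"

type Route struct {
	Namespace, Name  string
	BackendNamespace string
	BackendService   string
}

type ReferencePolicy struct {
	Namespace     string // namespace the policy (and the target) lives in
	FromNamespace string
	FromKind      string
	ToKind        string
}

// allowed reports whether a route may reference a Service in another namespace.
func allowed(r Route, policies []ReferencePolicy) bool {
	if r.BackendNamespace == r.Namespace {
		return true // same-namespace references need no policy
	}
	for _, p := range policies {
		if p.Namespace == r.BackendNamespace &&
			p.FromNamespace == r.Namespace &&
			p.FromKind == "HTTPRoute" && p.ToKind == "Service" {
			return true
		}
	}
	return false
}

// reconcile would be triggered by watch events on either resource type.
func reconcile(routes []Route, policies []ReferencePolicy) {
	for _, r := range routes {
		fmt.Printf("route %s/%s -> %s/%s allowed=%v\n",
			r.Namespace, r.Name, r.BackendNamespace, r.BackendService,
			allowed(r, policies))
	}
}

func main() {
	reconcile(
		[]Route{{Namespace: "prod-web", Name: "web", BackendNamespace: "prod-infra", BackendService: "backend"}},
		[]ReferencePolicy{{Namespace: "prod-infra", FromNamespace: "prod-web", FromKind: "HTTPRoute", ToKind: "Service"}},
	)
}
```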
D: Like all other exclusion policies we've ever done in Kube, we always need to go through: what's the default, does the default make sense, and can someone specify the default globally, like we've done for network policy and egress policy and seven billion other policies? Do you all have a decision on that today? Is there another object that someone has to read to understand the default policy for the project?
D: And there are some... this came up in a completely separate discussion recently, but there were some arguments that, if we had it in Kube, and if this had made sense at the time when Kube started, would we have made cross-namespace network policy deny by default? There were some arguments that maybe that would have been a more normal default, and then you would, you know, define your policy up front. There's something similar here.
D: Okay, but it's a small number; we're talking tens, yeah, at most, yeah. Did anyone suggest or ask about pods' secret references, or was that... that might be like a third example. I don't know if someone had actually explored it, or if it was just punted by Tim or by someone in a conversation.

I could see someone making an argument that I might want to create a secret reference policy, you know, a secret reference policy for something that a pod can mount across namespaces, and again, I'm not arguing for it necessarily. I could see that set of arguments: a secret that you intended everyone on the entire cluster to use would have a very long list of people who could reference it. That's probably where we start getting into... okay.
C: It's possible. You could have a very long list of resources that wanted to target the same route. I think that's an uncommon use case, but there's nothing that restricts that from happening.
C: One of the things we want ReferencePolicy to enable is to allow you to reference a certificate in a different namespace, so that would be stored in a Secret. So if a Gateway is in an infra namespace and then you have a separate namespace that contains your certificates, you could enable that reference with ReferencePolicy.
D: I mean, even a thousand entries is not the end of the world for a list. It does get into the vein of: if you have a problem that's best solved by having a list of a thousand namespaces...

I think that's what Mo and I were chatting about earlier: in the long run, for super dense, complex clusters with lots of different entities at different security levels, Kube is lacking constructs that allow you to partition them, and that's some of that discussion. Smaller clusters, or smaller chunks of namespaces, is one way to approach that.

So it doesn't seem like a blocker to me, even in the thousand case, to have a list with a thousand items on it, and it's up to you to maintain that; you, the user, who wants to allow all these people to reference it.

Yeah, just based on everything you said, I put this in chat: the genericness of this... I can see Mike's point about principals, which is also really interesting, but what you've specced here seems very reasonable and tractable, and you could be an awesome guinea pig to see if there are other patterns. A generic pattern isn't truly required to make this successful, unless you can think of a reason, or you come up with a reason, where no, the generic pattern would be expensive because of problem X, Y, or Z.
C: That makes a lot of sense. So, just to clarify, it sounds like the design generally makes sense; it's just not very clear that it makes sense as a generic type yet. My last question, and I know this is probably already past time, is for the current state, where we have, you know, the Bucket API working and wanting to do cross-namespace references: would the guidance for them be to just create their own similar concept to this for now, and eventually maybe we can merge these into a generic resource in the future?
D: I can't think of anything wrong with doing that. You all might want to have a name that is specific, like a kind and a resource name that gives a little bit more context, because, you know, obviously with two reference policies with the same resource name you're always having to disambiguate by group, but we've dealt with that problem before, like Istio has a service, so I don't have a strong opinion one way or another.

If you didn't think, oh, we want to call this the Gateway reference policy, and "ReferencePolicy" felt superior, that might be the only subtle wrinkle there, about having multiple of these all called ReferencePolicy. On the other hand, certainly if everybody had their own reference policy, then maybe it would force us in the client tools to be a little bit clearer about group ownership and showing that. If they had similar schemas, they might feel very compatible for users, and that would be an argument for a common schema.
A: So what version of Gateway are you planning on introducing this in?
C: We're hoping to release by the end of the month, so it is coming pretty soon, unless we hear anything that blocks that.
A: I mean, it is an alpha, so I think, you know, it's reasonable to experiment there, and yeah. I would like to see maybe how the Bucket reference policy thing and the Gateway ReferencePolicy develop, maybe independently, to solve two specific use cases, and see if they have the same scoping semantics or would benefit from maybe a more generic authorization model. Then that would be useful information to decide whether we want a unified ReferencePolicy or want to fold it into Kubernetes authorization.
C: Yes, we considered many variations here, including labels, and ultimately... this API in v1alpha1 was pretty heavy on labels, and we found it very confusing to work with. And this was one concept, at least again within the scope of Gateway API, where the number of cross-namespace references felt like it would be much simpler to manage with these kinds of direct references than with a set of labels. But...
D: Yeah, that's actually a good bit of input: if the cardinality of references is low, that might be a good way for us to say, hey, this pattern works when you have low cardinality. Do we have anybody who's got a high-cardinality pattern, and does it not work? Pod namespace policy on pod label selectors is an example of that. That triggered one more question for me: in the references, namespace UID and object UID.

I'm not sure it's truly required, but because you're granting access to someone by name, obviously names can be reused if the namespace is deleted and recreated. Are you making the same argument there that you have with the reason you didn't support resource name, which is that anybody who controls that destination namespace could obviously lie? Or would a namespace UID or object UID be something that you could maybe anticipate having to add later?
C: Yeah, a UID has come up as a potential extension to this, but again, we're trying to start with the simplest use case and not overcomplicate as much as possible. So we have left resource names out entirely. I think the first resource name we would want to add is actually on the target side. That seems like something that is... I don't want to say a glaring omission, but something that is a common request.
D: One of the nice advantages of materializing this object is that somebody who, you know, had a bunch of free time on their hands could go build a nice visualization of who's referencing what, and it makes consumption there easier. That's not an argument against the RBAC model, but certainly, you know, there is a relationship.
D: You've explicitly described the relationship; you could attach data to the policy object, and you could do this for RBAC too, with bindings, if we had to. But you could imagine things like why this was getting granted, as well as ownership of that object, being something that's pretty clear. So.
C: So on that note, one more question; I had it in the notes at some point here. This does seem very similar to the RBAC types that already exist, and those exclude status. We've gone back and forth on whether we need spec and status or not on this; it's hard to completely understand what we would put in status, but there are some, you know, edge cases where it could potentially be helpful.

Yeah, and you could argue the same in our case: we have multiple controllers that are going to be reading any given ReferencePolicy, and so it's difficult, though not impossible, to populate any meaningful status as a result of that.
D: I'd say for Secrets and ConfigMaps this has been asked for as well. What was the other one... RBAC, where this sort of... I'm really drawing a blank on some of the really old objects. You know, I wrote the section of recommendations; we had an argument about spec and status: does everything have to have status?

It's kind of... most things want spec and status if they feel like objects, and a ReferencePolicy doesn't feel like an object the same way that a Gateway or an Ingress or an HTTPRoute does. So I'd probably say... we added status to Ingress because we had use cases for it. Adding status to policy occasionally comes up; I think someone asked about this, someone asked about a policy document, they wanted to know when it had been processed. But I think that gets into what Mo brought up as well: there may be many different people consuming it, and so it's not always clear what data you would feed back. If there's no authoritative, quote-unquote, owner of the policy, then it becomes a little bit problematic to have status.
D: And yeah, that's basically what I meant: there could be multiple responsible controllers. Is it when all controllers... then you have to start thinking about lists of controllers. Like Pods: one of the nice things about Pod status, and the argument, not just from the fact that it's a real object, but it was like the first Kube object where we made a lot of these concrete arguments, is that there's a single owner; the kubelet is the controller.

There's an ownership relationship. Deployments are another example where there is an authoritative owner; there could only ever be one owner of a particular object, or ownership hands off. So Pods have schedulerName; technically you can imagine a different, you know, scheduler being responsible. We talked about the Deployment controller having a custom controller field that says, look, regular Deployment controller, ignore me. But in all of those cases it was singleton ownership, and in this case, I guess it would be...

Could you ever imagine two controllers supporting this policy for different reasons, or at the same time? In which case status gets a little confusing.
C: Yeah, and we have a very similar problem elsewhere in the API, but it's entirely possible for two different Gateways, implemented by two different implementations, to target and serve the same application, and that's valid. So how we imagine status working here is that each controller could write what relationships this ReferencePolicy was enabling for it; basically, this ReferencePolicy has enabled this Gateway to reference this Secret, or this Service, whatever it is. But that list could get very long. Yeah.
D: That feels like one which has come up in a few other places, and I'd probably say, unless you have a really concrete reason now, start without spec and status, and then, you know...
D: At worst, down the road, you have to add another word, and you introduce a new resource type and you just read both; that could happen on a three-year time frame. But in the meantime you would know for certain whether you need status pretty early. If you don't know that you need status, you probably don't need status.
E: If I was a user of this and I was modifying... like, I had a gateway implementation installed and set up some reference policies, and now I modify one of the reference policies, and that causes the gateway controller to choke, and it continues, because it's using a watch, against the old version of the reference policy. How do I know, without a status condition?
F: So this is mostly a question for the room. Has anyone seen experimentation around this specific kind of problem where, instead of having a status on the resource, you have a status that reflects... for example, if your gateway controller is a series of pods, each pod creates an instance of some custom resource that contains a status where it effectively is writing out its understanding of the world? So it's not, hey, some random status that's shared by a bunch of controllers; it's: this pod right here has this status right now, because it owns this one instance of this custom resource, and that's where it stores its state for the rest of the world to just observe. There's no spec, there's nothing driving anything using that; it's just a way of it telling the world, hey, I have these 18 caches, and this is the level they're all at, if you are wondering where I got to before I broke.

But you could represent that as a count, right? I'm using "pod" in air quotes here, because it doesn't have to be a pod, but you could imagine, per instance of the gateway controller, it could say: I have achieved the correct state for this resource version of this policy, meaning all the things that are supposed to have been done, as in, you know, whatever load-balancing networking stack changes for that revision, I have accomplished that; but just me, not the 18 other instances of me running on this cluster.
D: So, if I had to put a line... and this is more of an art than a hard cut line... it really comes down to: do you need that complexity, and does that complexity matter? So historically we have said, if it should happen fast, and we can't come up with a use case that requires us to actually communicate this to users because it just doesn't come up often enough, we haven't done it. So RBAC policy propagation is a great example.

In our e2e tests we run singleton kube-apiservers; in many distros or production environments people run HA, and in those configurations the caches get out of date: watch caches get out of date, informer and admission caches get out of date. And in general it's somewhere less than one percent of a problem, and so it just falls into: we could fix it, but would it be worth the time, effort, budget, complexity? And generally the answer has been no. So I think it's really...

If you believe that users will continuously have slow controllers, gateways, reading these policies, I think it would come up more. If you can credibly say that you can get the end-to-end propagation latency down under 20 or 50 or 100 milliseconds, you know, barring latency, then it may just not be an issue. That's basically been the heuristic that I think has, in practice, gotten implemented in Kube, and if anybody disagrees, feel free to correct me.

This is what I'm thinking about: admission in the core, like the namespace admission, is driven by a cache; that cache can actually get out of date and you can't create things in namespaces because the cache is out of date. We did a few mitigations and said it's good enough; stick the duct tape on and move along.
A: All right, thank you for the discussion. I hope that was helpful; I thought that was pretty interesting.
F: No, this was meant as a PR, yeah. It's meant as a question for what we should do to move this forward, so, guys, you're on the call, so you can provide more context, but the TL;DR is: we made a bad metric that can break people's clusters, and what should we do to not break people's stuff? Should we delete the metric? Is there a different, better metric that still gives us what we expected out of this metric?
I: Yeah, I can chime in. Hey, I'm from the auth team in OpenShift, and yes, the thing is: this is a metric that was introduced to track the number of root CA certs published by the root CA cert publisher, and it's a cool debug metric to have. However, we have a little problem with it, and if you scroll down to the metrics test, you can see it right away, exactly where the namespace labels are, because these are problematic. Up a little bit... up, up.

Where is it? I literally saw it in the scroll; it's gone. The thing is, this metric tracks the number of root CA certs published per namespace. Here we go, this is... yeah, exactly. So what you can see here is that, in line 123, we got the root CA cert published in namespace test-ns with the HTTP status code 200 two times, right? So this is sort of what the metric says, and obviously that's good for debugging, because we have the information per namespace, and, well, you know, sometimes the root CA cert publishing can fail for a given namespace; like in this test, you have the case for a 404 or even a 500.

So that's a very nice debugging metric to have. However, it causes problems, as, unfortunately, anyone who runs Prometheus on their clusters probably knows, because we quite often debug why Prometheus tends to OOM in our setups. Namespace, essentially, is an unbounded label value, compared to, for instance, the HTTP status code, which is just a known set of values. The namespace count can vary drastically, and what happens essentially is that we end up with an indefinite number of metric series.
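A small sketch of the cardinality problem being described, using the client_golang library: a label whose values come from user input (namespace names) creates one time series per distinct value, so the series count is unbounded, while a label with a small fixed value set (HTTP code) stays bounded. The metric name below is illustrative, not the exact one under discussion:

```go
// Illustrative: per-namespace labels multiply series without bound.
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	published := prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "root_ca_cert_publisher_sync_total", // illustrative name
			Help: "Number of namespace syncs by the root CA cert publisher.",
		},
		[]string{"code", "namespace"}, // "namespace" is the unbounded label
	)
	reg := prometheus.NewRegistry()
	reg.MustRegister(published)

	// Every new namespace adds a new series per code value it is observed with.
	for i := 0; i < 3; i++ {
		published.WithLabelValues("200", fmt.Sprintf("test-ns-%d", i)).Inc()
	}

	families, _ := reg.Gather()
	for _, mf := range families {
		fmt.Println(mf.GetName(), "series:", len(mf.GetMetric()))
	}
}
```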
I: The question to this group is twofold. A: for this specific metric, does it make sense to track it per namespace, yes or no? If not, if it's good enough to have it per code globally, then we could simply remove the namespace and the label is gone. And, for the record, in OpenShift we specifically disabled this very metric during the last three days, because it simply crashed Prometheus.
D: Yep, I was going to ask... do we allow... I thought SIG Instrumentation had a "no unbounded cardinality on user input" principle for metrics, exactly. Is that a hard or soft principle?
I: It's a soft principle. So we have a KEP out there for metric cardinality enforcement, and you can specify, as a command-line parameter, the allowed metric labels, and it's an opt-in setting. So what you have to do is: oh, you recognize there's something wrong with your metrics, and then you can configure the API server to limit the set of labels and label values to a known set. But you have to do this sort of after you figure out the problem, and that's sort of the problematic thing. What we do in OpenShift is we simply don't scrape that metric at all; we disregard it. But it's an opt-in way, right, and I think that's a little bit problematic. I don't know if that answers your question.
D: Yeah, I just... we've had some of these regressions over the last five, six years, where we stripped cardinality out; we got the KEP, I vaguely remember the KEP to go do it. In this particular case, anything that has this high cardinality would certainly have to be mega important for a large class of people using Kubernetes, and historically we've said only a very few things have this cardinality, like the number of requests coming into resources, and that's bounded in other dimensions; people with lots of CRDs are already starting to break some of those. But this one seems very unusual in Kube core today, one label per namespace. I mean, are there any others that exist right now?
I: I don't know off the top of my head, but I do know we disabled a couple of other cases where we're tracking the namespace label. It's a common theme where especially the namespace label turned out to be problematic, but I would have to enumerate; there are a couple of cases, I'm sure. Yes.
I: Except... I've been asking the question to the original poster; he said it would be nice for debugging. And I think, you know, what we could do is... I think per code is essentially good enough right now, I believe, unless, Mo, maybe you have a little bit more insight on the root CA cert publisher semantics and whether having the namespace label... so: is there a case where the root CA cert publisher can fail on one namespace with a 500 but succeed on another namespace with a 200, for some reason? Maybe, Mo, you have some more historical knowledge about this specific publisher.
A: I was just going to say, I think that would be unusual. And yeah, so I guess, looking at the change here, it seems kind of weird to me to reduce a counter. That goes against my expectations; those are usually monotonically increasing, so I don't know what downstream effects deleting values from them would have. And, honestly, I cannot think of a case where namespace is actually that useful for monitoring, you know, debugging; there are other tools, such as looking at logs, too.
I: Yeah, exactly. So Standa contributed that fix, and this is a clever fix in case... I mean, the question for this group is: a, do we desperately need the namespace for this metric? And I'm hearing between the lines that we don't necessarily need it. But if we would need it, then this is a clever solution by Standa, which, essentially, whenever a namespace is being deleted, removes the corresponding metrics from memory, which causes them not to be rendered on the /metrics endpoint from the API server, right?
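A sketch of that workaround: when a namespace is deleted, drop all series carrying that namespace's label value so they stop being exposed on /metrics. This uses client_golang's DeletePartialMatch, which is available in recent library versions; the metric and wiring are the same illustrative ones as before, not the actual fix:

```go
// Illustrative: prune per-namespace series when the namespace goes away.
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

var published = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "root_ca_cert_publisher_sync_total", // illustrative name
		Help: "Number of namespace syncs by the root CA cert publisher.",
	},
	[]string{"code", "namespace"},
)

// onNamespaceDeleted would be hooked up to a namespace informer's delete handler.
func onNamespaceDeleted(namespace string) {
	removed := published.DeletePartialMatch(prometheus.Labels{"namespace": namespace})
	fmt.Printf("removed %d series for namespace %q\n", removed, namespace)
}

func main() {
	published.WithLabelValues("200", "test-ns").Inc()
	published.WithLabelValues("404", "test-ns").Inc()
	onNamespaceDeleted("test-ns")
}
```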
I: So this is a nice workaround, and then also this cardinality issue would be somehow reduced, because Prometheus would then not scrape, you know, all the metrics. This is... I just noticed, this is a histogram.
D: Yeah, like... a histogram is already high cardinality, yeah. So, yeah, I think, at least my gut on this, just knowing some of the context behind a little bit of the metric but not all of it, is that someone needs to make the case that namespace is important, and it has to be a very good reason, like very good. That would fit with our general stance on review of metrics in Kube, enforced across most of the SIGs, that I... because I helped with some of this stuff historically, so I'm not completely making this up as I go. But it would have to be a really good reason, and the SIG would need to justify why it's a really good reason to have the namespace label.
I: Very cool. Okay, so here's my suggestion: we will submit a PR removing it, and the good news is this metric is at the alpha API level, so we can sort of break the metrics API; the API, or the contract for this API, should be fine. We will submit a pull request completely removing the namespace label, and Standa, we have been talking about this briefly as well, right? That would make the whole logic easier.
A: Cool. So, Mo, thoughts on formalizing the etcd API used by Kube as a KMS replacement? Please expand.
F: Yeah, I don't know if five minutes is enough, but if it's not, we can just continue the conversation next time. The TL;DR is: I've been thinking about the various improvements that Anish and Rita are proposing for KMS to fix the issues we have with it in the current beta state, and sort of stepping back from it.

The question I would ask is: should we try to expand the surface area of that API a bunch, to sort of help it express what's missing and try to address all the issues, performance and scalability, as well as the various other bits that we've talked about over the years? Or would it be simpler or easier to work with SIG API Machinery to formalize the bits of the etcd API that we rely on, which is, you know, transactions, range requests, compaction, leases, probably something else that I've forgotten.
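To make that subset concrete, here is a sketch using the etcd v3 Go client of the categories of calls just listed: range reads, transactions, leases, and compaction. The endpoint and keys are illustrative, and this is only a rough picture of the surface a shim would need to implement, not an exhaustive or authoritative contract:

```go
// Illustrative tour of the etcd v3 client operations mentioned above.
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"}, // illustrative endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()
	ctx := context.TODO()

	// Transaction: create-only write, the optimistic-concurrency pattern.
	_, _ = cli.Txn(ctx).
		If(clientv3.Compare(clientv3.ModRevision("/registry/foo"), "=", 0)).
		Then(clientv3.OpPut("/registry/foo", "bar")).
		Commit()

	// Range request: read a whole key prefix.
	resp, _ := cli.Get(ctx, "/registry/", clientv3.WithPrefix())
	fmt.Println("keys under /registry/:", resp.Count)

	// Lease: time-bounded keys.
	lease, _ := cli.Grant(ctx, 15)
	_, _ = cli.Put(ctx, "/registry/leases/demo", "x", clientv3.WithLease(lease.ID))

	// Compaction: drop history older than a revision.
	_, _ = cli.Compact(ctx, resp.Header.Revision)
}
```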
F: And sort of write that down, and, I don't know, enforce it at the code level somehow, so that you cannot start using new bits and pieces of the etcd API without writing a KEP or something, once that is formalized as: this is the API contract. So it's not that we rely literally on etcd; it's that we rely on this subset of the etcd gRPC API.

Then it becomes easier for someone to build a shim layer that does anything they want in between. So, for example, Rancher's kine project sort of does that today with SQL-based backends, but they do that by just making assumptions about the API calls the kube-apiserver will make against the gRPC API. They're not relying on a stable interface; they're just relying on the observation of "these are the APIs we happen to call." But once you have such a shim, I could imagine us building the current KMS implementation out of tree, or just moving it out of the tree, using the shim. So we could continue to support that as a backwards-compatibility thing: you have it, you can use it if you want it. But then it would allow us to have an encryption layer that has basically different opinions. Right, like today we have the envelope encryption...
F: ...that says, you know, we're going to generate data encryption keys and hand you those over the KMS gRPC API, and you're going to use the KMS key encryption key to encrypt that and then send it back to us. And there's a prefixing scheme, and there's a bunch of assumptions, like we give you one data encryption key per write, and all that stuff.
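A minimal sketch of that envelope-encryption flow: a fresh data-encryption key (DEK) per write, the value encrypted locally with the DEK, and the DEK wrapped by the KMS-held key-encryption key (KEK). The kmsWrapDEK function is a stand-in for the real KMS gRPC call, and the stored-record layout and prefix are illustrative, not the exact on-disk format:

```go
// Illustrative envelope encryption: per-write DEK, KEK held by the KMS.
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// kmsWrapDEK stands in for the KMS provider call that encrypts the DEK with
// its KEK. Here it just reverses bytes so the sketch runs without a KMS.
func kmsWrapDEK(dek []byte) []byte {
	wrapped := make([]byte, len(dek))
	for i, b := range dek {
		wrapped[len(dek)-1-i] = b
	}
	return wrapped
}

func encryptValue(plaintext []byte) ([]byte, error) {
	dek := make([]byte, 32) // fresh DEK per write
	if _, err := rand.Read(dek); err != nil {
		return nil, err
	}
	block, err := aes.NewCipher(dek)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	ciphertext := gcm.Seal(nonce, nonce, plaintext, nil)

	// Stored record: provider prefix + wrapped DEK + ciphertext (illustrative layout).
	record := append([]byte("k8s:enc:kms:v1:demo:"), kmsWrapDEK(dek)...)
	return append(record, ciphertext...), nil
}

func main() {
	out, err := encryptValue([]byte("top secret"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("stored %d bytes\n", len(out))
}
```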
F: I feel like that would be valuable, but you can imagine, if etcd doesn't want to get into that business, that if you could implement a small shim layer to do that for you, you might make different choices that are not bound by whatever we in SIG Auth think is the correct way to do envelope encryption, for example. So that's the high-level TL;DR; it's just...
A: Moving what we do in Kube down a layer, or... is that the correct summary of the problem that this would solve?
F: Basically, we have made a very strong set of opinions about how encryption at rest with KMS works, and an alternate solution that I'm sort of proposing to the KMS problem is: can we just let you do whatever you want for your encryption that way, instead of us asserting every single opinion about it? Maybe, you know, your KMS has different needs or requirements.
D: Has anyone ever seriously considered doing encryption at rest in etcd? I mean, that was one of the actual options in the original discussion when we added secret encryption; I'm pretty sure one of them was like, it'd be really nice if this just wasn't our problem. And at the time there wasn't really... most of the effort on etcd was focused on stability and scale and perf stuff.

This might be a good API Machinery question. I haven't seen any designs come across recently, but I haven't been looking for those specifically. But I mean, it's a great question, right? The core question you're asking is: should we be doing really complicated encryption at rest above a storage layer, when it's at-rest encryption only and we're loading it all into memory and holding it all in memory in like 500 caches across the entire distributed system? Probably...
F: Yep. Oh, like, I intended to talk... there was a time where Intel was talking about how they built a KMS that used their secure enclave to do this encryption, and I was like, you realize that literally all of the data is sitting in memory of the API server process, right? Like, who cares that you use the KMS to hold on to the keys; literally the unencrypted data is sitting in memory of the API server process. Like, I don't... well, anyway.
A: Yeah, I think in general, when I look at KMS, I do think that it has enough adoption in beta that we have to be delicate with how we evolve it towards GA. So I think that would be the success criteria there, especially around performance, where we know that performance... not necessarily this type of performance, where we're copying through a shim, but performance issues have been something that is plaguing the beta, so just to be careful. Awesome. Let's hoist this up for next meeting, though, yeah.
A: All right, thank you, everyone, for joining us, a little over time. I guess we will see you all in two weeks.