Description
Kubernetes Storage Special-Interest-Group (SIG) Object Bucket API Standup Meeting - 14 June 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
B
Good morning. Hey, so my Linux machine stopped working with Zoom, so I'm doing this from my iPad.
B
Okay, it sounds better now. So let's define three teams: one is the photos team, one is the videos team, and let's say there's a profile team. Let's assume in this scenario that the profile has both photos and videos, so the photos team would manage the photos, the videos team would manage the videos, and the profile team would need data from both.
B
So in this scenario, let's define some users. In the photos team there's Alice, who belongs to the group photos. Then there's Bob in the videos team, who belongs to the group for videos. And let's say Foo belongs to the profile team; Foo would be part of the photos, videos and profile groups.
B
So these are access groups, not the Kubernetes API groups. Kubernetes lets you define this kind of hierarchy of access, including the groups part. So this model kind of makes sense, right? This is the kind of model we can support in terms of buckets.
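The user/group layout described in this scenario could be sketched roughly as follows. This is an illustration only: the names come from the example above, and the lookup is not a real Kubernetes API.

```python
# Illustrative model of the access groups discussed above; these are
# plain group memberships, not Kubernetes API groups.
GROUP_MEMBERSHIP = {
    "alice": {"photos"},
    "bob": {"videos"},
    "foo": {"photos", "videos", "profile"},
}

def can_access(user: str, bucket_group: str) -> bool:
    """A user may access a bucket if they belong to its group."""
    return bucket_group in GROUP_MEMBERSHIP.get(user, set())

print(can_access("foo", "videos"))    # True: foo is in all three groups
print(can_access("alice", "videos"))  # False: alice is only in photos
```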
B
We're talking about cluster-scoped resources, so if we put some metadata in there and someone edits it, it's considered their fault; it's not a user doing it. We could.
A
Well, yeah, I mean we should hear the rest of this proposal, but I just want to make sure it's clear that the RBAC system in Kubernetes executes when you're talking to the API server to do something. But after the object is created, it forgets who did what, right? It's just an object now.
D
Again, we're saying that we're talking about creating bucket access, basically, right? That's the flow we're looking at: controlling it with access control, right, right.
A
Right, but remember that the way we structure these things is: you create an object, and then some RBAC runs that says yes, you were allowed to create that object. But then, when it becomes time to bind a BAR (BucketAccessRequest) to something, a controller does that work. The controller just sees a BAR; it doesn't know who made it, right? It's just that a BAR was successfully created by someone and it has some values, and now you have to decide what to do with it.
B
Yeah, so I see what you're saying. So let's go to the next step. Let's say team one, the photos team, creates bucket one, and team two, the videos team, creates bucket two. Bucket one is created by Alice and bucket two by Bob. So who gets access to both? So in this case, first, during the creation of the bucket:
B
The group will be designated by the bucket class, so the bucket class should have some... I mean, this is one of the proposals, so we can figure out where the group comes from, but inside the bucket object, somehow we have a field that stores the bucket group.
B
No, no, I'm saying we have a gate around it. We say that only if you have the permission to create a BAR for this bucket will it go through.
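The "gate" idea could be sketched as an admission-style check, assuming a hypothetical bucketGroup field on the bucket object; the field names here are assumptions, not the actual COSI schema.

```python
# Hypothetical admission-style gate: allow a BAR only if the
# requester belongs to the group recorded on the target bucket.
# "bucketGroup" is an assumed field, not part of the real COSI API.
def admit_bar(bar: dict, bucket: dict, requester_groups: set) -> bool:
    if bar["spec"]["bucketName"] != bucket["metadata"]["name"]:
        return False  # BAR does not target this bucket
    return bucket["spec"].get("bucketGroup") in requester_groups

bucket = {"metadata": {"name": "bucket1"},
          "spec": {"bucketGroup": "photos"}}
bar = {"spec": {"bucketName": "bucket1"}}
print(admit_bar(bar, bucket, {"photos"}))  # True
print(admit_bar(bar, bucket, {"videos"}))  # False
```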
D
Are we trying to really have clear access control inside the namespaces, or is it mostly access separation between namespaces? Because I thought that we're not really trying to restrict access inside the namespace, since anybody can basically see anything in there. But we're talking about the fact that we're not giving the workload running inside the pod access if we're not meaning to do so, and...
D
Access to something; but eventually, from Kubernetes' point of view, you create a BAR, which is basically a request for a BA (BucketAccess), and you get allocated a BA, and from Kubernetes' perspective you're not really accessing anything. If you're just talking about Kubernetes, it doesn't seem like the namespace is the right boundary.
A
The
name
spaces
are
kind
of
all
you
have.
I
mean
I,
I
I
see
what
you're
trying
to
do
with
the
users
in
their
groups,
but
just
it
feels
like
a
like
a
misuse
of
the
kubernetes.
Our
back
system,
because
the
users
are
are
meant
for
like
controlling
whether
you
can
interact
with
the
api
server
in
a
certain
way,
but
but
we're
not
interacting
with
the
api
server.
We're
interacting
with
a
controller
and
the
controller
doesn't
know
who
you
are
so
and
and
that's.
B
Okay, so no, no, we can actually. So let's think about this. What we're saying is: using these users and groups, the whole problem before was what would prevent users from, say, listing all the buckets and seeing buckets they're not supposed to. Because earlier we said that through RBAC we could either allow users to list all buckets or no buckets at all. But I'm saying that's not the case anymore.
B
Referring: let's talk about referring. So, right, we can prevent referral through an admission controller; that is a valid thing. You know, RBAC is used for access, but the admission controller is for weeding out things that are not supposed to go through.
B
No,
it's
not
about
br
from
being
creative.
We're
saying
that
some
bars
can
have
invalid
values.
We
can.
We
can
figure
that
out
up
front
no.
D
That's a lot of namespaces. It's the same as the allowed-namespaces proposal; it's just that the location of the verification, instead of being at the creation of anything, is in the response to the request. So for the COSI controller it's pretty simple, at least, to be there in this flow, on the side.
B
I think what it should do, rather, is append user information.
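One way to read "append user information" is a mutating admission step that stamps the requesting user onto the object before any controller sees it. A minimal sketch, where the annotation key is made up for illustration:

```python
# Sketch of a mutating admission step that records who created a BAR
# in an annotation, so controllers can consult it later. The
# annotation key is hypothetical.
CREATED_BY_KEY = "cosi.example.io/created-by"

def mutate_bar(bar: dict, request_user: str, request_groups: list) -> dict:
    annotations = bar.setdefault("metadata", {}).setdefault("annotations", {})
    annotations[CREATED_BY_KEY] = request_user
    annotations[CREATED_BY_KEY + "-groups"] = ",".join(sorted(request_groups))
    return bar

bar = {"metadata": {"name": "my-bar"}}
mutate_bar(bar, "alice", ["photos"])
print(bar["metadata"]["annotations"][CREATED_BY_KEY])  # alice
```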
B
Yeah, allowed groups is what I was going for. The idea being, see, allowed namespaces is one where you need to know the list of namespaces up front. It's not really...
A
I was going to point out that it is the one and only security mechanism that prevents somebody from gaining access to a bucket that we don't want them to have access to, if they're able to guess the name of the bucket. If a BAR can point directly to a bucket, then allowed namespaces is the only thing stopping someone from gaining access to a bucket. That was the point I was trying to make. But I actually like the namespace as the primitive for access control; I think it's the right primitive.
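The check being described, allowed namespaces as the single gate when a BAR points directly at an existing bucket, could be sketched as follows; the field names are assumptions.

```python
# Sketch: when a BAR refers directly to an existing bucket, the only
# check is whether the BAR's namespace is on the bucket's
# allowedNamespaces list. Field names are illustrative.
def bar_may_bind(bar_namespace: str, bucket: dict) -> bool:
    allowed = bucket["spec"].get("allowedNamespaces", [])
    return bar_namespace in allowed

bucket = {"spec": {"allowedNamespaces": ["photos-ns", "profile-ns"]}}
print(bar_may_bind("photos-ns", bucket))  # True
print(bar_may_bind("videos-ns", bucket))  # False
```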
B
Namespaces... no, see, here's my only problem with allowed namespaces: the traditional boundaries that existed for storage don't exist for something like buckets. A bucket is more like a node. It's like saying nodes are only allowed for certain namespaces; we just don't put that abstraction on a node.
D
With the data in BARs, we're just talking about access. No, no, I mean that it doesn't control the lifecycle; it doesn't delete it. It's not about that, right? It's about providing access to some workload that obviously you want to provide access to; otherwise...
A
I think the important thing is that there is an access mechanism that does something, and that it fits well within the model of Kubernetes, which is based on, you know, users and groups and namespaces and accounts and all this stuff. And the reason that we were leaning towards namespaces originally is because it's such a natural way to use the Kubernetes multi-tenancy model, or multi-user model, as namespaces are how everything else is segmented and divvied up. So it's such a natural fit within Kubernetes.
A
It is the natural way to say yes to this and no to that, right? Is it in the same namespace, or is it on the list of allowed namespaces? It makes sense. I totally agree with you that it's a weird way to control access to storage, but we don't have the luxury of designing the optimal way of doing object storage access control, because we're doing it within Kubernetes, so we have to live within it. The weird part is Kubernetes.
A
I guess the way I would like to come at this is to come up with a list of what the users really want: a couple of real use cases. The use case of "I want to share my bucket with everyone," the use case of "I want to share my bucket with no one," the use case of "I want to share a bucket with this guy, and not this guy."
A
And then the question is: can you do that with namespaces or not? And I think we just go and answer that. We figure out: if I do allowed namespaces and I set it up like this, will I get the result I want? If yes, then it's good enough to address the use case.
E
I've got a question: did you see how EKS is doing the same thing? Because they have a way to allow specific pods to access specific buckets.
E
But maybe later we can move towards a more, let's say, evolved model. So, for instance, in the EKS case you can bind identities to pods.
B
For the pod and the bucket, we would tell the backend: hey, associate this service account with this bucket. But it's still under our control, right?
B
There'll be a service account field in the BAR, and that service account would get associated with the appropriate credentials, or rather not credentials but the appropriate roles, such that any pod that uses the service account gets access to the bucket.
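A sketch of the service-account-field idea, assuming a hypothetical spec.serviceAccount field on the BAR: any pod running as that service account would be treated as having the bucket's roles.

```python
# Hypothetical serviceAccount field on a BAR: a pod gets bucket
# access when it runs as the service account named in the BAR.
def pod_has_bucket_access(pod: dict, bar: dict) -> bool:
    return (pod["spec"]["serviceAccountName"]
            == bar["spec"].get("serviceAccount"))

bar = {"spec": {"serviceAccount": "photos-sa"}}
pod_ok = {"spec": {"serviceAccountName": "photos-sa"}}
pod_no = {"spec": {"serviceAccountName": "other-sa"}}
print(pod_has_bucket_access(pod_ok, bar))  # True
print(pod_has_bucket_access(pod_no, bar))  # False
```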
B
Actually
come
from,
there
are
no
credits,
so
when
a
pod
with
the
service
account,
is
associated
with
the
bucket
s3,
clients
use
the
local
metadata
service,
so
yeah
the
creds
actually
come
from.
You
know
a
link
local
ip
address.
D
I want to say that I do think we should use the namespace boundaries.
D
I think it's good, but I also want to say that I think the permission model for Kubernetes is... it's not that namespaces can do anything, right, but the only permission model you do get is RBAC, or, you know, whatever other permission models you can use, which is just users and service accounts. This is the permission model for Kubernetes, so if we hook into that, it will provide us with a permission model.
D
We can implement our own model, of course: like, say, have namespaces and group them together, or have any other annotations or anything that COSI's admission controls mutate and add some metadata for.
D
But the question is whether we need more than just knowing which Kubernetes identity is the one that requested the access. And then, you know, we could query Kubernetes in the same way, in order to check if that identity has permission to do such an operation, right: create this BAR. But Kubernetes doesn't allow us to really say what it can do to which one, right, unless you start specifying specific resource names in every case.
A
If you say, okay, that stinks and it's not granular enough, what you could do instead is say: okay, if you want to obtain access to an existing bucket, we invent an entirely new way of doing that. Instead of a bucket request or your bucket access request, you could have, I don't know what you would call it, an existing-bucket request that lets you basically request access to a bucket that already exists and specify whatever information...
A
An access-control decider would need to decide whether you get access or not, and then you could have a special object to handle this case, with a special controller, and just treat it as an entirely separate case. So in that case, bucket requests (BRs) would be only for greenfield.
A
Okay, let's see. Because once you allow BARs to refer directly to buckets, now you have this sort of hole in your security model where, you know, if your namespace is on the list of allowed namespaces and you know the name of the bucket, you can always get access to it.
A
If what you're saying is you want something more sophisticated, we could say: okay, get rid of that and say BARs always have to refer to BRs, but the way you get a BR that is bound to something that already existed is some other API mechanism, some new object that we haven't talked about yet, that has whatever information you would like to use to make that decision.
B
So, I was just looking at how to answer your question. I was just looking at how AWS does this: a little more complex form of authentication where you have a service account being related to some sort of role, and then any access coming from that service account, if it has the role, gets access to the bucket; if not, it doesn't. So we could replicate that infrastructure in COSI.
B
Yeah, about that: we could make it work, is what I'm saying. Because the mechanism by which it works, the way a request from an instance with the appropriate roles gets figured out, is that the S3 client on that instance looks through about seven different sources for authentication.
B
One of them is the instance metadata, where it pings the link-local IP address, 169.254.169.254, and if data gets back from the link-local IP address, it's not an authenticated system: if the data comes back with the instance ID or the role set to something, it just assumes that it is the right value.
B
Yeah
yeah,
that
address
is
it's
called
the
link
local
ip
address,
because
for
a
for
a
particular
switch,
it
comes
from
the
hardware
days
where,
for
a
particular
switch,
you
can
have
only
one
of
these
ip
addresses
and
and
even
when
going
across,
such
as
you
get,
you
get
different
servers,
responding
to
168
look
for
69.4.
B
We would have our own controller. It would be like: first you would have to add some iptables rules, through kube-proxy or something, which end up routing any request from a bucket-using pod to the IP address 169.254.169.254 to come to our controller. And our controller would know what source IP the request is coming from, and based on that it would give back data appropriate for that pod.
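The flow just described (redirect pod traffic for 169.254.169.254 to a COSI controller, which maps the connection's source IP back to a pod and answers with that pod's identity) could be sketched as below; the table and field names are all hypothetical.

```python
# Sketch of the proposed COSI metadata endpoint: the controller keys
# on the source IP of the connection and returns identity info for
# the pod behind it. Everything here is illustrative.
POD_BY_SOURCE_IP = {
    "10.0.1.7": {"serviceAccount": "photos-sa", "roles": ["bucket1-rw"]},
}

def metadata_response(source_ip: str) -> dict:
    pod = POD_BY_SOURCE_IP.get(source_ip)
    if pod is None:
        return {"error": "unknown source"}
    return {"serviceAccount": pod["serviceAccount"], "roles": pod["roles"]}

print(metadata_response("10.0.1.7")["serviceAccount"])  # photos-sa
print(metadata_response("10.9.9.9"))  # {'error': 'unknown source'}
```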
D
I wanted to ask, sorry: how do you describe this thing? Is it...
D
I mean, in terms of what Kubernetes is doing here, in that sense it's like we are taking over this link in terms of networking, right? We're saying that COSI is owning this link-local address, because it is being used for bucket authentication in some way, right? Yeah.
B
So any request coming to this IP address from a pod that's using a bucket would come to our controller, and our controller would give back some identifying information about that pod: saying, you know, this pod's service account, or this pod has these roles, or just whatever sort of information. The instance metadata is a versioned API; it's well defined, but...
B
In that case, yeah, we should use this one, 169.254.169.254; Google uses a different IP, but yeah.
D
The
other
thing
is
that
this
is
not
just
for
bucket
credentials
in
these
environments,.
B
Well, not really. I think we would have to work with SIG Cloud to enable instance metadata, but I don't think, you know... it's not exclusively for us, like you're saying.
D
So
I'm
saying
that
applications
that
use
this
on
the
cloud
will
will
not
be
able
to
use
cozy
and
other
apis
from
the
cloud
right.
B
It's
like
it's
like,
you
know
how
we
have
how
we
have
load
balancer
service
on
the
cloud.
It
uses
the
cloud
version
of
it,
but
anywhere
else
it
uses
our
version
of
it.
B
Oh
these
things,
so
when
you
started
you
started
with
parameters.
So
so,
when
you
start
kubernetes
on
the
cloud
you
you
start
with
different
parameters
compared
to
when
you
start
kubernetes,
you
know,
then
how
does
cosi
provide
credentials
in
the
cloud
cosy
would
not
provide
credential
in
the
cloud
inside
the
cloud.
It
would
rely
on
the
instance
metadata,
as
as
it
always
does,.
D
I think that's what kind of worries me: it's the same well-known address that they use for other things. So maybe that's something we need to see, whether the SDK allows more configuration there. I don't think so. Yeah, if it did...
D
It would make it pretty simple to... yeah, it could be.
B
I
mean
these
things
are
hard
coded,
but
yeah,
I
I
don't
think
yeah
the
diaper
is
a
hardcore.
I
don't
think
it's
configurable,
or
at
least
none
of
the
client
implementations
make
it
that
way
because
it's
expected
to
be.
B
Yeah, it was worth asking this question. The question was basically: what if we built the same infrastructure that the cloud provides for service-account-based authentication? It looks like that also might be... well, in terms of implementation, it is very difficult, because we'd have to go...
D
I don't think we're missing infrastructure. I don't think we have any infrastructure missing; it's just that Kubernetes is not providing us with a well-formatted capability to provide access to these resources by reference, right? It does allow us to run that. So we need to define our own model, regardless of how we actually validate. The validation, I think, is: we're going to validate and then give that pod credentials, and it's fine; it's going to work.
D
We
we're
missing
the
model
that
that
we
rely
on
to
decide
whether
somebody
has
access
to
a
bucket
right
can
get
bucket
access,
basically
yeah
yeah.
E
I think it's a good default way of doing it. At least there is something, right?
D
What I would say is: do we need to prepare the structure to be, you know, maybe like a role spec or anything like that, other than just being a fixed list? Is it a list of namespaces, or should we say it's a list of API objects that we can refer to from a bucket, like a list of service accounts, or a list of, I don't know, pods? I'm just thinking about the extensibility of the CRD this is based on.
A
You just know that there is a BAR in a namespace; you're deciding whether to create a BA or not. And if you create a BA, the act of doing that is going to involve going to the driver, minting your credential, and storing it in the BA. And then, from that time on, any pod that uses that BAR will just get the credential and run with it, and no service accounts are going to enter into the picture; no tokens are going to enter into the picture.
D
I'm not suggesting we do it today, but I think for today we actually want this list to contain only namespaces, though maybe in the form of referring to an API object, you know, just like the role spec does. But going forward, it could be that the pod permission could be to mount that BAR, right? It could affect it anyway. I don't know; I'm just thinking about whether it makes more sense as a structure.
D
Anyway, I'm talking about the future; I'm not saying that it's relevant today for getting to this fine grain. I don't know if it's ever relevant, but I'm just saying that, in terms of saying to whom I'm giving permission to do something, perhaps it's more complex than just saying it's a namespace. That's the only thing I'm...
E
There are these two use cases of Kubernetes, I would say. Either you work directly on the platform (typically it's a big data platform, shared by multiple people), and then the namespace approach is good, right? Because typically you put all your tools in a specific namespace, users don't have access, and you just give access to groups or to people in specific namespaces, and you create buckets for them for doing whatever analytics or whatever computations.
E
The
other
use
case
is
when
you
use
kubernetes
as
a
platform
for
running
your
own
service,
and
then
in
this
case
you
don't
give
you
know
access
to
people
individually.
They
are
you
know,
you
just
operate
a
system
in
communities
and
in
this
case
you
don't
need
like
fine-grained
access,
because
it's
it's
managed
by
the
software.
You
are
deploying
communities,
so
we
might
not.
D
It's
true
that
we
don't
need
it
right,
we're
saying
that
mvp
can
run
without
without
more
granular
access
permissions,
but
you
probably
have
seen
that
all
the
cloud
providers
have
very
very
fine-grained
access
permissions.
So
you
know
you
get
to
a
point
where
this
is
getting
more
complex
as.
E
Clusters, right: they did not consult anybody; they're cloud providers, yeah. They integrated Kubernetes because they had to, and then they said, oh, it would be cool to integrate with their own IAM system, and...
B
Yeah, instead of going through our system. I mean, then we've done it wrong at that point; we've failed them. My question, and I don't know the answer, it's not rhetorical, is: do you predict that people will be fine with that non-granular access control?
A
Well, the way that we've defined the CSI plugin that's going to inject the credentials into the pod is: if there's a BAR, and if that BAR is bound such that it has an access key and a secret, and if the pod refers to the BAR, it's definitely going to get the access key and the secret; nothing's going to stop that.
D
I'm fine with it, but it has to be... I mean, it's not that any code running within the namespace is really exposed to the credentials. If I, as the one deploying the workload, didn't create a pod with a reference to a BAR in the namespace, then it's not there; there is no link for that pod to read the credentials.
B
Can we do this, then? Okay, so let's do this, then, Ben. I think I'm on the same page as you; maybe allowed namespaces is...
B
Can
we
at
least
clearly
define
what
are
the
limitations
and
move
on
with
this
conversation,
because
I
feel
like
we've
spent
enough
time
yeah,
I'm
not
I'm
not
entirely
sure.
There
is
a
simple,
better
approach.
A
Right, but we need to write down what happens when you mutate the list to remove a namespace that has a bound BAR, and my humble recommendation would be: that's fine.
A
Once it's bound, it's the same thing as a storage class: if you delete a storage class after you have a PVC, it doesn't affect the PVC, because you only looked at the storage class at the moment of creation; after it's created, you don't look at it anymore. So we could say that after a BAR binds to a BA, we never look at that list of allowed namespaces again. So if you really want to revoke access, it's not enough to just change the list of allowed namespaces.
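The StorageClass-like semantics proposed here (consult allowedNamespaces only at bind time, so later edits to the list do not revoke an existing binding) can be sketched as follows, with illustrative field names.

```python
# Sketch of bind-time-only evaluation: allowedNamespaces is checked
# once when the BAR binds; mutating the list afterwards does not
# revoke the binding. Field names are hypothetical.
def bind_bar(bar: dict, bucket: dict) -> bool:
    bar["status"] = {"bound": bar["metadata"]["namespace"]
                     in bucket["spec"]["allowedNamespaces"]}
    return bar["status"]["bound"]

bucket = {"spec": {"allowedNamespaces": ["photos-ns"]}}
bar = {"metadata": {"namespace": "photos-ns"}}
print(bind_bar(bar, bucket))  # True

# Removing the namespace later does not change the existing binding:
bucket["spec"]["allowedNamespaces"].remove("photos-ns")
print(bar["status"]["bound"])  # still True
```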
A
Yeah, so I think as long as we write down that kind of stuff, then we can say that in principle anyone is welcome to write a controller that auto-mutates this list based on some rules, and we don't care how you do that, because as far as we're concerned it's just a list.
B
And here's the other thing: if allowed namespaces were a selector, and you wanted to give access to new namespaces, you'd add the selector's labels to the new namespaces, but it's...
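If allowedNamespaces were a label selector rather than a fixed list, granting access to a new namespace would mean labeling it, as in this sketch; the selector semantics are simplified to exact matchLabels.

```python
# Sketch of allowedNamespaces as a selector: a namespace is allowed
# when its labels match every key/value in the selector
# (simplified matchLabels semantics).
def namespace_allowed(selector: dict, ns_labels: dict) -> bool:
    return all(ns_labels.get(k) == v for k, v in selector.items())

selector = {"team": "photos"}
print(namespace_allowed(selector, {"team": "photos", "env": "prod"}))  # True
print(namespace_allowed(selector, {"team": "videos"}))                 # False
```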
A
That's my intuition, yes: to centralize where the edits happen on the bucket itself. Isn't that kind of okay, so...