Description
Kubernetes Storage Special Interest Group (SIG) Object Bucket API Design Meeting - 10 June 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A
B
B
My opinion is, you have to support it; we can't just say "nope, that's out of scope," but it does raise a bunch of very thorny problems.
C
So, how do you think the first set of users are going to use COSI? They're going to have a bunch of... I mean, currently, the people who are using buckets are probably going to be the early adopters, and so far everyone has been creating buckets outside of COSI. Obviously, if we ask them, or one of the vendors asks them, to start trying COSI out, they're going to want to see how they can use their existing buckets with COSI, aren't they?
E
B
You can never access it from another namespace, which is the same way PVCs work, right? You create a PVC, and no one in any other namespace will ever see that PVC unless you do something very unnatural. So yeah, there's the question of what we do about the buckets that were created outside of COSI, and then there's the question of...
B
C
I was actually thinking about that yesterday; it doesn't have to be one solution. So here's the thing: what do we do about PVCs with ReadWriteMany mode, where you can mount something like NFS volumes in multiple places?
B
B
B
That's the idea: ReadWriteMany gives you more than one pod at one time, but because it's still a namespaced object, you still can't share it across namespaces unless you do something unnatural. I want to emphasize: there are tricks you can play to get around this, but they're totally unsupported.
C
Okay, let me ask you this: are there any resources in Kubernetes where we have a bunch of namespaced objects that aren't bound... I think by using the word "bind" we're making things complicated.
D
C
C
I mean, if we were to consider buckets and bucket requests that way, where creating a bucket is different from... I mean, we already kind of have it like that. Creating a bucket is different from using a bucket: when you use a bucket there's no real binding on the bucket back to the bucket access request, and creating is a completely external process.
C
C
But yeah, what was the answer, if anyone can remember?
B
So the idea was, when you're doing this dynamic provisioning, a user is going to make a request, and then a controller is going to see that request and respond to it by creating an actual object. And likewise, you need to be able to delete it by deleting that request object: the controller sees the deletion and then deletes the bucket. The problem is, if you do all of that with just the namespaced object... I'm trying to remember what the issues that arise are.
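(A rough sketch of the dynamic-provisioning pattern being described: the user creates a namespaced request, and a controller responds by creating the actual bucket object. The API group, kinds, and field names here are illustrative only, not an agreed API.)

```yaml
# User-facing, namespaced request (illustrative names, not the final API).
apiVersion: objectstorage.example.io/v1alpha1
kind: BucketRequest
metadata:
  name: my-br
  namespace: alice
spec:
  bucketClassName: fast-buckets   # hypothetical class that selects a provisioner
---
# Object the controller creates in response; deleting the request above is
# what triggers the controller to clean this up.
apiVersion: objectstorage.example.io/v1alpha1
kind: Bucket
metadata:
  name: my-br-generated-abc123
spec:
  bucketRequestRef:
    name: my-br
    namespace: alice
```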
C
What I'm saying is, right now the whole problem comes from... it's not even a big rework, actually; it's a tiny change if you think about how to implement it. All I'm saying is, maybe BucketRequest shouldn't be namespaced.
D
B
Okay, we're talking about the brownfield case, or all of the API considerations around allowed namespaces in brownfield. So the issue with making them non-namespaced is that you don't know who owns what.
B
B
C
Okay, last we spoke, RBAC didn't seem like a good approach, because that's just one form of access control; you can't reasonably expect all customers to use RBAC, can we?
B
Yeah, I mean, it's fundamental to Kubernetes. There are a lot of Kubernetes use cases, like namespace-as-a-service, where you just come in and say, "hey, I need a cluster to do some work on this," and the answer is "here's a namespace in a Kubernetes cluster somewhere, go nuts, go run your workload there," and...
C
Is it here, with the custom credentials... sorry, the custom admission controller? We can do that with the cluster-scoped objects as well. We can do that...
B
No, we're talking about, when someone creates a bucket, who is allowed to delete that same bucket, or even access it. The namespace gives you a marker of where it is and who owns it, and the Kubernetes RBAC system is naturally built around namespaces, so that if Alice is in Alice's namespace, she can't see the stuff that's in other people's namespaces and can't mess with it. Is that okay?
C
B
B
Like you just explicitly make the shared ones non-namespaced. I think in his proposal you still had namespaced ones for the buckets you didn't intend to share, and that model kind of works. But now you have namespaced buckets and non-namespaced buckets; you have ordinary user buckets and cluster buckets; there are two different kinds of buckets, right?
C
What was the issue with having just non-namespaced ones again, like if you didn't have the namespaced buckets?
B
If you don't have them, there's still the question of who has access. So I think if you run with Tim's proposal, if you have user buckets in the namespaces and cluster buckets that are non-namespaced, then in order to do anything with clustered buckets you need cluster roles, and ordinary users wouldn't be expected to have those. So the ability to manage the shared buckets would require elevated privileges.
C
You could have a namespace-selector kind of field in there. That would work very well: when you create the bucket, you put a namespace selector in there, and all namespaces matching that selector are allowed to, you know... Or you can just have allowed namespaces there, just a list of namespaces.
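(The two options being suggested could look roughly like this on the shared bucket object, either an explicit allow-list or a label selector. The kind and field names are illustrative, not an agreed API.)

```yaml
apiVersion: objectstorage.example.io/v1alpha1
kind: Bucket              # cluster-scoped, shared bucket (illustrative)
metadata:
  name: shared-logs
spec:
  # Option 1: an explicit allow-list of namespaces.
  allowedNamespaces:
    - team-a
    - team-b
  # Option 2 (alternative): a label selector matched against namespaces.
  namespaceSelector:
    matchLabels:
      bucket-access: shared-logs
```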
B
On the clustered resource, and update and delete and all the other verbs. And it's not clear to me that, even if you try to encode some sort of ACLs into the clustered bucket... I don't know who's going to deny the delete call when it comes in. The RBAC system is going to say, "oh, your role is allowed to delete these things, okay, it's deleted." Who's going to veto that delete request?
B
C
B
C
Okay, yeah, that's the question: do ordinary users need to create buckets? Imagine a world where they didn't.
B
Well, I mean, in the very old days of Kubernetes this is how volumes worked, right? No one could create volumes except for the admin, and people thought that was a terrible state of affairs. So we invented these dynamic provisioners, where you just ask for storage and it is given to you automatically by the system, because people hated the world where all the volumes were statically created by the admin, and I think we'll quickly end up there with buckets.
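(For reference, the volume analogy: with dynamic provisioning an unprivileged user just asks for storage with a PVC, and the provisioner named by the StorageClass creates the volume automatically. This is the standard Kubernetes API; only the names are examples.)

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
  namespace: alice
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard      # the class decides which provisioner fulfils this
  resources:
    requests:
      storage: 5Gi
```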
F
C
B
F
C
F
B
F
B
B
C
So the whole reason we even started this discussion was because, one, allowed namespaces is kind of weird, because if I have a new namespace that I add later on, I can't use that bucket until an admin intervenes.
B
C
H
I may have missed it, I had a short interruption, Ben, but I wanted to get back to your point, unless you've already discussed it, about controlling delete access. Because allowed namespaces is just saying any request, any BAR, in this namespace is allowed. But can anyone delete a bucket? If BR1 creates a new bucket, and then there are some BARs that reference the bucket outside of BR1's namespace, so we're now in a different namespace and I reference the B directly in my BAR...
H
B
So, Jeff, I was talking about this in the context of the BR, or whatever we replace the BR with, being a clustered object, right? That's what Sid had been suggesting. And I was saying, in that world the RBAC system requires you to have access to the clustered object to create them and to delete them, and that's an elevated privilege. So in that world, if that's all you give people, then ordinary users can never create ordinary buckets, and to the response that, you know...
B
B
So I think, if we have some sort of a clustered object, then the answer is that only someone with elevated privileges can manage the clustered ones, right? If there's a shared clustered bucket object that can be seen by multiple namespaces, then only an admin can create that and only an admin can delete it, or someone who has a special role for managing those kinds of things.
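(That "special role" would be ordinary Kubernetes RBAC; something like the ClusterRole below. The RBAC syntax is standard, but the API group and resource name are placeholders for whatever the clustered bucket object ends up being called.)

```yaml
# Hypothetical role an admin, or a delegated "bucket manager", would need in
# order to manage shared, cluster-scoped buckets.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: shared-bucket-manager
rules:
  - apiGroups: ["objectstorage.example.io"]   # placeholder API group
    resources: ["buckets"]                    # hypothetical cluster-scoped resource
    verbs: ["create", "get", "list", "update", "delete"]
```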
H
Who has... if you ignore the multi-cluster piece, with our current model, with BRs namespaced and BARs namespaced, the bucket delete is triggered by the deletion of a BR under normal circumstances, right? And who can delete the BR? It should only be someone in that namespace. So the issue...
E
H
B
Well, Tim's suggestion had been: do both. Have the namespaced ones so that ordinary unprivileged users can create and delete buckets that are only usable in that one namespace, but also have a clustered version of them that lets someone with elevated privileges create and delete buckets that can be seen by more than one namespace. And it feels like a workable model. I mean, it has more objects in it, which is something we'd have to think about, but it's not a crazy way to do it.
B
The specific... well, okay, my concern with that is that it still doesn't help you with what happens when I want to share across clusters, right? The question of sharing across namespaces is addressed by just making a clustered object, and then everyone can see it. Sharing across clusters is not addressed that way.
B
You have to do something in the different clusters, and my preference had been to say, well, let's not create any clustered objects, let's make everything namespaced, and then just accept that whatever mechanism works for sharing cross-cluster will also work for cross-namespace, right? That felt better to me, but I mean, it's not a deal breaker, I just like that model. The...
E
B
H
H
The user side of things is the declarative spec portion, say, of a BR, and then status is what COSI adds; that could basically be the B in the B/BR pair. You know, BR.spec is the declarative user stuff and BR.status is the COSI bucket-instance stuff. I think that's what Tim was saying, if I understood him, and he wasn't...
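(If I follow the spec/status idea, it might look roughly like one object where the user writes spec and COSI fills in status with the provisioned bucket's details, so no separate B object. The kind and field names below are purely illustrative.)

```yaml
apiVersion: objectstorage.example.io/v1alpha1
kind: BucketRequest
metadata:
  name: my-br
  namespace: alice
spec:                        # declarative, user-owned portion
  bucketClassName: fast-buckets
status:                      # filled in by COSI; plays the role of the "B"
  bucketID: my-br-abc123     # hypothetical provisioned-instance details
  phase: Bound
```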
C
C
C
That being said, it was just a suggestion, but when I think more about it, there do seem to be issues there. I want to quickly revisit what Ben was just suggesting: what if we said it's all namespaced only, and if a second namespace wants to use the bucket, they can go ahead and create the bucket again, and if it's an existing bucket, they just get the existing bucket instead of ending up creating a new one? What if we...
H
C
B
C
B
E
B
C
B
C
B
C
On the what-if: what if we rely on the backend somehow, what does that mean? Can we ask the backend to delete this when all accesses are done?
C
How would the backend track all accesses? Okay, so, I mean, the backend might have the ability to do that, but...
B
B
B
B
C
That's... in that sense, so yeah. So no, I do see the problem with this approach; I wasn't sure where I was going with this whole "can we rely on the backend" thing. But what if we didn't even have... why can't we address this just with deletion policy, where you have multiple copies of the bucket?
B
You can, and I just wanted to point out that you have to set the deletion policy right or you get strange behavior. This is the decision that was made with snapshots, and I'm living with it right now. No, it's fine, you just have to set the deletion policy right. Do you...
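(The snapshot precedent being referred to is the deletionPolicy on a VolumeSnapshotClass, which controls whether the underlying snapshot is removed or kept when the API object goes away. This is the standard snapshot API; the driver name is just an example.)

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: keep-snapshots
driver: example.csi.k8s.io
# Retain keeps the backing snapshot when the VolumeSnapshotContent is deleted;
# Delete removes it. Setting this wrong is the "strange behavior" mentioned above.
deletionPolicy: Retain
```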
B
C
To be honest, that seems like a cleaner model. I think the risk of having a more complex system is greater than having a system where it's harder to delete, like the way we have it right now. Even to us, it's not very clear how we're going to do sharing, but with this idea where buckets are namespaced, this problem won't...
B
Exist, yeah. But you'll also recall from our prior conversations that if we ever want to get into the bucket mutation business, you'll have all of the same problems amplified even larger, right? That's when we started talking about how you'd need a mutation policy that says, if you have multiple sources of truth, which one to listen to, and we didn't like that. And unfortunately we don't have any proposals to do actual bucket mutation on the table.
B
But we feel like it's something we could have someday, and we were trying to figure out if there's a way to avoid having the same problem as deletion, but with mutation, where you have to figure out which one is allowed to enact changes on the actual bucket when you have multiple.
D
B
Yeah, this is where we had eventually landed before we went into the mode where we were trying to get the KEP approved and started focusing on other things. We had this idea where, instead of cloning the B, we could just share it across namespaces by having other BARs directly reference the B, and you only ever need one per cluster. You still need a second one if you go to a second cluster, but at that point, I mean, cross...
B
C
C
B
C
Yeah, so let's say you have a bucket, okay, and you don't mutate it through the bucket object. Just like you create a bucket through a bucket request, you mutate it by, say, using a bucket mutation request, and all the buckets get the updated values. Just throwing it out there.
B
C
It's actually imperative, right, because you're defining what the mutation should be. Oh, I get it, yeah; it's a question of desired state versus "do this now," right. No, you're saying, okay, so a bucket mutation request would show the new state; it wouldn't show "if you're in this state, go there."
C
And then take care of updating the bucket and then back-populating it into the bucket object.
B
C
B
B
F
B
B
And then create a new one, right. But I mean, that's an imperative API style, right? It's like "do this now," and then once it's done, there's nothing else to do. The declarative APIs are: I want it to be in this state, I want it to stay in this state, and if it changes, put it back in this state. It's an expression of the desired state which is constantly reconciled.
D
C
B
I
C
C
Later it's defunct, and once it's defunct, you can't rerun the job. It's again one-shot, and then, you know, yeah.
B
Yeah, Jobs are interesting, because a Job is a way to run a pod and then re-run it and re-run it if it fails. So it's a nice wrapper for pods to keep trying until you reach some terminal state, and having a tracking object around to pay attention to the pods is useful, and then, yeah, it reaches some terminal state. But two Jobs can never interfere with each other.
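(For context, the Job pattern being described, a wrapper that retries a pod until it reaches a terminal state, looks like this; standard batch API, example names only.)

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: one-shot
spec:
  backoffLimit: 4            # retry the pod up to 4 times before marking the Job failed
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: busybox
          command: ["sh", "-c", "echo done"]
```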
B
K
C
C
C
So anything you do to the bucket... or when using the bucket, you're only using the local bucket, whether you're using it from a different cluster, where you use it with the namespace, or within the same cluster but in different namespaces. Each namespace gets its own copy of the bucket.
B
So the closest analog we have, I think, is resizing a PVC, because PVCs have this size property: when you create a PVC you specify the minimum size, and then, if you later change your mind about it, you can increase that value.
B
I don't think buckets have a concept of a size, but you could imagine a scenario where they did, and the problem you're going to run into when you're trying to share one of these buckets is that if one guy tries to resize it and the other guy doesn't, which size is correct?
K
K
K
B
C
B
Right, so I'm saying PVCs are in a situation where they're not shared; they're always just in one namespace. So it's perfectly reasonable to say, "that five-gig PVC, I want it to be 20 gigs," and for Kubernetes to go do that for you. It's a reasonable workflow. When we implemented it, it was extremely complicated, by the way, but it does work, and it partially works because you're not sharing it.
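(The resize workflow mentioned is just editing the claim's requested size; the controller then expands the underlying volume, provided the storage class allows expansion. Standard PVC API, example names only.)

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
  namespace: alice
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: expandable      # its StorageClass must set allowVolumeExpansion: true
  resources:
    requests:
      storage: 20Gi                 # was 5Gi; raising it triggers the resize
```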
B
K
B
Not with CSI. You can create an NFS share outside of Kubernetes, using whatever you want, and then go and create NFS PVs on two different clusters and point them at that storage. But then an attempt to resize those won't go anywhere, because the NFS driver doesn't know how to resize; it just says, "whatever, I don't know what to do with this."
B
It's just that NFS access there is an access-only CSI driver. If you actually use something like the NetApp Trident driver, which can give you an NFS volume, it was never going to give you a volume that you can use on multiple clusters; you're only going to get a PV that's bound to one PVC in one cluster and one namespace through a traditional CSI driver.
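(The pre-provisioned NFS case being described: the same export, created outside Kubernetes, referenced by statically created PVs, potentially one per cluster. Standard PV API; server and path are examples.)

```yaml
# Statically created PV pointing at an NFS export provisioned outside of
# Kubernetes; an identical PV could be created on another cluster.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs
spec:
  capacity:
    storage: 100Gi
  accessModes: ["ReadWriteMany"]
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com
    path: /exports/shared
```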
D
C
I mean, if at all it's possible to, you know, make it cluster-proof, it's not a bad idea. So the whole reason we discussed this whole back-populating thing was: how do we do bucket mutation?
C
C
One approach we discussed was: you have something called a bucket mutation request that describes the final state, and once the controller mutates the bucket, all the buckets get the updated copy of the current state. So all the buckets will have the state saying "encryption is true" now. We decided that...
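(A sketch of that mutation-request idea as discussed: an object that declares the desired end state, with the controller fanning the result back out to every bucket copy. This resource is entirely hypothetical; nothing like it exists today, and all names are made up.)

```yaml
# Hypothetical resource; it describes the desired end state, not an imperative step.
apiVersion: objectstorage.example.io/v1alpha1
kind: BucketMutationRequest
metadata:
  name: enable-encryption
  namespace: alice
spec:
  bucketName: shared-logs
  desiredState:
    encryption: true   # after reconciliation, every bucket copy reports encryption: true
```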
K
It doesn't change anything, right? It just makes the request ignorant of what happens after it's been processed; that's all it does. It explains to the caller, the one creating the request, that after you get the status back saying it succeeded, you cannot know anything other than that it once succeeded.
C
K
C
B
The only other approach I could see would be to have something like a mutation policy where, if it's set to false, you basically ignore any specs for an existing bucket, and if it's true, you apply the reconciler, you reconcile. It's...
B
B
K
It's more than that. It's also that whenever you start sharing, like I said before, you cannot go back, right? It's not that you can suddenly say, "okay, now I'm owning it," just because I feel like changing this property; you don't know what will happen, because somebody else can decide the same thing. So we're not adding any synchronization mechanisms here; we're just allowing somebody to say, "I am the owner, and that's it, I know what I'm doing," right, yeah.
K
I want to be able to reconcile this over and over, and if somebody else is also thinking the same, we're going to fight over this resource external to the cluster, and neither one of us can really know that we're doing that while somebody else is also doing it. But if somebody wants to just assume ownership, they can do that. That's what we're saying, right?
K
It just allows somebody to say, "I'm the owner of that," whether it's through bucket creation, like a bucket request, where I say, "I created this bucket, so I own it and I can change things about it," or by sharing a bucket and then saying, "well, I'm the administrator of this environment, I know that this bucket should be owned somewhere, and I'm going to set the ownership on this cluster."
K
B
The other stance we had is that we could say, well, we're just never ever going to do any kind of mutation of the buckets, and stay out of this area entirely. That's perfectly reasonable for, like, a 1.0, but it feels a little less reasonable for the long run.
E
K
Because we had to have creation, so we probably have deletion, but you're saying maybe if we don't add mutation, we're going to stay in the safe zone of what we know from other CRDs, from other entities and resources in Kubernetes that do the same, yeah.
G
K
L
Sorry, this is Mauricio; it's my first time joining this meeting, and I was wondering if you have a document with notes of what we're talking about. From what I've heard so far of the multi-cluster problem we have when we try to share an OB, I think there is no source of truth: we would have two clusters, like two brains, and no one could be the owner. So I'm just going to throw an idea out there.
L
So one way of doing this is to build the provisioning through Terraform. What we could do is have one coordinating cluster that creates the bucket and updates its properties through Terraform, and the state would be stored in the Terraform state, with Kubernetes acting just as a proxy to that Terraform state. And if we have multiple clusters, then all of them would go through Terraform. So the idea is just to have one single source of truth, since we have multiple clusters and it's kind of hard to decide who is the owner.
B
B
I mean, at least in principle, we've said that's something we want. I think it's still up for debate whether you can move from the so-called greenfield case to the brownfield case, or to what degree we want to support brownfield buckets at all. The thread where we started this meeting was, you know, are we still... I think.
K
G
K
B
C
Yes, okay, what if we said... okay, going back to the old question. What if we said, for all buckets that you create, you have the ability to mutate or delete, and for everyone that uses an existing bucket, again the greenfield/brownfield dichotomy, you would still... we still keep this dynamic of having a bucket per namespace, but for all those buckets that are requested...
C
All those pre-created buckets that are requested, they just don't get the ability to mutate or delete. Then all of these problems go away, because deletion and mutation are handled in the namespace that created it. Everyone else just gets to use it; they don't get to change anything.
F
B
Okay, so we're still happy with: everything is namespaced, and sharing is done by creating copies of things in other namespaces.
B
C
That was... you do that across... I mean, it's symmetric; it's simpler to just have the same mechanism no matter how you're using...
B
The bucket. Well, but we did convince ourselves that there was some advantage to this other scheme, and I'm trying to remember what it was. I think part of the problem is, if Alice and Bob want to collaborate in that scheme, you need two Bs. Neither Alice nor Bob can create the second B; you need an admin to come in and do it for you, or a controller to clone it with the deletion policy set to false, and then give the name to Bob.
B
E
B
If the B has Bob's namespace in the allowed namespaces... that was the other key: somehow the allowed namespaces needs to get updated. But if that was done ahead of time, then you don't need an external controller or an admin to help you collaborate; you just need a copy of the bucket in the cluster where you are, and then you point your BAR at it. Of course, collaborating across clusters doesn't work; you still need someone to come in and create that B.
C
C
C
B
K
But what's the purpose of the bucket request in that sense? I mean, isn't it just the bucket access request that I'm actually interested in? I'm just trying to figure out the intention behind having a BR in that flow. Why is the BAR not enough to describe that? That's the only question: in that case, why do you need a...
K
C
It's the "if it does not exist" kind of semantics. So if I'm going to access... if I'm going to have a BAR that points to a bucket, and if the bucket or BR... sorry, so BARs, first of all, point to BRs, even though that can be changed here; the semantics is that the BR should have created a bucket before the BAR can point to it. Whereas if you had it the other way, where the BAR pointed to the bucket directly, then the bucket should have already existed.
B
C
So, okay, is that better than using another BR and having the BAR always pointing to the BR? In that model, in the model that we had a few months back, we said either the BR points to a bucket or the BAR points to a bucket... sorry, either the BAR points to a BR, or the BAR points to the B.
B
B
C
No, no, you still need it. So that means, okay, that means, if I have a BAR pointing to a B, what would happen is, let's say another accessor is trying to point to the same bucket; since the lifecycle of the bucket copy is tied to the BAR, it gets kind of tricky. I think there's...
H
H
G
C
A
D
K
K
C
C
So, okay, I'm a little confused. Did we resolve that whole issue? I think the last thing... I don't know if we resolved this. The question was: if we had this approach of bucket-per-namespace, and we disallow mutation and deletion from any namespace where the bucket wasn't created, where did we end up with that conversation? Okay.
B
So we agreed that that would work, but the downside of it is, in order for Alice to share her bucket with Bob, somebody has to create another bucket, and the non-namespaced object, the cluster object...
B
We had a proposal on the table from a couple of months back where, instead of doing that, Bob just creates a BAR that points directly to Alice's B, because she gave him the name and Bob's namespace is on the allowed list for the bucket, and then that BAR can use the B directly without ever having a BR in Bob's namespace. The benefit of that scheme, aside from not having multiple buckets, which doesn't really add much, is just that you don't...
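(That proposal, roughly sketched: Alice's bucket lists Bob's namespace as allowed, and Bob's access request names the bucket directly, with no BR in Bob's namespace. The API group, kinds, and field names are illustrative only.)

```yaml
# Alice's bucket (created via her BR); Bob's namespace is on the allow-list.
apiVersion: objectstorage.example.io/v1alpha1
kind: Bucket
metadata:
  name: alice-bucket
spec:
  allowedNamespaces: ["bob"]
---
# Bob's access request pointing straight at the bucket by name.
apiVersion: objectstorage.example.io/v1alpha1
kind: BucketAccessRequest
metadata:
  name: bob-access
  namespace: bob
spec:
  bucketName: alice-bucket   # no BucketRequest needed in Bob's namespace
```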
B
So you can just more easily share a bucket across namespaces. That's the only benefit, but we had liked this scheme as of a couple of months ago; before we were trying to get the KEP approved, this was the official stance.
B
G
B
By an admin, with no BR bound to it whatsoever, yeah, that was the stance before this meeting started. Yes, but in that world you need access control, and therefore allowed namespaces has to be on that B, so that when Charlie comes in and tries to create a BAR pointed at Alice's B, the system can say, "no, I'm not doing that, because you didn't share that bucket with Charlie, you only shared it with Bob."
C
But that'll be required even if we do the sharing the other way, won't it?
C
C
That thing... so the whole conversation... okay, we're out of time. I think we should continue this on Monday. This was again a good discussion. I feel like last meeting, that is, last Monday, when we discussed this we were closer to a solution for some reason. Let me go back; I'm going to go back and look at the recording and see if, you know, there's something there. I think today it was...
C
Maybe it was worth it to go and revisit the old approaches, but we haven't yet decided. Today's discussion did not end up with a decision on how we're going to do this allowed-namespaces thing. So yeah, let's continue the conversation on Monday, but we're definitely making progress. It was definitely good to go back and see where we were, and why we didn't choose those things.
E
E
Okay, like why we are making this decision, you know, why we go with this alternative and not the others. This way it keeps us from going in circles, because at least we can go back and see.
C
So what he said, and I agree with it, is that allowed namespaces really isn't doing anything, and it's actually more confusing, because again, the namespace has to exist, and if you want to create a new namespace that needs access, it gets tricky again. So I don't know if allowed namespaces is the right approach; we need some way to tie it back to the user. That's where the description...
K
Just speaking my mind: I think in the beginning of the work on the KEPs you had pretty good diagrams of these entities that captured the relationships between them, and maybe we can go back to just comparing two diagrams.
C
Yeah, we've been discussing it for a while, Nicholas; it's a simple use case, but yeah. I wasn't capturing any of that in the diagram, but we can definitely clarify all of these things. And let's start with that; let's start with defining, like Ben said: do we even need this change, and why? Is the current mechanism okay or not? And then we'll go from there.