Description
Kubernetes Storage Special Interest Group (SIG) Object Bucket API Design Meeting - 17 June 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A
Yeah, so we will start off by coming to a conclusion on the namespace selector issue, and then we'll talk about how to go from here to, basically, you know, alpha release completeness. Yeah, let's get started with this: let's get started with the allowed namespaces, and we'll go to the next step from there. All right, so let me start by sharing my screen.
A
Okay, so last meeting we talked about bucket sharing, and after a long discussion we basically came to the conclusion that the best way to go forward with bucket sharing, or access restriction for bucket sharing, would be by using this list of allowed namespaces.
A
There was another idea put into the mix, which was to use a namespace selector rather than a list of namespaces. A namespace selector is very similar to how, say, services select pods, or any other selector in the Kubernetes API system.
A
It seems more intuitive, though it is a little more involved; there's a little extra work to be done to specify what the namespaces are. The benefit over allowed namespaces is that with the selector you can add namespaces retroactively, where you want a bucket to be accessible, without editing the Bucket itself.
A
So if you use the selector, then you can allow a new namespace to access the bucket without changing the Bucket object itself.
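To make the two shapes concrete: a minimal sketch of a cluster-scoped Bucket carrying each form of access restriction. The field names (`allowedNamespaces`, `namespaceSelector`) are illustrative assumptions, not a finalized API; this is exactly what was still being decided.

```yaml
# Hypothetical Bucket spec fragments; field names are assumptions.
# Option 1: an explicit list of allowed namespaces.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: Bucket
metadata:
  name: shared-bucket
spec:
  allowedNamespaces:
    - team-a
    - team-b
---
# Option 2: a label selector resolved against Namespace objects.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: Bucket
metadata:
  name: shared-bucket
spec:
  namespaceSelector:
    matchLabels:
      bucket-access: shared-bucket
```

With the selector form, granting a new namespace access is just `kubectl label namespace team-c bucket-access=shared-bucket`; the Bucket object itself never changes.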
C
And match expressions, right.
B
I mean, I'm sure users will ask, right? They'll say: look, I created this bucket, I want everyone to use it, I don't want to have to muck around with anything complicated, just tell me how to share with everybody. And it would be nice to be able to have a simple answer for that guy that doesn't involve something like: yeah, put a label on every single namespace and then match that label, because that's just more work from the perspective of someone who just wants to share with everybody. Yeah.
B
That is also something that's really straightforward and simple to do, so that every time you create a bucket request and one of these Buckets pops out, it already has a populated allowed namespaces that matches. Or, actually, maybe it can be the empty set, because it's already bound to the BR, so you don't need any allowed namespaces in order for that BR to function correctly. So maybe for dynamically created buckets this really is empty.
C
Right, I think so. That was my thinking: that the BR reference doesn't require selecting it in the spec, right? Hopefully it's consistent enough.
B
It changes the selector to match as well, but it's an edit, right? But we're assuming that ordinary users don't have access to touch these objects, these Bucket objects. No, no, the Bucket's allowed namespaces... ordinary users cannot see the Buckets. They can only see their BRs, and then the name of the Bucket that their BR got bound to. But they can't see the object itself, because it's a cluster-scoped object.
B
For the ordinary greenfield use case, yes. Like, if you just create a BR, you know it will get bound to something, but you should never care about the details of the thing it got bound to, because your handle is the BR, right? It only becomes interesting when you're trying to share it, and then someone else has to find a way to reference this thing.
B
That automatically facilitates granting of access; that's something we can design later.
C
With a list it's the same issue, right? With a list, like, if we had just a list of namespace names, there would be these two issues that you just raised: select-all is also a problem, you have to have some other flag or something, and also, you know, how to default to selecting myself, or...
A
Well, no. Regardless of how many namespaces are allowed to have access to the bucket, the namespace that created it should always have access to the bucket. Oh.
B
Right, but in the brownfield case there will be no namespace that created it. The admin will create it and it will not be bound to anything; it'll just be floating there for people to use, but there will be no BR whatsoever, right? If it's a pre-existing bucket, you don't need a BR; you just create the B. There's no negative.
B
Oh right, I would say no. I would say that the controller that's dealing with BARs and trying to bind them: if your BAR specifies a BR and it's in the same namespace, you're done, right? That is your access check: that the BAR and the BR are in the same namespace. You don't need to perform any additional checks.
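A sketch of that check, with assumed field names: the BAR names a BR, and since both are namespaced objects, living in the same namespace is itself the authorization.

```yaml
# Hypothetical objects; field names are illustrative assumptions.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketRequest
metadata:
  name: my-br
  namespace: team-a
spec:
  bucketClassName: default
---
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketAccessRequest
metadata:
  name: my-bar
  namespace: team-a          # same namespace as the BR it names
spec:
  bucketRequestName: my-br   # the controller binds this BAR with no
                             # further access check, per the discussion
```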
C
The question is just whether it makes a difference that I need to deploy differently based on whether I created a BR or not; whether it matters to anybody. Well, anybody can fix it; like, if we explain it this way, I'm sure that, you know, any deployment can be done.
B
So the brownfield case that I'm most interested in exploring: I've heard whisperings and suggestions that at least some people are thinking about creating a backup API that will layer on top of COSI, such that you'll have buckets that Kubernetes knows about that will be targets for backup operations, and in such a context you will probably have buckets being shared across namespaces.
B
I'm just imagining a hypothetical backup architecture where you have, you know, a bunch of different users and they're shoving data into some backup service. You probably just want to have one bucket that everyone shoves their data into, and then you have some backup service that knows how to sync and source data from that bucket. And so then, in such a hypothetical world, how does everyone get a reference to the bucket that they're going to be using for their backup target? Like, is it a...?
B
I suspect that some of the people in the Data Protection Working Group are interested in COSI mostly for that reason, but I haven't thought through all the details. I mean, it obviously makes sense that, you know, backing up to object stores is a very obvious thing to do.
B
They'll be referring to the same bucket. Again, without having a prototype backup API to think about this, I don't know what you would do. Would you have some sort of a separate object for the backup API that would refer to a Bucket, or would individual backup requests refer directly to individual Buckets or bucket access requests? I mean, I haven't thought it all through, but that feels like a use case that we don't want to screw up; let me put it that way. Right.
B
It should be possible to layer a backup API on top of COSI, and it seems fairly obvious that the brownfield use case will be necessary to make that work, because no one wants to create their backup target as a bucket request inside a, you know...
A
...restore yourself. So that actually brings up a bigger question, and it kind of leads to something that we were talking about earlier. But before we go there, can we, like, decide on this?
A
No, we will do that. But without going there, I think we can still decide if we want, or how we'll do, the, you know, allow-all or allow-nobody-but-myself.
C
It's... so yeah, that's true! So what you do is...
B
The bucket classes aren't relevant, because you're manifesting the Bucket directly, and no bucket class will ever get referred to in that workflow. Unless... well, that's why I wanted to pop up a level and say: what does it look like to do the brownfield use case? In my imagination it's just, you know, kubectl create bucket, and then you give it some YAML, and then you're done.
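A sketch of that imagined brownfield flow, with assumed field names: the admin manifests the cluster-scoped Bucket directly against a pre-existing backend bucket, and no BucketRequest or BucketClass appears anywhere.

```yaml
# bucket.yaml -- hypothetical admin-created brownfield Bucket; the
# provisioner/existingBucketID fields are illustrative stand-ins for
# however a driver would identify the pre-existing backend bucket.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: Bucket
metadata:
  name: existing-backup-bucket
spec:
  provisioner: s3.example.com
  existingBucketID: backups-2021
  allowedNamespaces:
    - backup-system
```

Then `kubectl create -f bucket.yaml`, and, as the speaker puts it, you're done.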
B
Perhaps, but another alternative for the green-to-brown workflow would be to say: you do the regular greenfield workflow and you get a bucket that has no allowed namespaces.
B
Then you do something else, some other step that mutates your Bucket to grant access to other specific namespaces that you specify, and then those users can do it. So we could make it a two-step process where, rather than just picking the right bucket class, you have to use some other object that we haven't designed yet to mutate the Bucket, to grant access to somebody else, right?
B
Like some BucketShare object, or maybe there are some extra fields on your bucket request that you can specify to explicitly enable sharing on a bucket that you own, because you have the BR. We could think through the green-to-brown use case. So...
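Neither option had been designed at this point; purely to make the two-step idea concrete, a hypothetical BucketShare object might look something like this, with every name here an assumption.

```yaml
# Entirely hypothetical -- the meeting explicitly calls this an object
# "that we haven't designed yet". It mutates an owned bucket's access.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketShare
metadata:
  name: share-my-bucket
  namespace: team-a          # the namespace that owns the BR
spec:
  bucketRequestName: my-br   # ownership proof: the BR lives here
  grantNamespaces:           # namespaces being granted access
    - team-b
    - team-c
```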
A
There's a reason we're not able to come to a conclusion, and the reason is quite obvious: of the three use cases that I've listed here, we're only able to do two, but not all three. We do two of them, and then we try to do the third one, and then the whole thing breaks down.
A
Maybe there is a solution to have all three, but right now it looks like we can do two of these very reliably, and the third one we're not able to. So right now we're talking about bucket sharing. So, you know, take for instance: we give up bucket self-service. Let's just take that as an example, and then we only allow, we only support, these two use cases, sharing and mutation. Giving up self-service would mean BRs are not namespaced.
A
I'm talking strictly about bucket creation, not access, right, right. So if you, if there is...
A
We can't give up self-service; I think that's obvious. Well, I don't know if that's true. I really don't know if that's true, because I do believe that the majority of the use cases will be like the backup use case that we just mentioned, where it's brownfield; greenfield seems very unlikely. And from seeing how everyone has done it so far, among all the customers that at least MinIO has, nobody is doing bucket self-service.
A
Let me ask you this. It can easily be addressed by having some temporary bucket that's cluster-scoped and accessing it as a brownfield bucket. Does that make sense? You don't need to go through a bucket creation step in order to have a temporary bucket that you just want to use and pull down. There is no real use case like that, I think.
A
You don't lose portability by saying, you know, the BR is cluster-scoped.
B
Okay, I'm thinking, like, namespace-as-a-service models are becoming more common, right? Where, basically, I want to deploy some app, and you go to your ops people and they say: okay, here's a namespace in a cluster and you have access to it, go nuts. But they don't give you more than just, you know, one namespace worth of access to a cluster. They're not going to give you access to clustered objects, and they're not going to...
B
Yeah, yeah. So you need to be able to read storage classes and bucket classes, well, if you want to be able to distinguish between them. I mean, the real portable use case is...
B
You leave the storage class blank and you accept the default, and similarly you could leave the bucket class blank and accept the default. And that's the most portable thing you can do: just say, hey, as long as there is a storage class or a bucket class, I'm happy with using it.
B
Before we had raw block volumes, that really did work for storage: a PVC was a PVC was a PVC, until we added raw block volumes. And even now very few people actually use raw block volumes, so we still get away with just saying a PVC is a PVC, you know, on any cluster, and it just works. And yeah, if you want a raw block volume and the default storage class doesn't support it, then that's too bad, but it...
A
How does it... okay, let's come back to this question of how important self-service is. See, I went and looked at what customers use, or how customers use buckets, and no team is actually provisioning buckets on their own. I disagree; I mean, in the cloud every team is provisioning their buckets. It's an admin, I mean, not the application users. Is that fair to say, Guy?
C
I think buckets are getting created by, like, many different types of users, not necessarily admins. It's not a, how do you say that, it's not a sensitive resource; like, if it's yours, you can get full access to it, right? You cannot really do too many bad things with it.
A
Okay, so let me ask you this: in that case, since it's not very sensitive, is it okay to give users cluster roles to create Buckets?
B
It's just like PVCs, right? Because once you create a PVC and start putting data into it, all of a sudden, if someone else gets hold of your PVC, you're in trouble. As long as the PVC was empty, who cares; the moment you start using it, it becomes sensitive. And I think the same is true of buckets: the moment you start putting data into the bucket, you care who can read it, who can write to it, and who can delete it.
A
So in AWS it seems okay. So, okay, okay, I'm just thinking this through. So what do we gain if we go this route? What do we gain, can you explain? So yeah, let me explain that. It seems to me like, again, I'm not even saying we go this route; I'm saying, more like, we're able to get two of these use cases, right, but not all three.
A
Since it's cluster-scoped, in such a situation sharing and mutation become really easy, because all kinds of access...
B
It's just getting very, very simple. Well, I think that this is how Kubernetes ended up with PVs originally, and then PVCs and storage classes coming later. And PVs were cluster-scoped because, yeah, they were sort of a global shared resource.
A
So far the problem has been trying to convince ourselves and the reviewers that, you know, this will somehow work for all three use cases, and the answer to me seems like no. I don't think we should aim to do all of it right now.
A
So take self-service. If you didn't have self-service, and, you know, you just have a Bucket, no bucket request or bucket class, then all access to the bucket would be via a BAR pointing to that cluster-scoped Bucket; all access would be that way. In such a scenario, sharing is the same for greenfield or brownfield, and mutation, since there's only one copy of the Bucket, is very straightforward.
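In that no-self-service model the only namespaced object left is the access request; a sketch with assumed field names:

```yaml
# Hypothetical: with BRs removed, the namespaced BucketAccessRequest
# points straight at the admin-created, cluster-scoped Bucket.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketAccessRequest
metadata:
  name: backup-access
  namespace: team-a
spec:
  bucketName: existing-backup-bucket   # cluster-scoped Bucket, no BR
```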
B
...service. So everything is just brownfield, and it's consistent; that's the benefit. If you remove greenfield and say it's all brownfield, you have consistency, and I'm actually sympathetic to this view, because you could dramatically simplify the API by saying: look, the only use case we're after is... you know, bucket creation is out of scope; somebody else creates the bucket and then tells Kubernetes about it, and all COSI has to do is manage...
C
The problem, I mean: do we still have a problem having the bucket access request point to a BR or a B? I mean, we said we can do that, and then it works. So by removing the greenfield, or the dynamic provisioning here, or the self-service, we are simplifying the model, that's true, but it's not that we are solving anything; we're just delaying it, I'm not sure.
C
That model will happen when you do cross-cluster sharing. I think Jeff's saying something.
J
Sorry, yeah, yeah, sorry, yeah! So all that's true. Technically you have mutation if you have a single B; we can agree on that, and we've been discussing that for several months. But I think there's a nuance there that Sid and I talked about yesterday, and that is: if you have multiple users sharing that B, and then the admin changes the B, there's no consensus that those changes propagate to the backend.
A
...of change. I'll help explain it. So if you did have self-service, and if you did do it by making changes to the BR... even deletion is a valid form of mutation. Let's say you deleted the bucket by having the BR be deleted, or any other form of mutation.
A
What happens is that all the other namespaces that rely on it are now basically controlled by another namespace, which probably doesn't even have visibility into who else is using it. It actually becomes an administration nightmare. But when you take this out, when you take out self-service and have a consistent model... it's not consistent.
C
You're just saying that it's the admin's headache to begin with, because there is no other way, right? The admin would have to be responsible for deleting a bucket and handling all the bucket access requests using it anyway. In that model where there is no BR, it's still the admin's responsibility.
B
I thought we were going to get the best of both worlds with our model of, you know, greenfield and brownfield, where the greenfield solution was: users have complete control over their buckets. And in the brownfield situation, the admin creates the bucket and there are no BRs, and so it's clear what's going on. It only gets weird when you do, like, green-to-brown transitions, and we...
B
We could force people to make that more of a clean, sharp break when they wanted to move from one mode into the other mode. Instead of allowing some weird hybrid, we could say: look, you can be in one or you can be in the other; if you want to jump from one to the other, you have to make a very explicit decision, like deleting your BR.
C
We're losing the audience. And I actually was saying, I don't know if you heard me, that I felt like we were closing in on these three.
B
A switch with self-service or sharing, but not both. Which I thought we kind of had with the, you know, creating-buckets-through-BRs self-service model. You know, by default that's never going to be a shared bucket, right, unless someone does something special. And similarly, if someone just creates a Bucket with no BR, it's going to be shared, period, and there's no self-service around that. So, like, you're forcing...
A
You know, what I'm trying to say is that that's where this idea came from; it's very similar to that idea. You're right: in the cluster scope there's no BR, it's just the B, no BC even; but in the namespace you have the BR and probably the BC also. Yeah, yeah. That would drastically simplify this model. I think it will solve a bunch of issues that we're dealing with.
C
It seems like it works. Even if... if I created the bucket, why can't I share it? I mean, it's not that I want to mutate the list of shares now; that's not what I meant, that's not what I mean to support. But why can't it be a shared bucket? I mean, from the bucket class having a selector that says, I don't know, all backup applications can access it, for example, right? Just, this is...
B
...once you go across more than one Kubernetes cluster. And I really want to be careful about talking about bucket mutation, because it's this hypothetical thing that we have no concrete examples of. It's not that I don't believe we could come up with some, but I feel like every time we talk about it, we don't actually pin down what we're talking about, and it makes it very hard for people to understand.
G
So can we just say what are the fields that we allow to mutate, and then the rest of them are not allowed to mutate? And I can see what...
B
When we talk about bucket mutation, I think what most of us have in mind is something where you make a request in Kubernetes on an existing Bucket, and some sidecar makes a call to some driver that goes and changes something on the actual device that's storing those bucket bits, right? And no one has spelled out a specific example of what that might be.
A
AWS has the concept of quotas. I'll have to look it up, though; you might be right.
B
You just have to go straight to the backend and make the change that you wanted. Now, the one thing you can mutate about PVCs in Kubernetes is the size, right? If you create a 10-gig PVC, you can change your mind later and come back and make it 20 gigs, and Kubernetes will do that work for you. But it will not make any other changes to a PVC on behalf of the user.
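That PVC behavior is real, existing Kubernetes: raising `spec.resources.requests.storage` (on a StorageClass with `allowVolumeExpansion: true`) is the one spec mutation the system will carry out against the backend.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi   # later raised to 20Gi; Kubernetes resizes the volume
```

For example, `kubectl patch pvc data -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'`; other changes to the claim's spec are not acted on, as the speaker notes.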
J
I like that we clarify mutation, because there's some mutation that's Kubernetes-specific, like deletion policy or allowed namespaces, and then there's mutation that would go through the driver to the backend. And what we're saying now is that any mutation that could involve the driver, we're not supporting it. It's an opaque type, COSI doesn't care, and we're not going to have a controller looking for those mutations and invoking the driver for that type of mutation.
B
So, Sid, why did you bring us to this slide? It was just to illustrate the...
B
So what we can say is: mutation is the least valuable of these three things. You care about bucket sharing; we do care about bucket self-service. I mean, I agree in principle you could take an approach where bucket self-service was like a v2 thing, but I don't think you can say it's something no one will ever want. The best you could do is say we're going to postpone it till later, and I don't think that's a good idea, just for the...
A
I see where you're coming from. Okay, so in that case we don't have to worry too much about... I mean, we don't have to worry about bucket mutation, and portability across clusters is simple now, since we just don't do mutation through COSI; it's always out-of-band. So, that being said, coming back to node selector...
A
So, oh, sorry: namespace selector. So we said we'll go with this approach, and we said, you know, in the case of greenfield it's pretty straightforward.
B
Guy was the one who was proposing a selector with both match labels and match namespaces last... or on Monday, and we lost... no.
B
Yeah, okay, yeah. I mean, I don't know how weird that would be from an API perspective, but I think having both gives you better flexibility. Yeah, I don't have super strong feelings about this; it's just that doing everything through labels feels like it could put you in a weird spot, at a minimum.
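A sketch of the combined shape being discussed, with assumed field names: an explicit name list for the simple "just these namespaces" case, alongside labels for the dynamic case.

```yaml
# Hypothetical combined selector (the proposal attributed to Guy);
# both clauses are assumptions about field names, not a finalized API.
spec:
  namespaceSelector:
    matchNames:              # explicit allow-list; no labeling needed
      - team-a
      - team-b
    matchLabels:             # label-driven; extensible after the fact
      bucket-access: "true"
```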
B
It requires that anyone who wants to be able to grant bucket access to namespaces must have modify-namespace access in their service account, which is a pretty powerful role to grant to a controller, to be able to modify namespaces.
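For contrast, this is the kind of RBAC (real Kubernetes syntax) that a labels-only approach implies, since labeling a namespace means patching the Namespace object itself:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-labeler
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list", "patch", "update"]   # broad power, as noted
```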
A
All right, so, okay, let's just go with this, then. So I think, other than this, there's only one more thing that needs to be resolved before we, you know, go ahead; I mean, I think we can go ahead and talk to the API reviewers right away. The one open question that I have is something that Jeff brought up about bucket ownership.
J
You know, we're writing a driver for Rook Ceph, for the Ceph RGW object store, and our Ceph object store supports some user-based policies, and one of them is quota, so quota per user. And it keeps coming up: who's the owner of the bucket? And our answer has always been, well, essentially the driver is run with some credentials.
J
Those credentials are used to become the owner. The author of the BR is not the owner; the driver is essentially the owner. But that makes it very hard to support user-based policies on the backend bucket, and so that's what I tried to discuss a little bit in a few paragraphs, which I'll send you a Google Doc link to.
B
Yeah, I've seen that kind of problem all over the place in Kubernetes, where you have tenancy at two different levels and somebody wants to map them to each other when that's not a natural thing to do. And yeah, the answer I would give: if someone wants to control quotas on buckets, what you really need is Kubernetes quotas on buckets, and that can be enforced all the way down. So if Alice asks for a 100-gig bucket, you... right.
J
And we have the quota system, and you could control the number of resources created, the number of BRs or number of Bs or whatever, and then size, as long as we put that in. You could do that; the BR, or the BC, something, would have to support a size. But anyway, this is very incomplete; it's just some thoughts I had recently, and I'll just share the link. You can see: storage, COSI.
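Kubernetes object-count quota already covers custom resources via the `count/<resource>.<group>` syntax, so capping the number of BRs per namespace would need no new machinery; a size quota, as noted, would first need the BR or BC to carry a size field. A sketch, assuming BucketRequests live in the objectstorage.k8s.io group as in the COSI proposal:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: bucket-quota
  namespace: team-a
spec:
  hard:
    count/bucketrequests.objectstorage.k8s.io: "5"   # at most 5 BRs here
```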
A
Thanks, Jeff. Sure, yeah, please, you know, please go through the doc; it's very well thought out. Leave your comments and we will discuss it on Monday.
A
All right, so this gives me enough info to go finish the KEP and, you know, reach out to Tim.
A
I've got to run now. All right, I think time's up; good discussion today. Bye. Before the Monday meeting I'll have the, you know, finished-out KEP on the Slack channel, and, you know, we'll have the next discussion, and we go from there.