Description
Kubernetes Storage Special Interest Group (SIG) Object Bucket API Standup Meeting - 21 June 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
C: So, in terms of architecture, we're kind of okay. Here is what concerns me about the architecture we have today. How do I put this? I feel like the way the user experience is set up today is still kind of weird. For instance, we don't really enable self-service, even though we say we do. We don't really do it, because we still need an admin to come in and create a bucket class before a bucket can be created.
C: Okay, so did we decide, then... there's a protocol field in the bucket class. Did we decide to move it to the bucket request? I think we did, right? Yes.
D: Yes, we were saying that protocol is tantamount to the volume mode, because if you get something other than what you asked for, it's useless to you, and therefore it should be on the BucketRequest. Furthermore, if it is on the BucketRequest, it shouldn't also be on the bucket class, because that just creates an opportunity for conflicting sources of truth, for absolutely no benefit.
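A minimal sketch of the shape being discussed, with protocol requested on the BucketRequest rather than duplicated on the BucketClass. Field names and the API group are illustrative, not the finalized COSI API:

```yaml
# Hypothetical sketch: the protocol lives only on the BucketRequest,
# so there is a single source of truth for what the app needs.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketRequest
metadata:
  name: my-bucket-request
  namespace: dev-team
spec:
  bucketClassName: fast-buckets   # class no longer carries protocol
  protocol: s3                    # useless to the app if it gets anything else
```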
D: That is the kind of thing that, at least for volumes and snapshots, I think is on the class, and so the cluster-scoped object does inherit it from the class, and that enables the admin to set it. I mean, I think you could leave it out, and then you just get the default of Delete for the deletion policy, which means that by default, if you don't do anything, buckets that are created and then deleted actually get deleted.
D: But okay... I mean, should it? I mean.
D: With the exception of, you know, an admin-specified policy to retain things for the brownfield case. That's the one situation where I feel like you need a way of saying: look, this bucket that I'm putting into Kubernetes represents somebody else's bucket that we don't own, but we need to put it into the system so that we can use it, and therefore, when we're done with it, we're just going to delete our representation, not the actual bucket.
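The brownfield case described here can be pictured as a statically created Bucket whose deletion policy is Retain, so deleting the Kubernetes object never touches the real bucket. Names and fields below are an illustrative sketch, not the finalized API:

```yaml
# Hypothetical sketch of a brownfield Bucket: it represents a bucket
# owned outside the cluster, so deleting this object only removes the
# in-cluster representation, never the actual bucket.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: Bucket
metadata:
  name: imported-logs-bucket      # created directly by an admin
spec:
  existingBucketID: logs-prod-eu  # pre-existing bucket outside Kubernetes
  deletionPolicy: Retain          # never delete the backing bucket
```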
C: Give me one second, I'm going to drop out... just keep talking, it'll be fine in a second. I wanted...
D: Yeah, I mean, it's tricky when we talk about portable versus non-portable, because the thing that is portable is taking the default storage class, right? If I create a PVC and I don't specify any storage class, there are certain assumptions that are still valid for any storage class, and as long as I adhere to only those assumptions, my PVC should be portable to any cluster.
D: It's when you get into the special stuff, where maybe I'm depending on some subtle aspect of the storage class and I'm specifying a specific storage class, that you start to lose portability for that workload.
A: So I have a question here. Can you hear me, first of all?
D: I think of it as like a menu. You go into the restaurant and you can look: they've got the hot dogs and the hamburgers and the cheesesteak, and you pick something. You're going to make a lot of hamburgers in any given restaurant, right? So some menu items might get chosen more often and some might not.
D: Let me think about how to express this. I guess the way I think about it with deletion policy in particular is what happens when you create, like, a brownfield type of volume. You know, if you have a pre-existing volume and you just want it to be available in Kubernetes, you create the PV directly, and then there never is a storage class, right, because the storage class is only used by PVCs.
D: If you just create a PV that represents a volume that already exists, the system still needs to know what should happen when I delete this, even though there was never a storage class associated with it, right? And so it goes on the object. And then, I guess, if you have...
D: Only the PV has it; the PVC doesn't get it. So that's the... oh yeah, that's it. I guess I'm trying to think of how this makes any sense. I guess you could have put a deletion policy on the PVC.
D: That would then tell the system what kind of PV to create, but it just feels like a weird thing to do, right? Because any PVC you create inside Kubernetes, presumably you want to be able to delete it too. To create a PVC that, when deleted, will leave garbage behind, right...
D: So to be able to set the deletion policy at the PVC layer is a strange thing. Now, to put it on the storage class is also a strange thing, to be fair. Maybe the idea there is that the admin wants to have a special kind of volume that just never gets deleted. So it's like a garbage-generating storage class: every PVC of that storage class will become garbage upon deletion and just hang around, and I can't think of a good reason to do that.
D: But I don't know why you would want to do that. You know, the main reason that I see for having a... maybe.
D: Whatever, perhaps, yeah. But the real value I see with these Retain and Delete policies is the brownfield case, where I want to be able to represent in the Kubernetes cluster a pointer to an object that isn't mine, and therefore it's important that I don't try to delete it. That's the main value I see with Retain. But your higher-level question of how you judge what should be on the storage class versus the PVC, or the bucket class versus the bucket request...
D: It's going to come down to things that only the admin would know about, things that are going to be opaque to the driver, or things that are going to be opaque to Kubernetes because only the driver understands them. Those things go on the classes, because they would have to: the end user won't know about them.
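The split described here, with admin-set, driver-opaque settings on the class and user intent on the request, could be sketched roughly like this (the driver name and parameter keys are invented for illustration):

```yaml
# Hypothetical sketch: parameters are opaque to Kubernetes and to the
# end user; only the driver named by the class understands them.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClass
metadata:
  name: fast-buckets
provisioner: s3.example.com       # illustrative driver name
parameters:                       # driver-specific, set by the admin
  tier: premium
  region: eu-west-1
```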
D: So, as I was just saying, deletion policy feels like a special case, where it actually represents sort of a complex contract between the end user and the admin. You're saying: look, I need to use something, and Kubernetes needs to have a reference to it, but Kubernetes shouldn't delete it, and so I'm going to create this representation object in Kubernetes. And that could only have been created by an administrator or someone with elevated privileges.
C: ...the underlying system. So hold on a second. You're saying that, in the case of buckets that we create through COSI, the deletion policy is always Delete?
D: By default, I mean. We could follow the pattern of what they've done in volumes and allow you to set a different default in your bucket class, and then we could stamp that on all the buckets that get generated by that bucket class, so that you could have some other default deletion policy.
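Following the volumes pattern mentioned here, a class-level default deletion policy that gets stamped onto every generated bucket might look like this (field names illustrative):

```yaml
# Hypothetical sketch: the admin sets a non-Delete default on the class;
# every Bucket generated through this class is stamped with it.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClass
metadata:
  name: retain-buckets
provisioner: s3.example.com
deletionPolicy: Retain            # default for all buckets of this class
```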
D: No, no. Remember, for a bucket created through COSI, the way it gets deleted is by deleting it through COSI. So yes, as the end user you may have knowledge about what its lifecycle should be, but the way you control its lifecycle is by whether you delete it or not. The whole point of the deletion policy is that it goes a little underneath that and says: look, even when the end user deletes his thing, still don't delete the backing bucket. It's like a veto point on deletion.
B: So the other use case of that with PVs is the rebind to another namespace, right?
D: The bind controller will handle that case. If you have an unbound PV and you create a new PVC and you specify the PV name in the spec, it will bind. But that's a weird thing to do.
D: Yeah, so there's a presumption that the admin is playing some game if you're mucking with these policies and doing things. And the thing that I don't fully have my mind wrapped around with COSI is our brownfield bucket use case: can we get out of the world where an admin has to be involved? Because, as I understand it right now, our plan is that the way you do brownfield...
B: You know, maybe it will be easier with, let's say, AWS S3, right? If I would like to import a bucket into my Kubernetes cluster from AWS, then one way is to have an administrator create the YAML. But another way is just to have the driver generate that bucket, because it knows how to create buckets anyway.
B: ...get to the driver, like, what is the... So I'm saying that it might be out of the scope of COSI, and just in whatever API the driver has for its... you know, not "client" in that sense, but like...
D: Yeah, but the point I wanted to make is: no matter what we do as a COSI community, there will be proprietary side doors in drivers, right? We can't prevent that, so you have to just assume that drivers are doing proprietary things on the side. The question is: do we, as a COSI community, want to attempt to define a standard for importing buckets, or do we want to say, look, we're not touching that, that's a proprietary thing?
D: I think of it differently. With CSI, at least, the way this breaks down is that there is a sidecar; actually there are usually multiple controllers, right? So someone creates a bucket request. There's some controller that sees that and in response creates a bucket, and then there's a sidecar that sees the bucket, and it goes and interacts with some driver to make the bucket real, and then, when the bucket is real, it goes and updates the bucket object with the relevant information about the actual bucket that was created.
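The flow described here ends with the sidecar writing the real bucket's details back onto the Bucket object. A hedged sketch of what that populated object might look like (object name, status fields, and the generated bucket ID are all invented for illustration):

```yaml
# Hypothetical sketch: after the driver makes the bucket real, the
# sidecar records the concrete details on the Bucket object's status.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: Bucket
metadata:
  name: br-dev-team-my-bucket     # generated for the BucketRequest
spec:
  bucketClassName: fast-buckets
status:
  bucketAvailable: true
  bucketID: my-bucket-7f3a        # assigned by the actual object store
```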
B: ...the wrong term. I don't mean the driver in the sense of the component. I meant the provider's, you know, suite of software for provisioning COSI buckets. Right, right.
B: Yeah, I'm imagining any one of the providers we're talking about, but let's take AWS, for the simplicity of everyone's knowledge. It might be that within the AWS COSI provisioner, the COSI namespace for AWS, you have one of the controllers there be able to intercept some kind of event that causes it to import a bucket, and create a Bucket object for that, and then...
D: That would cause the sequence of events you just described to happen, and that could become an additional part of the API that we would have to decide. You know: is that object namespaced or non-namespaced, who has access to it, what do you put in it, all those questions. And then we could have a standard way of saying: okay, if you create a bucket import request, the sidecar is going to do x, y and z, and it's going to call this special RPC called ImportBucket on the driver.
D: That'll do something. But so far we have said no, we're not going to do any of that. If you want to import a bucket, you just have to manually fill in the Bucket object with all the correct details that the driver would have put there, and the driver doesn't help you. And that's... and I'm...
B: Okay, I think we are okay with not including that currently in COSI, but I'm saying that providers will probably have some way of being the generator of the bucket class themselves, and not an admin. You know, they might have it just as a customized template somewhere, right? It doesn't really matter, but I'm saying that...
D: ...that way as well. I guess I don't know if I'm okay with that. I mean, it feels like an area for deeper exploration. We should think through a concrete use case and the different ways to achieve it.
B: ...that admins can always create a bucket, right? That's kind of the thing. So if somebody can create a bucket, and an admin gives the AWS COSI provisioner the permission to create buckets, then... I mean, what do you mean you're not okay? You're not okay with not including a standard API, you mean, or you're not okay with providers doing that?
D: I mean, you're correct that an admin will be able to do it regardless of what we choose to do, and similarly a vendor will be free, with the appropriate level of permissions, to automate the process that...
B: ...than today. That's it, I mean. It's not that we are preventing this from being further enhanced, right? We're just saying that today it's not included. And it's like OBC, right: Red Hat started with OBC, the object bucket claims, because there was nothing else, but we never said, well, if there's something standard we won't use it. Of course we will adopt it. I guess it's the same here. You just...
B: Right, yeah, it's an object bucket claim and an object bucket. That was kind of the analog of PVC and PV, and...
B: Which is fine. I mean, it was just taking time to get around to this effort and get people interested.
C: No, I think before that, even. I came in probably around July of last year, but before that I believe it was running for more than a year, even before that.
B: So Jeff was the common person between these two efforts, mainly, and so you might consider what he did with OBC, trying to promote that through storage, as part of the beginning of the COSI effort. It's...
C: You know the design, Guy, like: are they namespaced or are they clustered, the claims and the object buckets?
B: So it's much more similar to PVC and PV than what we're doing. OBC and OB are, like, exactly the same, but I think it was mostly less capable. So the brownfield case was very...
D: The Google guys were here, like Andrew Large, and he was arguing strongly for having a bucket access request and a bucket access object. That would be an extra level of indirection, so that you could do credential minting, and have multiple credentials per bucket, and have that all be driven through COSI. That was his big contribution, I think: pushing for having that access separate from the bucket, because that makes the design a lot more complicated, but also more... it...
D: ...to be involved, but right, I haven't seen him.
B: For a while. So in OBC, obviously, by the way, you couldn't do that, so it was less standard, right? You could generate an OBC, and the provisioner would decide what to give you, what type of credentials, but it had to give you credentials with the request, right, the claim, as a response. It was...
D: ...there. So the big power, I think, of having the separate access requests and access objects is that it makes brownfield way more practical, because if you didn't have access objects, then the guy who owned the bucket would basically have the keys to the kingdom and could do whatever he wanted. But then sharing would become really, really hard, because you would have multiple people trying to share access with basically the same credentials, and if you wanted to revoke someone's access, you'd have to revoke everyone's access.
D: What do all the accesses point to, if not a bucket request? And so I'm still liking having them point directly to the bucket. I'm just not personally convinced that we can stop there and say, well, how did the bucket get there? Someone created it, and it's not our problem to worry about how the brownfield bucket got into the system.
C: So it's actually pretty straightforward, and, like Guy is saying, it's not very hard to get there. I mean, just like you said: a bucket import request. Or why not just have a type inside the bucket request that says it's either type import or type create?
D: So maybe what we should be doing, then, is making sure that our BucketRequest structure is flexible enough that we can sort of flip it around and say: I don't want a new bucket, I want an existing bucket. And make that a clear decision that you make in your BR object: this is a new bucket versus this is an existing bucket, and here are the details.
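The "flip it around" idea, where the BR itself states whether it wants a new or an existing bucket, could be sketched like this (purely illustrative field names; not a settled design):

```yaml
# Hypothetical sketch: "new versus existing" is an explicit decision
# on the BucketRequest instead of a separate import object.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketRequest
metadata:
  name: reuse-logs
  namespace: dev-team
spec:
  bucketClassName: fast-buckets
  existingBucketName: imported-logs-bucket  # omit to provision a new bucket
```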
B: But the problem was in the details. We were always saying that if the user provides something that only the driver can understand... or not the driver, but whoever, right, somebody in the providers, the vendors, something that only a specific vendor can understand, then this is kind of becoming a bad API, I guess.
D: Oh yeah, I'm agreeing with Guy here. The contents of the actual Bucket object will end up being driver-specific, and an end user couldn't reasonably be expected to know driver-specific things. So maybe there's a way where you could specify something in the bucket request that was not driver-specific, hand that to the driver, and have the driver crunch on it and turn it into a driver-specific bucket. But then the...
C: Maybe there's a way to specify it in a different way. So for S3 protocols, the bucket name is the unique thing that gets specified, and the provisioner is in the bucket class. So if you move across provisioners, the bucket name remains the same.
D: But there's a bunch of other stuff, like access-related information, that the driver will need to do its job.
B: Think about it: you don't have a bucket class, right, you're just using the default, and you're specifying a bucket name as an existing one. What are you saying here, exactly? That no...
B: Think about it: I can ask for, like, well-known bucket names that every cluster should have, for example, that every provisioner should have. But otherwise I'm bound to say something that only my driver can resolve to the thing that I mean, right? If I tell you my bucket is called foo on AWS, versus a bucket called foo on GCP, I'm not getting the same thing at all. What am I specifying here, that I just...
C: But create itself is, you know... so the real problem here is that these are not idempotent operations. Or let me put it a different way: these are not declarative operations, as in, you're not declaring that the bucket should already exist; it'll only work if the bucket already exists. That's kind of the situation we're in.
B: You know, maybe what I've said is not that bad, because we're just trying to enable somebody who knows the bucket exists to import it, right?
B: ...to make all of this request, the import request, portable to any environment. But it's like this: if I do want to adapt to an environment, I can say, okay, now I'm importing this bucket; that's my adaptation to this environment, and I get a bucket and a bucket access, and now my pods run as if they run anywhere else, once I give them a bucket access request. But yeah, maybe the import doesn't have to be the portable piece.
B: Maybe it just needs to be the adaptable piece for getting the bucket there, but...
D: In order for this scheme to actually work, you need to be able to specify, in your bucket import request, all the security information that the driver is going to need to actually take ownership of the bucket, or get enough control of the bucket that it can create accesses, right? It needs to be...
B: If I approve the import... it's not that, because once the bucket is there, technically speaking, I am allowed to ask for access. It still needs to comply with allowed namespaces or whatever mechanism the bucket specifies, but if it does, anybody can now get minted credentials for that bucket, basically. And so now import becomes a question of who can import something that...
C: ...somehow, right. I still think there should be some kind of connection between who is requesting the access, not in terms of namespaces but in terms of the workload itself, or the user. So instead of saying, you know, these are the namespaces that are allowed...
B: I thought about it later, after the previous discussions, and I think any separation inside the namespace, to me, doesn't make sense at the Kubernetes level. The fact that I have, say, three service accounts and each one has different roles: that's great in terms of the workload.
B: The running workload cannot do certain things; that's perfect. But if I am the user of this namespace, I can just choose whatever service account I want, so I don't have any restrictions, basically, inside the namespace. That's mostly what I like about having multiple service accounts in the namespace: it just provides...
B: My point, just to complete that, is about the bucket access request being able to ask to add that to my namespace for a certain bucket. It's enough that I just know that the namespace is allowed to do that, and now it's the user's fine-grained decision which pods to give it to, right?
B: True, also for a service account in the namespace that has high-level permissions: if I have a service account in my namespace and it was given a high level of permissions, now every pod, technically speaking, can start from that service account. But I mean, the control that I get as a user is to mix and match service accounts to pods, so that the code that I execute has the permissions that I meant it to have.
B: That's the right level that I'm getting. I'm not getting a separation that tells... and that's okay, I mean, I understand that I'm not getting a separation where, inside my namespace, I have, like, two users in the same namespace. That's not the case; it's a single user.
B: Yeah, you want to run one piece of code with different permissions than other code, but your tenants inside the namespace are the container images, not multiple users trying to prevent each other from accessing different things inside the same namespace. It's not that level, again.
D: And if a user did want to have two different service accounts with different levels of access, he could create two different BARs and use different ones for different pods. So, let's say he had one workload that he trusted a lot, and another workload he didn't trust very much. For the one that he didn't trust very much, if he decided he wanted to cut it off, he could just revoke the BAR for it, exactly, exactly, and leave the other one intact.
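The two-workloads example can be pictured as two BucketAccessRequests bound to different service accounts, so one can be revoked independently. All names and fields below are illustrative, not the settled API:

```yaml
# Hypothetical sketch: one BAR per trust level; deleting the BAR for
# the untrusted workload cuts it off without affecting the other.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketAccessRequest
metadata:
  name: trusted-access
  namespace: dev-team
spec:
  bucketName: imported-logs-bucket
  serviceAccountName: trusted-sa    # pods the user trusts run with this SA
---
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketAccessRequest
metadata:
  name: untrusted-access            # revoke by deleting just this object
  namespace: dev-team
spec:
  bucketName: imported-logs-bucket
  serviceAccountName: untrusted-sa
```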
D: And so it's the user's decision to sort of subdivide access within his namespace if he chooses, but nothing is going to prevent the user from accessing something he wants to within his own namespace. It's about him preventing his own containers from accessing things he doesn't want his containers to access.
D: And it's important to realize that this is how Kubernetes is designed: namespaces are the walls between things, and users have a little bit of extra information, but not enough that we can build a security mechanism around it.
D: ...this is my bucket. So any import mechanism would have to involve a scheme where you supply enough credentials to prove ownership, or you would have to be an admin, in which case the admin has access to the credentials of the COSI driver itself.
B: So there's one level... first of all, there's the level of the cluster: which cluster can claim a bucket from the service. And that's kind of resolved by itself, by the credentials that the driver has for that system, right? Basically, it can be everything, because I get, like, permission to do anything, an S3-admin kind of AWS permission, and then I see all the buckets in my account. That's the cluster level, and then we're asking...
C: Well, we're saying that if you actually went ahead with this import mechanism, and if it was just free-range allowed for anyone to import a bucket, then even the allowed-namespaces mechanism that we have doesn't matter anymore, because anyone could just guess the bucket name and get full access to the bucket. So how...
B: Wait, wait a second, but there's an alternative to that: saying that the administrator has set up the AWS provisioner, the AWS COSI provisioner, so that it has some permission, some connection, to one account, right? And this account has, let's say, 10 buckets, and then the only import requests that would...
B: Now what it does is this: I assign the cluster to that scope, right, the account, or even something like a sub-account within AWS. So I can provide an IAM role which has permissions to see just three buckets, and not the entire list in the account, and then I give that to my Kubernetes cluster's AWS COSI provisioner. And then, inside the cluster, the question becomes: what type of mechanism do we need to prevent different namespaces from specifying import?
B: You know, who wants to import, and what's the flow of that? Because you could also imagine that the provisioner would just pre-populate all the buckets that want to be created. But then we were saying: okay, it is interesting; in some cases we won't want to import everything, we just want to selectively import things from the account.
D: Right, right, but still, what that means is that you've made the policy decision to have credential-less imports, which means you have no security within that domain. Within the domain that you set for yourself, any user can jump on any bucket and take it, and nothing prevents them from doing so.
D: CSI wasn't designed to work that way. CSI was designed to be Kubernetes-agnostic, right, but because some people had designed CSI drivers that would only work with Kubernetes, they actually needed additional information in order to work with Kubernetes, and so that was added as, like, a hack for that specific case.
D: I don't know... we're at the top of the hour, and I feel like this is a good discussion for maybe making brownfield more automated and more integrated. But I don't know: is this the direction you want to be going, Sid, where we dig deeper into brownfield and figure out a way to properly include it in the spec, and not just punt on it?
C: I don't mind punting on it currently. In terms of priorities, the highest priority is actually getting through the API review. But here's my problem with just thinking of it that way: there is something wrong with our access model. We're seeing some flaws in it, at least for the brownfield case that we just talked about, and fixing that might end up making changes to the existing access model we have. Something about...
C: The existing access model doesn't sit right with me. I still can't put my finger on it.
C: Well, no: if you want to access a bucket from a namespace different from the namespace you created it in, it's kind of a weird workflow. I can't tell you what exactly is wrong; thinking about it, technically everything seems right. But if I don't come up with a better solution, or if I don't come up with the real reason, we'll go with what we have. Other than that, my focus is just on getting the KEP through.
C: So in the meantime, if we figure this out, that's fine. There is one other priority that we have in terms of what we need to figure out, which is: how is this going to tie into the pod lifecycle, like, what are we going to do in the kubelet, eventually. But other than that, in terms of big-picture stuff, I think we're mostly okay. There are some user-experience things that are still on my mind, but those are minor.
B: And how do we get through the API review? It seems like this is political, or hard in terms of getting, you know, people's time.
D: ...like chickens with their heads cut off. If we did it right now, most of the API reviewers are not looking at KEPs; they're looking at code or doing something else, and we could get their attention. But yeah, if we wait another month, we'll be back in the boat where we're staring enhancements freeze in the face and trying to scramble to make the deadline.
C: Yeah, last time we were in a tough spot, because it was reviewed the day before the deadline, even though we had actually been trying for six weeks before the deadline. I mean, this time we will again try early.
C: I mean, we've addressed some of the major concerns there, so let's see how it goes. But before we even do that: Ben and Guy, I'll share whatever I have with you once my internet is back, and let's iterate on this. We could show it to Tim right away, but I'd like you to take a look and make suggestions as well.
B: We can try to list the out-of-scope items and the future outlook on these topics better, and maybe that will help them understand whether the API is complete enough. Something of that sort, yeah.
C: Well, there were problems with completeness, but the bigger problems were simply with things like bucket sharing and self-service. The question kept coming back: the user can't really do self-service, because there is that admin step of creating the bucket class involved. Now we have a better answer to that; that has all changed. And the same thing with bucket sharing.
C: We've really just gone ahead and said we'll stick with the existing model. So yeah, a major problem with the last API review, I think, was just that the KEP needed a lot of improvements; the KEP was not in good shape. That's why I took over, and I'm writing it from scratch, just cleaning up; I deleted the whole thing that was there before. Right, yeah, anyway.
C: So our current priority is finishing the KEP, but I don't know how much help I'll need on that. I can write the whole thing, and I think that might be better for consistency's sake, at least for the first version. With that said: are we going in the direction you want to go with these design meetings, and spending time on the appropriate things?
C: I would really like to change these to update meetings, where we just give updates on where we are, but others need to be here before we move forward in that direction. Okay, okay, yeah. I have to drop.