Description
Meeting of Kubernetes Storage Special-Interest-Group (SIG) Object Bucket API Review - 13 August 2020
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
B
Okay, so let me move this away. Good morning, everyone. Today I'm continuing where we left off last week, where we were discussing the design of this entire proposal. Last week we discussed how bucket provisioning happens.
B
We defined three different actors in this model: the user or admin being one, COSI is our system, and the backend is the actual object storage backend. Now, the workflow we described was: a user or an admin requests a bucket, and COSI uses that information to communicate with the backend and actually create the bucket.
B
We described the resources that are required (when I say resource, I mean the Kubernetes API types used in this workflow), and those were a BucketRequest, a BucketClass, and a Bucket.
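A minimal sketch of how those three resources might relate, expressed as plain Python dicts standing in for the manifests. All field names, the driver name, and the class name here are illustrative assumptions, not the final API:

```python
# Illustrative sketch of the provisioning-side resources discussed above.
# Names and fields are assumptions for illustration, not the final API.

bucket_class = {
    "kind": "BucketClass",
    "metadata": {"name": "fast-s3"},
    "provisioner": "example-s3-driver",      # hypothetical driver name
    "parameters": {"region": "us-east-1"},   # opaque, backend-specific
}

bucket_request = {
    "kind": "BucketRequest",
    "metadata": {"name": "my-br", "namespace": "app-ns"},
    "spec": {"bucketClassName": "fast-s3"},  # the user/admin picks a class
}

# COSI would combine the two to stamp out the Bucket it creates on the backend.
bucket = {
    "kind": "Bucket",
    "metadata": {"name": "my-br-generated"},
    "spec": {
        "bucketClassName": bucket_request["spec"]["bucketClassName"],
        "parameters": dict(bucket_class["parameters"]),
    },
}
```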
B
So last week, after discussing this, we went on to talk about how, once provisioning was covered, an application is provided access to this bucket. So I want to continue from that point.
B
So we'll talk about granting access right now. For granting access there is a fourth actor involved, and that is the application.
B
So after a user or admin submits a BucketRequest to create a bucket, the user or admin again requests credentials for an application to consume that bucket, and the way they would do it is as follows.

To start out with, let's talk about the first two actors, highlighted or selected using this dotted box here, so the interaction between the user and COSI. The way a user would request bucket access (sorry) is by using a BucketAccessRequest.
B
A
bucket
access
class
is
a
grouping
of
a
similar
type
of
bucket
accesses
and
you'll
see
what
that
looks
like
here.
So
this
is
the
definition
of
bucket
access
class.
Where
you
specify
what
kind
of
policy
you
want
and
the
every
cloud
provider
we've
seen
so
far:
s3
gcs
azure
and
other
private
cloud
vendors
for
that
give
object,
storage
as
a
service.
B
They
seem
to
have
a
different
type
of
model
for
policy
actions
and
the
best
way
we
couldn't
find
a
common
abstraction
and
it
doesn't
seem
likely
that
it'll
be
easy
to
find
one.
So
the
method
we've
gone
with
here
is
making
policy
actions
and
opaque
value
passed
into
a
config
map.
B
So that is the policyActionsConfigMap field. That config map would have the vendor-specific representation of the policy actions, and we also have a parameters field here to pass in any additional parameters for the backend to actually provision the access; say, in GCS there's a project ID, and something similar in Azure.
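A sketch of what that slide's shape might look like: a first-class policyActionsConfigMap reference plus free-form parameters. Field names and the namespace are assumptions; it also shows why Kubernetes can only do shallow validation here, as discussed just below:

```python
# Hypothetical BucketAccessClass as described on the slide: a first-class
# reference to a config map carrying vendor-specific policy actions, plus
# opaque parameters for the backend (e.g. a GCS project ID).

bucket_access_class = {
    "kind": "BucketAccessClass",
    "metadata": {"name": "reader-writer"},
    "policyActionsConfigMap": {
        "name": "s3-policy-actions",
        "namespace": "cosi-system",        # hypothetical namespace
    },
    "parameters": {
        "projectID": "my-gcp-project",     # example extra backend parameter
    },
}

def shallow_validate(bac):
    """All Kubernetes itself could check: is the reference non-empty?"""
    ref = bac.get("policyActionsConfigMap") or {}
    return bool(ref.get("name"))
```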
B
Parameters, so this is for provisioning access. The whole point of this resource's entire existence is to specify the policy actions. That's kind of how we looked at it, and how we reasoned about making this a first-class field.
D
Oh yeah, can I just interrupt? Your question is a good question, and it's one we wrestled with some. What is the philosophy in the Kubernetes APIs when you have a required field, but it's passed through; in other words, Kubernetes isn't going to look at it? Does that automatically mean it shouldn't be a first-class field, if Kubernetes is not examining it?
C
The problem is, I don't think we can do multi-object validation. So somebody puts a config map here; all you can do is check: is the field not empty? That's the best validation you can do, and I'm not sure that really buys you that much. At that point you might as well stick it into a parameter and let the backend do the real validation, which it's going to do anyway. Yeah.
B
It's
going
to
do
it
anyways.
My
question
here
is
as
someone
writing
this
spec
not
having
a
policy
actions
here.
What
would
that
communicate
in
the
sense
that
if
they
don't
fill
in
any
parameters,
what
would
this
bug,
but
I
mean
if
it's
if,
if
that
field
was
in
a
first-class
field,
do
you
think
there'll
be
a
possibility
that
people
would
create
bucket
access
classes
without
the
policy
actions.
B
Okay,
so
if
it's
already
the
way,
we
do
it
no
problems.
I
think
this
is
very
simple,
just
put
into
parameters
there's
one
other,
so
there
is
one
other
thing
which
is
keeping
it
a
config
map
versus
just
keeping
an
opaque
string.
I
would
still
prefer
config
map
and,
I
believe,
there's
a
way
to
pass
some
config
maps
through
parameters.
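The alternative being settled on here, sketched the same way: no first-class field, just the config map named inside the opaque parameters, with the backend resolving and validating it. The key names are illustrative assumptions:

```python
# Hypothetical BucketAccessClass with the config-map reference carried in
# parameters rather than as a first-class field, as agreed in the discussion.
# Parameter key names are assumptions for illustration.

bucket_access_class = {
    "kind": "BucketAccessClass",
    "metadata": {"name": "reader-writer"},
    "parameters": {
        "policyActionsConfigMapName": "s3-policy-actions",
        "policyActionsConfigMapNamespace": "cosi-system",
    },
}

def policy_config_map_ref(bac):
    """Extract the (namespace, name) pair the backend would resolve."""
    p = bac["parameters"]
    return (p["policyActionsConfigMapNamespace"],
            p["policyActionsConfigMapName"])
```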
C
Okay, I can buy that logic. I think I'm fine with that. If you have a parameter that points to a config map, no issues with that. Okay.
E
Yeah
guys
a
quick
question
for
driverless
what
happens
here.
B
Yep, we'll go with parameters.
B
Jumping a step: yeah, so using the BucketAccessRequest and the BucketAccessClass, COSI then creates a BucketAccess resource by copying over the fields from those two objects. I've represented the BucketAccessRequest and the BucketAccessClass on the left-hand side; the fields tagged with the green keys are the request fields that have been copied into the BucketAccess, and the orange ones are the access class fields copied here into the BucketAccess itself, the policy actions config map data.
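The copy step just described can be sketched as a single function: fields from the BucketAccessRequest (the "green key" fields) and from the BucketAccessClass (the "orange" fields) are stamped into a new BucketAccess, along with the resolved policy-actions config map data so the object is self-contained. All shapes and field names are illustrative assumptions:

```python
# Sketch of COSI's copy step: build a BucketAccess from a BucketAccessRequest,
# a BucketAccessClass, and the data resolved out of the policy-actions config
# map. Field names are assumptions, not the final API.

def make_bucket_access(request, access_class, policy_data):
    return {
        "kind": "BucketAccess",
        "spec": {
            # copied from the BucketAccessRequest ("green key" fields)
            "bucketAccessRequest": request["metadata"]["name"],
            "serviceAccount": request["spec"].get("serviceAccount"),
            # copied from the BucketAccessClass ("orange" fields)
            "provisioner": access_class["provisioner"],
            "parameters": dict(access_class.get("parameters", {})),
            # config-map data copied in, so the BucketAccess no longer
            # depends on the class object once created
            "policyActions": dict(policy_data),
        },
        "status": {"accessGranted": False},  # not granted yet at creation
    }
```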
B
I'm not sure if we should leave that as a parameter here in the BucketAccess or have it as a first-class field. I would leave it in the parameters, given the recent decision, because that would be consistent with the other object. However, that being said, we talked about this, and we thought we should copy the data over from the config map into this.
C
No, that makes sense. You know, once you create a physical object, you should be cut off from the class object, so everything inside the actual object should have everything that you need without the class object. So if you copy over your parameters in this case, you'll have your policy as well.
B
So
at
this
point,
if
you
notice
the
status,
so
when
you
create
the
bucket
access
object,
the
status
is
set
to
false
access
has
not
been
granted.
Yet
it's
just
created
this
object
and
then
it
goes
ahead
and
uses
this
bucket
access
to
talk
to
the
backend.
It
talks
to
the
back
end
using
the
grpc
protocol.
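The state transition described here can be sketched as follows. The real design uses gRPC to reach the driver/backend; here that call is stubbed out with a plain function, and the credential shape is a made-up example:

```python
# Sketch of the grant step: a BucketAccess starts with accessGranted: False,
# COSI asks the backend to grant access (gRPC in the real design; stubbed
# here), and flips the status on success. All names are illustrative.

def grant_access(bucket_access, backend_grant):
    """backend_grant stands in for the gRPC call to the driver/backend."""
    credentials = backend_grant(bucket_access["spec"])
    bucket_access["status"]["accessGranted"] = True
    return credentials

def fake_backend_grant(spec):
    # Pretend the backend minted credentials for this access.
    return {"accessKeyId": "EXAMPLE", "secretAccessKey": "EXAMPLE-SECRET"}
```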
B
As the name suggests, the principal: this is the principal that is actually getting the access. I will go into it shortly. Access policy is a separate field because we were modeling it as a separate field there. However, this would go into parameters now, because vendor-specific implementations would rely on their specific parameters.
B
So
I've,
given
an
example
of
what
that
would
look
like
so
bucket
name,
is
a
generated
bucket
name
based
on
the
example
from
last
week,
parameters
yeah,
we
changed
it
from
bucket
context
parameters,
so
this
should
be
parameters
region,
usc,
swan
within
that
we'll
also
have
this
access
policy.
I've
given
an
example
of
an
access
policy
here
for
read
once
right,
many
bucket,
where
you
have
access
to
the
first
line,
is
a
put
on
put
object
access.
You
can
write,
objects,
get
object
which
you
can
read,
but
you
can't
do
anything
else.
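An access policy along those lines, in S3-style policy JSON: PutObject and GetObject allowed, nothing else. The bucket ARN is a placeholder for the generated bucket name:

```python
import json

# Example access policy as described: write (PutObject) and read (GetObject)
# are allowed; no other actions. The resource ARN is a placeholder.

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject", "s3:GetObject"],
        "Resource": ["arn:aws:s3:::GENERATED-BUCKET-NAME/*"],
    }],
}

policy_json = json.dumps(policy)
```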
B
Okay, so now going into the principal, going back one slide: the principal represents the resource or the workload that's getting the access. Now, we need this field to be provided so that we can use it for revoking access, which is the main reason this is needed.
C
Can
we
go
back,
I
wanted
to
understand
the
file
path
and
what
what
purpose
it
serves
so.
B
Oh no, we're using a CSI driver that is common across workloads. It doesn't...
E
Know
right
you're
mounting
that
csi
volume
at
a
particular
location.
So
is
this
path
relative
to
the
root
of
that
volume,
or
is
this
an
absolute
path?
If
it's
an
absolute,
it
doesn't
seem
like
you're
writing
the
credentials
to
the
volume
itself.
You
must
be
writing
them
into
the
container
file
system.
That's
the
confusion,
yeah!
So.
B
Right, so I should have been clear. So yeah, this needs to go into the container file system outside of... okay, here's the issue.
B
So
if,
if,
if
I'm,
if
I
have
a
workload
that
needs
to
use
aws's
credentials,
there's
a
standard
path
for
having
aws
credentials,
that
is,
the
user's
home,
slash
dot,
aws,
slash
credentials
now
that
means
we'll
have
to
rely
on
the
volume
volume
mount
to
tell
us
exactly
the
say:
the
directory,
where
this
file
has
to
go
in.
B
The
means
what
I'm
saying
is
the
volume
amount
has
to
exactly
point
to
some
some.
How
do
I
put
this
like
some
directory
that
contains
one
of
the
directories
in
the
chain
that
contains
the
credentials
file?
So
you
know
you
could
say
slash
root,
then
I
would
put
it
in
dot
aws
credentials.
It
could
be
mounted
at
slash,
root,
slash
dot,
aws.
Then.
I
would
sorry.
D
I mean, I think it's a good question you asked, Andrew. For sure, the response needs to contain the credentials. I guess you could think that the sidecar, or the CSI node adapter (the COSI node adapter), could then write it to the volume mount that it can get from the pod manifest, or the pod itself.
D
So
ultimately,
these
credentials
have
to
live
in
the
container
file
system
in
the
expected
in
the
place
where
the
cloud
api
you
know,
the
bucket
api
or
sdk
expects
to
see
that
right
like
well
or
do
they
have
to
be
a
sim
link
from.
E
I'm saying that you are providing credentials to an application, right? So one way you can do that is through environment variables. Another way is through file-based credentials. The thing about file-based credentials is that where the volume is mounted, all of that, is already under the control of the workload definition, and so I'm saying that in that same workload definition you should handle the entire thing.
E
You
should
basically
say
this
is
where
my
credentials
are
being
written
both
because
I'm
telling
you
where
the
volume
mount
is
and
I'm
telling
you
where
the
file
is
in
that
volume
mount
and
I'm
going
to
pass
that
file
to
the
command
line
of
the
of
the
of
the
application
or
in
an
environment
variable
to
the
application.
B
The assumption was that there's a standard path. However, again, I think this is not... this is something we can definitely do. Maybe just the credentials file name may be needed; however, even that is not needed. Somehow someone needs to communicate the name of the file, and, like you said, it might make more sense for the workload to define that, because it's the one that's going to consume it: the credentials file path.
E
That
is
a
that
is
a
control
protocol,
specific
thing
right:
there
should
only
be
three
possible
values
for
this
right,
the
the!
If
we
it
you
know
or
or
n
possible,
where
n
is
a
small
number
yeah
right,
any
s3
control
path,
piece,
no
matter
what
the
actual
back
end,
no
matter
what
the
actual
driver
should
always
say
this,
so
I
don't
really
understand
why
the
driver
is
returning
this
information,
so
that
you
know
the
workload
doesn't
have
to.
That
was
the
only.
I
think
that
you
already
know
that
this
is
an
s3
class.
B
That's what you're suggesting. So let's say I have a new one coming in; there needs to be a way to pass that in. And let's say they have to override that path; that also has to be added.
E
So
the
problem
is
that
how
do
you
make
the
workload
portable?
The
only
way
to
make
the
workload
portable
across
multiple
different
implementations
of
the
same
protocol
is
to
have
an
interface
guaranteed
to
that
protocol.
Here's
the
thing
every
workload
I
can't
expect
right.
I
can
expect
that
I'm
going
to
use
the
following
data
path
protocol
and
that
and
that,
with
that
data
path,
protocol
comes
an
understanding
of
how
I'm
going
to
do
authorization,
communication
right.
F
The .aws/credentials format is fine for some applications that are using the AWS SDK, but one example I gave was the s3fs file system client: it wants those credentials to be baked into kind of a different type of file. So I could see a controller implementation for an S3 dialect wanting to present multiple ways of surfacing those credentials as different file types, whether it's a .aws/credentials style, or a file in the way that's expected by other applications.
C
I
think
this
is
a
area
where
having
a
cozy,
csi
adapter
is
kind
of
causing
messiness
in
our
user
interface,
because
the
way
that
csi
drivers
are
built
is
that
the
user
specifies
where
they
want
that
particular
volume
to
be
surfaced.
What
path
they
want
it
to
be
surfaced
right,
which
means
our
initial
version
of
this,
the
poc
or
the
alpha.
C
We
necessarily
are
going
to
have
some
knob
that
the
user
can
control.
That
says
I
want
this
to
be
surfaced
at
this
location,
and
so
we
need
to
incorporate
that
into
our
design.
But
I
completely
agree
with
what
andrew
is
saying,
which
is
we
have
this
concept
of
protocols
within
cozy
protocols
are
something
that
kubernetes
will
natively
understand.
C
The
whole
point
of
having
these
protocols
is
that
kubernetes
can
dictate.
Here
is
how
I
expect
to
surface
them,
and
you
know
for
s3.
I
will
expect
that
they're
going
to
be
surfaced
at
this
particular
path
as
this
particular
way,
and
that
allows
a
consumer
to
kind
of
be
agnostic
saying.
Well,
I
support
the
s3
protocol.
I
will
always
expect
the
credentials
to
show
up
here
great.
C
I
do
like
the
point
that
was
made
that
you
should
be
able
to
override
that
as
a
user,
so
we
should
have
a
saying
default
based
on
the
protocol
and
then
the
user
should
be
able
to
override
that
for
their
workload.
The
question
is:
how
should
I
go
ahead?
Andrew
fit?
It
finish.
I'm
sorry
go
ahead
finish.
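The "sane default per protocol, overridable by the user" idea can be sketched like this. The per-protocol defaults shown are illustrative conventions, not values from the proposal:

```python
# Sketch of per-protocol default credential paths with a user override.
# The defaults here are illustrative assumptions, not part of the proposal.

PROTOCOL_DEFAULTS = {
    "s3": ".aws/credentials",
    "gcs": ".config/gcloud/application_default_credentials.json",
    "azure": ".azure/credentials",
}

def credentials_relative_path(protocol, override=None):
    """Return the user's override if given, else the protocol default."""
    if override:
        return override
    try:
        return PROTOCOL_DEFAULTS[protocol]
    except KeyError:
        raise ValueError(f"no default credentials path for protocol {protocol!r}")
```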
E
No
go
for
it,
I
I
just
I
know
one
of
the
other
things.
That's
bothering
me
about
this.
The
root
aws
credentials
is
not
a
standard
really,
even
at
that
level.
The
the
api
definition
is
the
credentials
bit
and
maybe
even
the
dot
aws
credentials
bit,
but
you
have
to
imagine
that
there's
going
to
be
workloads
that
aren't
going
to
be
running
as
a
root
user,
right,
right
and
so
being
able
to
say
where
to
place.
This
is
fundamentally
has
to
be
controlled.
B
That makes sense. So yeah, even with the default, you won't know what your home user or home path is; that's what you're saying. Yeah, that makes sense, and that's really not a big change from where we are today. I don't know what else to say; this seems valid, and I think we...
C
But the set of defaults is relative, like Andrew said, right? And then the root is defined by the user through the mount point.
D
Something
the
summary
yes,
this
summarizing.
What
andrew
said
and
thanks
andrew
for
pointing
this
out,
the
the
workload
knows
the
root
path,
the
root
part
of
the
directory,
the
beginning,
the
protocol,
the
claim
is
the
protocol
knows
closing,
knows
the
middle
part,
the
dot
aws,
because
we
know
the
protocol
and
the
last
part
of
the
path
name,
is
what
also
in
the
manifest
were
you
saying,
andrew
the
credentials.
E
So
yeah,
so
so
so,
let's,
let's
say:
let's
just
start
where
you
were,
which
is
yes.
The
first
part
is
clearly
has
to
be
coming
from
the
workload
now.
There's
some
interesting
part
about
the
second.
So
we
believe
that
there
is
a
convention
that
that
this
api
will
look
for
your
home.
You
know
dollar,
home,
slash,
dot,
aws,
slash
credentials
literally
credential
right.
So
now
you've
got
an
interesting
problem.
Do
you
want
to
mount
the
volume
at
root
aws?
E
Or
do
you
want
to
mount
the
volume
at
root?
The
problem
with
mounting
the
volume
at
root
is,
there
might
be
other
files
that
are
expected
below
the
root
user,
and
so
it's
not
safe
to
mount
it
at
root,
because
then
anything
that
the
container
provides
by
way
of
config
for
other
things,
won't
work,
and
so
you
might
end
up
having
to
have
the
volume
mounted
at
root.aws
and
have
a
credentials
file
that
is
placed
at
the
root
of
that
volume
right
and
so
then
we
need
foo
dot
aws
or
something
like
that.
E
So
I'm
saying
having
the
freedom
to
specify.
You
know
that
that
overlap
of
this
is
the
dot
aws
in
the
volume
or
not,
I
think,
is
an
interesting
question,
but
I
think
that
the
credentials
bit
is
certainly
hard-coded,
and
so
there
might
end
up
having
to
be
some
sort
of
path.
Check
like
how
deep
is
the
volume
out
does
the
volume
amount
include
a
part
of
what
I
expect
to
be
part
of
my
signature?
If
so,
then
I
only
write
in
the
root
of
the
volume
or
something
like
that.
C
The
one
more
thing
I
think,
I'd
like
to
add,
is
the
ability
to
override
the
protocol
default.
So
we're
saying
something
like
that:
aws
slash
credentials
is
the
default
for
s3
in
your
storage
class.
You
should
be
able
to
override
that
and
save.
C
So
I
I
I
there's
two
parts
to
this
right:
there's
the
root
which
should
be
defined
by
the
bucket
access
request,
and
then
there
is
the
kind
of
protocol,
specific
suffix
of
the
path,
and
we
have
a
default
for
that
per
protocol
and
the
override
for
that,
I
believe,
should
live
in
the
bucket
class.
E
So let's imagine that I have an application that wants to do /root/.aws or whatever, but it's just not going to be appropriate to mount the volume at that location. But the application is aware of COSI, and so what it'll do at startup is build a symlink or something dynamically to point to the actual location, or something like that. And so we can craft something either...
E
As
a
you
know,
what's
that
called
an
init
container
or
something
that
it'll
come
up
and
make
that
that
symbolic
for
us
to
the
the
proper
location?
So
I'm
saying
that
the
flexibility
of
that
at
mapping
this,
because
at
the
end
of
the
day
this
doesn't
change
the
format
of
the
credentials
itself.
It's
just
changing
the
path,
and
I
guess
my
point
is:
I
don't
see
how
the
path
to
the
credentials
is
in
any
way
a
class
thing,
because
that's
completely
going
to
be
about
authoring,
your
your
workload,
you
know,
yeah.
I
think
I'm.
F
You
know
just
because
I'm
able
to
run
applications
doesn't
mean
I'm
going
to
be
able
to
make
a
bucket
class,
whereas
I
probably
am
going
to
be
able
to
make
a
bucket
access
request,
so
probably
should
be
in
the
request,
because
the
the
developer
that's
running
the
application
that
needs
the
mapping
needs
to
have
the
credentials
mapped
to
a
specific
path.
Is
you
know,
probably
not
not
going
to
be
the
same
person?
That's
able
to
create
a
a
bucket
access
class?
F
We
don't
want
to
see
this
like
massive
proliferation
of
bucket
access
classes,
just
because
applications
need
credentials
at
a
different
file.
You
know
at
some
point
people
will
start
to
conform
to
you
know
if
we
have
a
default
right,
so
people
might
go.
Oh
okay,
well,
qazi
creates
it
here,
and
so
I'm
gonna
make
my
application.
C
No,
that
that
makes
sense.
I
I
think
we
don't
want
to
end
up
with
a
explosion
of
classes,
just
because
you
know
every
single
application
wants
it
at
a
different
path.
I
think
where
I
was
coming
from
was
we
had
said
we're
going
to
break
this
path
into
two
pieces:
the
prefix
and
the
suffix.
C
The
prefix
is
going
to
be
defined
by
the
workload
and
the
suffix
is
going
to
be
defined
by
the
protocol,
and
so
I
guess
my
question
is:
how
likely
is
it
that
the
suffix
is
going
to
the
the
workload
is
going
to
want
to
change
the
suffix?
E
So
so
here's
how
I
evolved
my
thinking
in
real
time
here.
So
this
whole
question
of
root,
aws
or
root
and
then
aws
right
at
the
end
of
the
day,
the
workload
knows
the
path
that
it
needs,
and
so,
and
so
what
I'm
just
saying
is.
It
makes
sense
that
the
workload
control
the
entire
path.
It
controls
that
way
by
deciding
the
volume
mount
point
and
part
way
by
deciding
the
relative
path
within
the
volume
mount
point
where
the
credentials
should
be
written.
E
So
if
you
put
both
of
those
in
the
workload
definition,
then
you
just
don't
have
to
worry
about
you
know.
The
only
thing
is
that
the
driver
is
going
to
know,
I'm
writing
a
file
and
here's
the
contents
of
the
file.
You
tell
me
where
to
write
it
and
it'll
yeah
it'll
be
up
to
the
workload
to
stage
that
in
the
proper
location.
E
But
but
that
would
that's
just
two
paths
right.
One
is
the
volume
outpath
which
they
have
to
provide,
and
then
the
other
would
just
be
the
offset
to
that
and
and
we
could
do
magic
or
we
could
just
have
them
explicitly
say.
Here's
where
I
want
you
to
play
yeah.
No,
that
makes
sense.
I
I
agree.
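The "two paths" idea just agreed on can be sketched as simple path composition: the workload supplies the volume mount point and a relative offset within it, and the driver writes the credentials file at the composed location. The example paths are from the discussion, not a fixed convention:

```python
from pathlib import PurePosixPath

# Sketch of composing the credentials file location from the two paths the
# workload controls: the volume mount point and a relative offset within it.

def credentials_file_path(volume_mount, relative_offset):
    offset = PurePosixPath(relative_offset)
    if offset.is_absolute():
        raise ValueError("offset must be relative to the volume mount")
    return str(PurePosixPath(volume_mount) / offset)
```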
D
I,
like
the
workload
defining
the
entire
path
andrew,
but
when
you
talk
but
but
in
terms
of
how
does
how
do
the
credentials
get
into
that
file?
And
you
said
the
driver
could
write
to
it,
and
my
thinking
has
been
that
the
cozy
node
adapter.
You
know
this
thing
that
the
cubelet's
waiting
to
for
it
to
call
node,
unpublished,
volume
that
that
piece
of
code
has
the
credentials
gets
the
credentials
and
it
writes
it
to
the
right
mount
point.
D
I was trying to think of not having the driver do it. I guess the driver would just be passing it in the response spec, the driver...
C
So the second thing I wanted to talk about was related to, I think, the pointers. We started talking about this on Monday.
B
Yeah
yeah
later
yeah
I
did.
I
did
have
something
to
show
for
that:
okay,
so
once
the
access
is
granted,
the
status
is
updated
and
bindings
are
updated.
In
the
bucket
object
saying
here,
we
have
a
binding
from
this
bucket
to
the
bucket
access.
B
In
the
provisioner
namespace,
it's
created
under
access
secret
name,
which
will
be
filled
in.
I
should
have
added
that.
B
I
mean
they
would
create
a
bucket
access
that
would
point
to.
C
Okay. And so then you would have two different modes of operation. If it was dynamically created, you're sticking the secret in the provisioner's namespace, and the namespace here would point to the provisioner's namespace. If it was manually provisioned, you would choose where you wanted to put it. Any security concerns around that? If I am a malicious user and I create a BucketAccess, I can point to any secret in any namespace.
C
Sorry, so the idea is that for brownfield, effectively, we're saying you have to create it in the provisioner's namespace. So all secrets have to be created in the provisioner's namespace, and if you're a user that's trying to do brownfield, you have to have enough permission to be able to create secrets in the provisioner's namespace. Yeah, I don't like that, though.
B
No,
you
would
just
ask
for
a
bucket
access
request.
You
just
create
a
bucket
access
request.
The
seeker
will
get
created
in
the
provisional
namespace,
a
user,
even
if
it's
a
brownfield
case
would
request
for
access
using
the
bucket
access
request.
They
would
never
look
at
the
bucket
access
object
itself.
E
And
we,
if
you
have
a
workload,
know
your
bucket
and
know
the
credentials
to
your
bucket.
This
whole
system
isn't
for
you
right
right
if
this
is
only
if
you've
got
an
admin,
provisioning,
the
bucket
access
piece
and
then
a
user
is
asking
for
requests
right.
So
this
is
the
kind
of
admin
user
split,
and
it
just
makes
sense
to
put
the
secrets
in
there
with
the
bucket
access
itself.
Both
of
those
are
in
a
protected
namespace.
Regular
users
can't
get
to.
E
Right, the whole thing here is that you could swap buckets out from underneath. But if you can't do that, because you have a fixed set of credentials and a fixed bucket name, right? Or if you don't have credentials, if you have a sort of workload identity or something like that, and so you're effectively relying on that having been done out of band, that's a different story. But then this whole question of where you put secrets has developed.
C
I
I
guess
the
case
that
I'm
thinking
about
is.
I
do
want
that
portability
of.
If
I
move
this
application
around
and
you
know
if
the
bucket
doesn't
exist,
I
want
it
to
be
dynamically
provisioned,
but
for
my
initial
bootstrap
on
my
initial,
you
know
instance,
I
already
have
the
bucket
manually
created.
I
have
the
credentials
already
set
up.
I
just
want
to
get
the
application
bootstrapped.
B
Yeah, that's about it. And then, you know, we update the status in the BucketAccessRequest to true. Yeah, and we'll change this to a boolean, accessGranted set to true, instead of conditions. Okay, then it's given to the application somehow; we talked about that just now. Yeah, that's about it, and I want to go into how the objects work with each other.
B
So
we
do
so.
We
have,
on
the
right
hand,
side.
I've
represented
all
the
objects,
so
a
bucket
request
points
to
a
bucket
below
it
and
then
a
bucket
class
on
the
right
and
a
bucket
als,
a
bucket
points
to
a
bucket
access
through
the
bindings
and
a
bucket
access
points
to
a
bucket
access
request
which
points
to
the
bucket
request.
So
there
is
a
cycle
there.
So
this
is
really
what
the
list
of
references
looks
like.
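The reference structure just described can be written down as a tiny directed graph, which makes the cycle easy to see: following edges from Bucket leads through BucketAccess and BucketAccessRequest back to BucketRequest, and from there back to Bucket. This is only an illustration of the slide, not code from the proposal:

```python
# The object references described above, as a directed graph.

REFS = {
    "BucketRequest": ["Bucket", "BucketClass"],
    "Bucket": ["BucketAccess"],            # via the bindings
    "BucketAccess": ["BucketAccessRequest"],
    "BucketAccessRequest": ["BucketRequest"],
    "BucketClass": [],
}

def has_cycle(graph):
    """Depth-first search with an in-progress stack to detect back edges."""
    visited, stack = set(), set()

    def visit(node):
        if node in stack:
            return True          # back edge: we looped around
        if node in visited:
            return False
        visited.add(node)
        stack.add(node)
        found = any(visit(n) for n in graph.get(node, ()))
        stack.discard(node)
        return found

    return any(visit(n) for n in graph)
```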
B
So one problem would be, let's go one by one. A BucketAccessRequest: if it's pointing to the wrong BucketRequest, or a non-existent one, there is really no problem there, because we would just not provision a BucketAccess for that request. For a BucketRequest pointing to the wrong BucketClass, again, it would be an error condition, because we wouldn't be able to use the BucketClass.
B
To the BucketAccess, instead of the other way around. The Bucket points to a BucketAccess and has a list of bindings, so the Bucket knows who is using the bucket. In order to be able to delete the bucket, we need to... I think that was done based on references by BucketRequests.
B
So if I try to delete a bucket by deleting the BucketRequest, the next step is to go see if the bucket is being used by anyone in the cluster, and if it's not, then sure, I can delete it. Otherwise, you know, it waits on that finalizer for all the accesses to go away.
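The deletion rule just described, as a small predicate: a Bucket is only deletable once none of its bindings point at a BucketAccess that still exists; otherwise the finalizer keeps it around. The shapes are illustrative:

```python
# Sketch of the deletion gate: deleting a BucketRequest should not delete the
# Bucket while any BucketAccess still binds to it; a finalizer holds the
# Bucket until the accesses are gone. Shapes are illustrative assumptions.

def bucket_deletable(bucket, live_accesses):
    """live_accesses: names of BucketAccess objects still in the cluster."""
    bindings = bucket.get("bindings", [])
    return not any(b in live_accesses for b in bindings)
```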
E
BucketAccessRequests have a dependency on BucketRequests, so that's single-directional. If you had maintained that BucketAccess had a dependency on Buckets, I would understand that, because it's still nicely layered. But what you've created here is a weird cycle of interdependency, and it just doesn't feel right. Now, if this is solely for the purpose of being really careful about deleting buckets, and you don't want to delete them until all the bucket accesses are gone, then, you know, oh okay, I guess, but...
E
So you've got a leading-edge dependency that says you've got to have a BucketRequest before you can provision a BucketAccessRequest. That makes perfect sense. So then your Bucket gets provisioned, then your BucketAccess gets provisioned, right? So that would be sort of the ordering of things. And then, all right.
So the interesting question then is: if I don't delete my BucketAccessRequests, then I haven't deleted a pod's handle to this. All right, so maybe it is okay to have two lists. But I still think you've got two lists, right? Because I don't think it's safe to delete buckets when your accesses alone go away; I think you've also got to make sure your BucketRequests are gone. Yeah, that's how we do it right now; that's how we've thought of it. Okay. You don't seem to have a reference from Bucket to BucketRequest; that's the only question then, right? So if you're maintaining it... at one point there was a list of BucketRequests that were maintained.
A
Hey, sorry, one-minute warning. I'm going to drop off at 11, but you guys can continue; I will just stop recording when I drop off. Okay.