Description
Meeting of Kubernetes Storage Special-Interest-Group (SIG) Object Bucket Provisioning Review - 06 August 2020
Meeting Notes/Agenda: https://docs.google.com/document/d/1KTh1y9klby64t7btNULtxLWDkRC9SAWE-SZnJeFZqug/edit?pli=1#heading=h.5xeufhnfakeh
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A
Just as a quick introduction, this is our third focused review meeting, where we're seeking approval of the KEP so we can get it merged and get the opportunity to use the official repos for prototyping and future code. The KEP is big and long, and it's difficult to go through it all in a meeting.
A
There were some great points brought up at the previous review, which have been addressed and should be visible in the current version of the KEP. We came up with the idea that a simple, focused slide deck summarizing the KEP would make it easier to digest and get through, and with five to ten minutes left at the end of the meeting we're striving to get to the point where we could do a vote.
A
B
C
That's exactly how it's done; that's how I've done it today. I'm glad you brought that up, because that was my intention behind making these slides.
C
The main thing is to give a high-level, complete picture of what this proposal is doing from the perspective of, one, the user who is going to consume it, and two, the vendor who is going to write the driver for it. One of the things I felt was hard to cover last time was the gRPC spec, which is the contract between the vendor and the Kubernetes bucket APIs.
E
C
Okay, I'm going to start this slide in a really basic way and then quickly jump into the details. For explaining the workflows I've taken the simplest successful use case and gone forward with that; you'll see it shortly. Since we're defining a new standard, I'm starting with the absolute basics, just the definition, for anyone who wasn't part of this all these months.
C
Just so we clearly explain what we're doing: it's a standard for consuming object storage in Kubernetes. Just as CSI is a standard for file and block, we're creating a new one for object storage, and we're calling it the Container Object Storage Interface, COSI. And just to reiterate what object storage is, although I'm sure we all know:
C
It stores data as immutable blobs organized into buckets, and the bucket here is going to be the primary unit of management in the system we are building. We provision, deprovision, and grant access all at the bucket level.
C
Now we're going to do that across multiple object storage vendors, in a vendor-agnostic fashion. Let's get into what these four things mean: provision, grant, revoke, and deprovision.
C
So let's start with provisioning. Provisioning is basically creating a bucket, or checking that the bucket exists, in the backend. So what's the backend here? We've defined three separate components.
C
The one on the left is the user or admin, the middle is COSI, which is our system, and the third is the backend, which is the cloud or object storage provider that actually creates the bucket. The workflow is that a user or admin requests a bucket to be created, the system understands this request, and then communicates with the backend to provision the bucket.
C
So let's look at the contract between the user or admin and COSI. COSI runs within Kubernetes, and the declarative, standard way of communicating with Kubernetes is through Kubernetes objects. So, in order to model how a user or admin requests a bucket, we've created a BucketRequest resource.
C
Now, this resource has the fields required to request a bucket in the backend. It has a name; it belongs to a certain namespace, whichever namespace the user is originating the request from and has access to; a protocol, which says what kind of object storage protocol I expect; a bucket prefix, some form of prefix to say this is how I want the bucket name to start; and a bucket class name.
C
I've created a dummy use case where the name of this request is something called profile-pictures, under the namespace profiles, and the bucket class name is write-once-read-many, to illustrate a scenario where you create profile pictures and read them many times.
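
For illustration, the request described above might be modeled along these lines; this is a minimal Go sketch based on the slide walkthrough, and the type and field names are assumptions rather than the exact COSI API:

    // Sketch of the BucketRequest described above (illustrative names only,
    // not the authoritative COSI types).
    package cosisketch

    type BucketRequestSpec struct {
        Protocol        string // object storage protocol the app expects, e.g. "s3"
        BucketPrefix    string // how the generated bucket name should start
        BucketClassName string // the class this request is grouped under
    }

    type BucketRequest struct {
        Name      string // name of the request, e.g. "profile-pictures"
        Namespace string // namespace the user issues the request from, e.g. "profiles"
        Spec      BucketRequestSpec
    }

    // Example matching the profile-pictures scenario from the slides.
    var exampleRequest = BucketRequest{
        Name:      "profile-pictures",
        Namespace: "profiles",
        Spec: BucketRequestSpec{
            Protocol:        "s3",
            BucketPrefix:    "profile-pics",
            BucketClassName: "write-once-read-many",
        },
    }
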
C
You don't delete them, though. So here we showed that there's a bucket class name; a bucket class groups a bunch of bucket requests based on certain parameters. In a bucket class we have a provisioner set, which is the actual vendor-specific code that will go and create the bucket in the backend. Again, a protocol is set here, which tells you what kind of protocol we expect the bucket to serve.
C
Then we have something called the anonymous access mode, which sets the access policy on the bucket itself. A bucket can be either public, where everyone can read and write; read-only; read-write, which is self-explanatory; or private, where by default nobody gets access to it unless access is explicitly granted.
C
A release policy says what should be done with the data in the bucket after the bucket is deleted from Kubernetes, not from the backend, just Kubernetes. Allowed namespaces is the list of Kubernetes namespaces we want this bucket to be available to when requesting it. Say, in this made-up example, a profile-pictures bucket would ideally only be relevant for teams that work with profiles, and let's say they all use a namespace called profiles, so you set the allowed namespaces to that.
C
Yeah, I thought the same as well; I'm explaining it exactly as we've put it in the KEP. One of the reasons is that I came in kind of midway. But I completely agree with you: it should be just the name, and if someone does delete the namespace and bring it back, it should apply to the new one.
A
E
Right, in general that's what we care about in Kubernetes. There are very few cases where we actually care about the UID. Where we do care about the UID is things like the PersistentVolumeClaim and PersistentVolume objects, where the specific instance actually matters, but outside of that it's generally the name that matters.
A
Great. I mean, it simplifies things and makes it easy, so we will include that suggestion.
D
Absolutely. Then, sorry, I had one question. In general, in Kubernetes there's a concept of the Kubernetes admin creating a namespace and assigning some kind of storage quota and things like that. So how can a quota be enforced for objects created using this bucket class?
C
Yep, okay. So parameters here is a generic map[string]string structure that lets a cloud provider have vendor-specific parameters passed in. In this case I've just set the region; however, if you look at GCS there's a project ID, or Azure has a project ID and a container name.
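
Pulling the BucketClass fields discussed so far together, a minimal sketch might look like the following; field names and the example values (driver name, region, release policy) are illustrative assumptions, not the KEP's exact spelling:

    // Sketch of the BucketClass fields discussed above (illustrative only).
    package cosisketch

    type BucketClass struct {
        Name                string
        Provisioner         string            // vendor-specific driver that creates buckets
        Protocol            string            // protocol the buckets are expected to serve
        AnonymousAccessMode string            // e.g. "private", "read-only", "read-write", "public"
        ReleasePolicy       string            // what happens to the data when the Bucket is deleted from Kubernetes
        AllowedNamespaces   []string          // namespaces this class of buckets may be provided to
        Parameters          map[string]string // opaque, vendor-specific parameters
    }

    var exampleClass = BucketClass{
        Name:                "write-once-read-many",
        Provisioner:         "s3.example.com", // hypothetical driver name
        Protocol:            "s3",
        AnonymousAccessMode: "private",
        ReleasePolicy:       "retain", // illustrative value
        AllowedNamespaces:   []string{"profiles"},
        Parameters:          map[string]string{"region": "us-east-1"}, // illustrative region
    }
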
C
G
C
But yes, the original intention here was that we have a list of allowed namespaces. The idea was that only the namespaces we've deemed safe for providing access to this bucket, or these classes of buckets, should be allowed to access them. However, our concern was that if we set the allowed namespaces to a bunch of namespaces and someone goes ahead and deletes those and recreates a new one with new credentials, they shouldn't be allowed access to these buckets.
G
Yeah, okay. And then the namespace can have multiple labels as part of the metadata, correct?
C
G
Just whether or not the bucket class could be tied to a namespace.
C
Yeah, I think we can clarify that in the documentation for sure; we can add a statement clearly saying how we're going to represent this, and we're going to move forward with this. Thank you for your patience here. Oh, absolutely, no worries. So earlier we saw how the user or admin communicates with COSI. Once the BucketRequest and BucketClass are provided to the system, COSI goes ahead and processes them before translating them into a request that the backend understands. What it does is take the BucketRequest object.
C
I've just put the relevant fields here, and then it takes the BucketClass object, again with the relevant fields shown, basically all the fields in the spec for both of them. I've marked the BucketClass fields in orange and the BucketRequest fields in green. My mouse scroll wheel is super sensitive and I don't know how to fix it, it's the Apple Magic Mouse, so I'm just going to go ahead with this. So the Bucket object is created by the system.
C
C
The release policy is from the BucketClass; the anonymous access mode is from the class again; there's a bindings field we'll get into in a few slides; allowed namespaces, with the UIDs removed; the parameters come from the class, as does the protocol structure. It generates a bucket name based on the prefix that's provided and adds a UUID to it, and then, when it first gets created, there is the status of the bucket.
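
As a rough sketch of the Bucket object COSI assembles from those two resources (field names, the generated-name convention, and the status flag are assumptions based on the walkthrough):

    // Sketch of the Bucket object created by COSI from a BucketRequest
    // plus its BucketClass (illustrative field names).
    package cosisketch

    type Bucket struct {
        Name                string            // generated from the prefix plus a UUID, e.g. "profile-pics-<uuid>"
        Provisioner         string            // from the BucketClass
        Protocol            string            // from the BucketClass
        AnonymousAccessMode string            // from the BucketClass
        ReleasePolicy       string            // from the BucketClass
        AllowedNamespaces   []string          // from the BucketClass (names only, UIDs dropped)
        Parameters          map[string]string // from the BucketClass
        BucketRequest       string            // back-reference to the originating request
        BucketAvailable     bool              // status: false until the backend reports success
    }
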
C
E
I know we talked about using conditions, but instead of conditions we may want to just use boolean values here, things like bucketReady, and then you could add additional booleans in the future for any other kind of binary state that you want to add, as long as it doesn't invalidate the previous state.
C
E
C
Shang also mentioned it, and I think that makes sense. We just went ahead with the usual way of doing things, with conditions. However, there's no reason why we couldn't do the...
E
Direct booleans, yeah, that's fine. I think we can fine-tune that in API review. It's not a huge deal right now.
C
Yeah, I'll go with that, Jeff. Okay, thanks. All right, so we saw what COSI does: it takes the BucketRequest and the BucketClass and creates a Bucket object. Then the Bucket object is passed along to the vendor side of the system, the backend side, where we use a gRPC protocol.
C
The gRPC protocol defines a contract between COSI and the vendor library, the vendor driver or provisioner. Here's what the protocol looks like, and these are the fields the driver requires to go ahead and create a bucket in the backend: it needs a bucket name, and it needs a bucket context for different cloud providers.
E
Is the bucket context here mapped to the parameters from the class? Yes? Okay, could we consider renaming this to parameters, to match that?
C
Yeah, absolutely, I think that's definitely possible; we can do that. Hey, is Jeff taking notes on these suggestions? If you could do that, that would be great.
C
So, filling in these fields for the example we just showed: the bucket name is the auto-generated one, profile-pics-<uuid>; the bucket context has the parameters, the region; and then the anonymous bucket access mode is set to private, which is what we wanted in the Bucket object.
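
A rough sketch of the provisioning call shapes just described, written as Go structs standing in for the actual protobuf (message and field names are assumptions; as discussed, BucketContext may end up being renamed to Parameters):

    // Sketch of the provision call between COSI and the vendor driver
    // (illustrative shapes, not the authoritative gRPC spec).
    package cosisketch

    type ProvisionBucketRequest struct {
        BucketName          string            // e.g. "profile-pics-<uuid>"
        BucketContext       map[string]string // opaque parameters from the BucketClass, e.g. a region
        AnonymousAccessMode string            // e.g. "private"
    }

    // The response is empty on success; failures come back on the
    // gRPC error channel instead.
    type ProvisionBucketResponse struct{}
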
B
B
Yep, yeah. So invoking the correct provider, the provisioner specifically, is what happened at the earlier stage, and the correct fields will then be passed down. Yes. And what happens if a provisioner or a backend that maps can't be found?
B
Indefinitely, and it will keep trying until such time as the backend comes into existence.
C
Yeah, until a backend that advertises the specific provisioner comes into existence. Yes, okay, good. And the response, if everything is successful, is an empty response. If there is an error, there's a separate error channel in the RPC along which an error would be passed. I'm going to go ahead without going into the errors, since I'm only explaining the simplest successful case here; we will get into the errors in the second part of this presentation, but for now we'll move forward with the provisioning.
B
Okay, so that single success indication going back, that's kind of implicitly synchronous in the gRPC protocol?
C
How do you mean? The gRPC protocol simply informs that it's done, and then there's an asynchronous set of steps that leads to this.
B
Okay, so that was the question that I had. If my provisioner takes 10 minutes to set everything up, will the gRPC call just be left open for 10 minutes, or will the gRPC call return immediately and then there's some other asynchronous indication that triggers the bucket-available status being changed?
C
Oh, that way. So the gRPC request has a timeout. Let's assume the timeout expires because the time it takes to create the bucket is longer than the timeout. Then an error status is returned and bucket-available continues to be false, but we keep retrying. When an error status occurs, we push the request to the back of the queue of requests that need to be served, and when it gets retriggered again, say, the bucket is created.
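
A minimal sketch of that requeue-and-retry behavior, assuming the driver-side create is idempotent so a repeated request for an existing bucket counts as success (the queue and timeout details are illustrative, not the actual controller code):

    // Illustrative retry loop: on timeout or error the request goes to the
    // back of the work queue and is retried until provisioning succeeds.
    package cosisketch

    import (
        "context"
        "time"
    )

    // provisionFn calls the driver's provision RPC; it must be idempotent,
    // so "bucket already exists" is treated as success.
    type provisionFn func(ctx context.Context, bucketName string) error

    func retryProvision(provision provisionFn, queue chan string) {
        for bucketName := range queue {
            ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
            err := provision(ctx, bucketName)
            cancel()
            if err != nil {
                // requeue and move on; BucketAvailable stays false for now
                go func(name string) { queue <- name }(bucketName)
                continue
            }
            // success: the controller would now mark the Bucket available
        }
    }
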
B
E
B
A
Yeah, and you know, David, it's a good point you brought up, because the bucket name can be the same for different requests. There could be a greenfield request and, right behind it, a brownfield request, and they both have the same bucket name, and they're trying to go green to brown. So a driver could get the same bucket name: provisioning the bucket foo takes a long time, and you can get foo coming back again, but this time it's a brownfield request for foo. Yeah, we do have to handle that.
C
So yeah, this is where provisioning ends. Provisioning is strictly creating the bucket alone, in case it doesn't exist, and that's the simple use case shown here. At this point provisioning ends, and any application or user that wants to consume this bucket has to first request access to it.
C
That brings us to the second workflow, the grant access workflow. (I can hear someone's background noise.) Okay, so for the grant access workflow there's an additional component, which is the application that's going to consume this bucket once it's granted access, as shown in the arrows here. The user talks to COSI to request access for an application, COSI talks to the backend and gets the access, and then from the backend it takes the credentials and gives them to the application.
C
The application can then use the credentials to make object requests of different sorts.
B
C
C
C
B
C
Yeah, so the bucket request... I think I did not capture this, but once the bucket is provisioned... let me go back.
C
C
Oh yeah, so the bucket request has a... I've not captured this here. The BucketRequest has a bucket instance name, or a bucket name; in the KEP we get into it, and that is a pointer from the BucketRequest to the Bucket. So the name for the...
B
Oh, I thought we have a single layer of abstraction. The bucket name is a protocol-specific construct; with respect to the Kubernetes environment you wouldn't ever refer to the bucket name directly, you'd refer to this other name that you use to access it. Is that a correct summary? When you say other name, you mean the one that's not shown in the bucket request, that you were just alluding to?
C
B
C
Yes, the credentials, yes. So there is a tiny error here; I don't know if it's worth pointing out, but quickly I can show you. The name of the bucket is actually derived from the bucket request and the namespace.
B
I
C
B
I
Can you concatenate them with a dot? Because it's, I mean...
I
I mean, I'm just saying it's a typical use case, and yes, you'd have dots in names because you'd have dots in URLs. So that's why.
C
Right, right, okay, yeah, we can look into it, but I think the convention... Sorry, do you mean the name of the resource, or do you mean the name of the actual backing bucket? I mean the actual...
C
C
A
E
G
E
E
That way you can differentiate between two different bucket requests. We may want to think through whether we want to do that.
E
D
A
F
E
C
A provisioner field is set here to specify which provisioner understands this access class, and again this has a set of parameters, a bunch of opaque parameters that are passed to the backend. And then policy actions: policy actions define how this bucket is going to be accessed by anyone consuming it using this bucket access class.
C
In this case I've written out an access control spec for AWS S3 where you can write and read, so PutObject and GetObject are allowed and everything else is denied. So this is write once, read many times, which is the access control we wanted to provide for this bucket, based on the example I explained. And yeah, this is the...
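
For illustration, the BucketAccessClass just walked through could be sketched like this; the shapes and the S3 action strings are assumptions based on the slide (and, per the discussion that follows, the policy may instead become an opaque string):

    // Sketch of a BucketAccessClass with S3-style policy actions
    // (illustrative shapes, not the authoritative COSI API).
    package cosisketch

    type PolicyActions struct {
        Allow []string // actions granted to consumers of this access class
        Deny  []string // actions explicitly denied
    }

    type BucketAccessClass struct {
        Name          string
        Provisioner   string            // driver that understands this access class
        Parameters    map[string]string // opaque, vendor-specific parameters
        PolicyActions PolicyActions     // how the bucket may be accessed
    }

    // Write-once-read-many for the profile-pictures example: writes and
    // reads are allowed, deletes are not.
    var exampleAccessClass = BucketAccessClass{
        Name:        "write-once-read-many",
        Provisioner: "s3.example.com", // hypothetical driver name
        PolicyActions: PolicyActions{
            Allow: []string{"s3:PutObject", "s3:GetObject"},
            Deny:  []string{"s3:DeleteObject"},
        },
    }
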
E
C
Should we make it an opaque string? Yeah, actually, we had a conversation about this yesterday, Jeff and I. However, we wanted to make sure that when we bring about changes we include everyone. I think that's a valid suggestion, and yeah, it's something we can add.
E
F
I agree, because the other effect it would have is that the user may assume that it's validating these two against each other. It's going to be up to the driver to make sure one of the allowed resources isn't on both the allow and deny lists, and we're not doing that with the actual COSI piece.
L
C
So the service account use case is where... I mean, the access policy would still look the same; it's just how the access is provisioned that changes.
C
What I mean by that is: is a set of credentials passed into the application, or is the application's identity somehow known, so that any request coming from that application with that identity should somehow be given access? Like in AWS, you have instances having IAM roles, or even pods.
M
The problem is that, for the identity you're mapped to on the server, you might map to that identity from several places, and so theoretically, from those different places, you could all access the same brownfield bucket. I'm trying to figure out if I'm talking myself into a hole here.
M
So I guess this makes the assumption that a particular bucket access can be uniquely crafted at this point, and that if you therefore have multiple cases of that user trying to access that bucket, that's kind of its own problem. I guess there's no way around that, and I don't think these policy actions make that problem worse.
F
M
So the original mechanism that was in the KEP was this idea that you would mint a new server-side user for every one of these. In that case, that implies that every bucket access can have a separately defined set of accesses.
M
But in the case where you're doing a service account mapping, there is no longer a guarantee that every single access is uniquely minted, right? It could be that the access is already defined there, or at least the user is already defined there, so I'm not minting a new user on the other end. But what it implies is that if I craft multiple cases, even different clusters, same cluster, whatever, where I'm asking for the same service account to have access to the same bucket, I can have conflicting permissions.
L
K
M
You basically have an "it already exists" kind of thing, and then you leave it up to the driver to deal with that, or something.
B
M
I mean, the new-user approach makes sense if you're doing the sort of credential minting, but if you're not using credential minting, if you're using the GCP or Amazon approach of doing service account mapping, then the credentials are already known, the users on both sides are known. The only thing that you're minting is the particular access, and even then, what if that access, an access binding, already exists for that user?
M
F
M
But okay, so the policy actions, unfortunately, as articulated... are these really generic? I mean, these look specific to S3.
B
And that's why we wanted to move it into parameters, because these are specific to the type, to the protocol.
C
That is, we have a service account field in the bucket access class, and if a second request comes in for the same service account in a different bucket access class, it would be rejected.
M
So there are a couple of scenarios, right? One of the possible scenarios is that I am crafting the complete access out of band. I've got a service account on the cloud side that I've created, I've set up roles for that, so I've got all of my permissions in place for that account to get to that object.
M
Then I set up workload identity, or the AWS equivalent, to map a Kubernetes service account to that, in which case there is potentially really nothing for a driver to do. One possibility is the driver could just say, oh, it's already there, and I don't think you would necessarily want that to be an error.
M
You might want to control whether it's an error, but that seems perfectly reasonable, because otherwise you're walking down the path of having to say we want to construct every possible mode of access in the Kubernetes object, and I don't know how you deal with sharing in that case, because then you end up with conflicts expressed at the Kubernetes level.
A
L
M
Sure, I'm just suggesting a mode, and this gets back to the whole problem of greenfield versus brownfield. In greenfield you really want to automate all of it: you want to automate bucket creation, you want to automate service account minting if necessary, you want to automate credential minting.
M
You want to automate passing the credentials into the client side, into the app. All of that you may have to support kind of from scratch, and then there are various modes on top of that, like workload identity or the equivalent, where I'm not minting service accounts but I may be minting just credentials. But then you could even say I'm not minting a service account or credentials, I'm just...
M
I have a mechanism here that says this is how I'm accessing it, but it's effectively a no-op, because all of that access is already plumbed in. I just think we want to be able to support all of those, and I don't think it's too hard.
M
It's just that one of the things we've got to do is say something in a bucket access class, or somewhere, that says: hey, I'm expecting this to have already been handled, or overwrite, or don't override, you know what I mean, some kind of deconfliction policy statement at the class level. I think that does it.
D
K
M
L
I like the service accounts, but I also like the idea of being able to set an initial policy and have a service account and, you know, effectively kind of an ARN role or whatever, be minted.
L
For me, that corresponds to the policy, right? So I can craft this bucket access class CR or whatever, and if it's minting something new, it creates the IAM role or whatever is necessary, and that would be mapped in here. I'm not sure. Maybe not. I don't know.
M
Right, right. I think that's one option, which is to basically treat it as effectively a kind of idempotency statement that says: look, I'm going to add this access, but if you've already got access, that's fine. But then the question becomes: how do you handle conflicts?
M
Do you reject something whose policy actions don't match what you currently have? That's a possibility. And then, if you don't provide these policy actions at all, then effectively you're making some statement about them too, right? I think, with just a little bit of fleshing out, we could come up with something reasonable here that isn't dangerous.
L
I mean, it might be as simple as: if we're minting something new, then you can impart policy and you create a new IAM user that only this specific Kubernetes can use, and then basically you know that particular Kubernetes is the only one that's going to be able to update that policy object. And you could also share that bucket with the application user.
F
C
Yeah, I mean, I think that's a good suggestion, and I think we can figure it out in the future, but the issue is only when there are conflicts, with two different bucket policies pointing to the same service account.
M
What I imagine is several levels of action you might take in response to a bucket class, right? I'm going to try to use generic terminology, and hopefully it ports to the different platforms. So one action is: I'm going to mint a user in response to this.
M
On the bucket side, then, I am going to add to that user the permissioning to access this bucket, and I'm assuming, by the way, the bucket's already there; this is the bucket access. So I'm going to add a user and I'm going to set permissions for that user.
M
So that is a question of what I'm going to provision on the bucket side. I may also then mint some sort of access token or credentials for being that user in the context of an API call, because I need to do that coming from a client. So that's one of the...
I
M
C
F
C
I want to finish this, to be honest, just get through it; there are only two or three slides left, and I think it'll paint a picture of what the response looks like, and then that'll give us more context after we go through and make this bucket access request call.
C
So I'm going to quickly go through this so that we have a picture of everything that's going on with this flow. Okay, so the user or admin makes a request, COSI goes and creates a BucketAccess object, copies over fields, and then it goes ahead and calls the backend. The call has parameters, again the bucket name, and here we have something called a principal, which is what I want to introduce you to.
C
So what we've thought of as the principal is the unique identity of the new user that gets access; or, if it's an existing user that you want to update or provide new access to, you pass in the principal's unique ID. The access policy is the access policy we had earlier, and...
C
So the access policy is a direct copy of whatever is in the bucket access class, and the bucket context is again a copy of what's in the parameters. We've set the principal to empty in this case, and in the response the driver creates a new user and gives that back as the principal. The credentials file content and credentials file path will decide how this access is provided to the application, whether it's minted credentials or a service account.
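
Putting that walkthrough together, a sketch of the grant-access call shapes might look like the following, with Go structs standing in for the protobuf; all names here are illustrative assumptions:

    // Sketch of the grant-access call between COSI and the vendor driver
    // (illustrative shapes, not the authoritative gRPC spec).
    package cosisketch

    type GrantBucketAccessRequest struct {
        BucketName    string            // backing bucket to grant access on
        Principal     string            // existing identity to grant; empty means "mint a new user"
        AccessPolicy  string            // copied from the BucketAccessClass policy actions
        BucketContext map[string]string // copied from the BucketAccessClass parameters
    }

    type GrantBucketAccessResponse struct {
        Principal              string // the (possibly newly minted) identity that was granted access
        CredentialsFileContent string // credentials handed to the application, if any were minted
        CredentialsFilePath    string // where those credentials should be made available to the app
    }
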
C
C
B
And will there be some guidelines to make sure that a driver can't stomp over something else?
C
All right, so I think we should write down the things we want to discuss for next week, specifically around the bucket access class. And Jeff, did you want to take it forward now? We have like three or four minutes left to conclude.
A
It's another good set of suggestions. Can we get a straw-man vote on how people are feeling right now? We haven't done that yet, and it would just be interesting. There are 16 participants on the call right now.
E
I think this is a really productive meeting. I like the provisioning aspect of it; I think that's solid. It's looking good from the access perspective.
E
I think it's close, but we probably need to go through the flows for both a service account that already exists using workload identity and minting a new service account, and walk through both of those flows just to make sure they're handled properly. And then I think the third thing we need to walk through is the binding logic, because that's going to be crazy here, since we're proposing something completely new, which is a many-to-one binding. So I want to make sure we review that as well.
B
And there are some additional gRPCs that we need to review.
J
Yeah, I'll just echo that: I think these review meetings are extremely productive and we should keep going as long as people keep having questions, and I don't see any reason to hurry towards having it be done. I mean, if there is a reason, like a deadline we have in mind, maybe we want to go faster; then we should just schedule more time per week to do these kinds of meetings, two hours, three hours, whatever it takes, but we should take the time.
C
Completely agree with you on that. How do you all feel about joining us in our engineering meeting on Monday at 11 PST? I really want to go over this bucket class design.
A
You know, Ben, you mentioned urgency, and I agree we want to get it right; there's no point in rushing and getting it wrong. But what I'm thinking is that if we get this PR merged, we can start using the official repos and we can get some prototype, pre-alpha code out there and start using it. I think when you have it in your hands and you're using it and trying to write a driver, things are going to come up that we may not otherwise discover.
J
C
True, we can work on code in our own repositories, yeah. I don't think we're really gated on having this merged before we can start writing code, but having it be more official would certainly not hurt us anyway. So I think the next step is really to nail down the design of the bucket access types, and then, yeah, hopefully I'll see you all on Monday at 11.
C
Jeff, do you think we can send an invite to everyone somehow?