Description
Meeting of the Kubernetes Storage Special Interest Group (SIG), Object Bucket API Review, 20 August 2020
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A
I wanted to start the meeting quickly, and I think we're all on the same page about wanting to keep the KEP moving forward and not introduce any unnecessary delays, while feeling confident that it's going to work, so there's probably more urgency with some people than others. But this is what Sid and Srini and myself are striving to achieve: to build up confidence among this team that we have a working, workable architecture and API and gRPC specs, so that we can move this forward.
A
Not every detail is ironed out, and as we get to coding there will obviously be changes that occur, which we will expose and get feedback on. So the KEP doesn't have to be perfect; it has to be good enough that you all have confidence, and that's the litmus test. I believe Sid will be presenting some things.
A
There's one part of the meeting where we're going to show you two alternatives and get your input on that in real time. Did I forget anything, Sid? No?
B
I think you got it all. Just one thing: if we do find issues, say even some major issues, after we start coding, we can always go back, even to the drawing board, and correct it as needed. But for now I think it's best to focus on the larger picture.
B
So I'll start now. As of last week, where we left off was looking at this reference diagram of what the relationships between the different API objects in this COSI world are, so I'll quickly go over what this diagram represents.
B
All the users that have access to this bucket are captured in this bindings field of a Bucket, which has a list of the BucketAccesses that have access to this bucket. So this is the architecture as we left off last week, and some questions came up about how to do deletion. For that I have the same diagram; however, I've added a bunch of new bucket access requests.
B
The workflow surrounding this is like this: when brownfield bucket access is required, that is, say a user in a different namespace wants to access the bucket, it does so through the creation of a BucketRequest. The deletion also should be done through the BucketRequest; that is, the deletion of the BucketRequest should result in the deletion of the Bucket. Now, given this, and given that we have multiple BucketRequests, the question becomes: the deletion of which BucketRequest should lead to the deletion of the Bucket? And how would the user know which BucketRequest to delete to trigger the deletion of the related bucket?
C
Sorry, I was on vacation last week, and so I had David bring me back up to speed. I don't understand why we don't follow the model we followed for snapshots, where we don't actually have multiple things pointing to the same object, but instead we just have multiple namespaced objects, each with a one-to-one relationship, because it solves so many problems.
C
Yeah, you'll end up with multiple Bucket objects pointing to the same actual bucket, but then deletion becomes simple, because you have a one-to-one relationship between your BucketRequests and your Buckets, and they all have deletion policies. Whichever one says to actually delete causes the deletion to occur, and all the other ones don't.
B
Let me take a crack at it. If we did that, we would end up with multiple Bucket objects pointing to the same backend bucket, and then the problem becomes deletion of the backend bucket with multiple Bucket objects; we would still have this problem. I think there's a better solution for this. What we explored, in a conversation about this earlier this week in the engineering meeting, was this idea of single ownership.
B
That is, the BucketRequest that creates the Bucket ends up being the owner of the Bucket. For all the other BucketRequests, deleting them would not trigger the deletion of the Bucket, but if you deleted the owner BucketRequest, it would lead to the deletion of the Bucket.
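The single-ownership rule described here can be sketched in a few lines. This is only an illustration; the type and field names (`owner_request`, and so on) are hypothetical, not the actual COSI API:

```python
from dataclasses import dataclass

@dataclass
class Bucket:
    name: str
    owner_request: str   # name of the BucketRequest that created this bucket
    deleted: bool = False

@dataclass
class BucketRequest:
    name: str
    bucket: Bucket

def delete_bucket_request(req: BucketRequest) -> bool:
    """Delete a BucketRequest; return True if the backend bucket was also deleted."""
    if req.bucket.owner_request == req.name:
        req.bucket.deleted = True   # owner: deleting the request deletes the bucket
        return True
    return False                    # non-owner: only this request's access goes away

b = Bucket(name="shared-logs", owner_request="creator")
assert delete_bucket_request(BucketRequest("borrower", b)) is False and not b.deleted
assert delete_bucket_request(BucketRequest("creator", b)) is True and b.deleted
```

The point of the rule is that only one of the many requests carries deletion semantics; every other request can come and go freely.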
B
Any namespace should be able to leave; I mean, they should be able to delete their access to the bucket. But deleting the actual bucket should not be open to namespaces that did not originally create the bucket.
C
Right, the key would be that when you set up the new BucketRequest-to-Bucket binding, you would decide whether that one had the capability to delete it or not. And in theory, either the administrator who did that, or the controller that did that on the user's behalf, would know whether that was the right thing to do in that situation.
C
It puts the problem in the hands of that entity, not at the Kubernetes API level, because then the Kubernetes API becomes simple. You just say: if this bucket is marked with a deletion policy of delete, we will do it, and if it's not, we won't, and it's someone else's job to set that correctly so they get the behavior they want.
A
What happens when two different storage classes point to the same backend, and one storage class says retain and one storage class says delete? Will the delete one delete it?
B
He doesn't mean the retain policy; the retain/release policy is for the contents of the bucket. He means the bucket itself. So it won't have this second option of retain; it'll be a boolean which will say either this one deletes the bucket or it doesn't.
B
Right, that's fair. And if you want to grant privileges to any other namespace that wants to delete it, you should be able to do that, right? Yes. Now, what happens if multiple namespaces have deletion permissions on the same bucket? That's okay, right?
C
I think with volumes and snapshots maybe it's a little bit different, because with volumes and snapshots this situation is very rare. I would say, from my experience: why would you have multiple volumes pointing to the same underlying volume?
A
I wanted to bring up two things, Andrew. What you and Ben are discussing, that's a great question, Ben, because that is the nature of this design and architecture, so that's a fair discussion to resolve and make sure that we have a good solution. And the second point is: we have a one-to-one mapping between a BucketAccessRequest and the BucketAccess cluster-scoped object, and that shows the access to a bucket.
A
We have a many-to-one between the BR, the BucketRequest, and the B instance, the Bucket instance, and that reflects more naturally the Kubernetes abstraction of the physical world. So we are blending both in this design right now.
A
Go ahead, sir! Well, I just wanted to point out that there is this one-to-one mapping and, if you think of brownfield, especially any kind of static brownfield, an administrator is creating Bucket instances to reflect an existing bucket.
A
If you do a one-to-one mapping, that administrator is creating a lot of Bucket instances that are all basically identical.
E
I think that is exactly the discussion that led us to do this many-to-one binding, but my question is: can we test that? Because if that isn't true in practice, if the number here is finite and not overburdensome, then I would have to agree with Ben that we should just make everything one-to-one, every BucketRequest to Bucket. The whole reason for the many-to-one was this idea that an administrator would only have to provision one bucket and then multiple people could get access to it.
E
The many-to-one has had a number of issues that we've talked about: reference counting, how you know when to delete, everything else. And I think what's happened is not that we can't come up with a technical, workable approach; it's just that people are uncomfortable, for a variety of reasons, with the different approaches, because they don't reflect.
E
They
don't
allow
for
certain
use
cases
they
expect
or
they
seem
over
burdensome
to
manage.
And
so
I
guess
I'm
just
I'm
asking
a
push
on
the
other
side
and
say
what
what
would
you
be
bad
with
a
full
one-to-one.
B
Why not? So if you have multiple bucket requests, or sorry, multiple bucket accesses... oh, you're saying it'll directly point to the Bucket.
B
Right, so you can look at the whole thing as manual, requiring an admin to be present in the workflow of creating a bucket, and of course brownfield, not greenfield, and I would say most use cases are brownfield only.
B
Manually managing the buckets, which is to say, if I had to delete a bucket, I'd have to know manually that there is no BucketAccess using that bucket at that point.
B
For the backend bucket, I'd have to basically peruse through 100 different buckets, make sure that nobody's using the access, and then actually delete.
C
You're presuming that no one's going to write controllers to help with these things. I don't think anyone is manually twiddling with YAML most of the time when they're interacting with Kubernetes. Usually you build higher-level abstractions on top of these low-level abstractions to make these kinds of things less painful.
B
I have to respectfully disagree with that one point. Maybe the bigger point that we're trying to make is different, but just on that one thing: most people are still struggling with the YAML. Having worked with customers, I unfortunately know this part, and it's always a simple, stupid mistake that's bringing down the whole cluster, like they get a spelling wrong or something like that.
B
I think there is a lot of uncertainty around how this is going to be used, and I think it's worth actually putting it out there and testing how it goes. That would mean we go ahead with the approach we've fleshed out the most right now, while keeping in mind that there's another approach, and then we treat this like a testing exercise; based on what we learn, we can go back and try the other. I guess, correct me if I'm wrong: the main problem comes with the use case when we go from greenfield to brownfield, because for pure greenfield it's obvious that the one who created the bucket can delete the bucket. For brownfield, the admin creates the bucket, and I think the retention policy, if it has one, should always be retain, so nobody deletes it, because it just should be there. The main problem is when it goes from greenfield to brownfield.
E
Two different bucket classes: one that was used to provision a greenfield bucket, which has delete in it, and the other, which is for all of the other brownfield buckets, and has retain in it. So I don't think that, if you do the multiple-Bucket approach, there's any question about who deletes or anything else.
B
We're presuming that the admin won't make mistakes at that point; we're saying they'll do it perfectly, where they will not end up with unintended buckets with a deletion policy. There's also this other confusion there: if I create two Bucket objects, both of them pointing to the same backing bucket, and the first one says "retain the data after you delete this object" while the other one says "delete the data after you delete this object", how do you resolve that?
J
So I think the suggestion here, as far as I understand, is basically to discard this, so that there will be no lifecycle management in COSI at all. Basically, what we are saying is we are only managing access, and we have automation saying: I want you to do something when this access drops. Because, like you said, we may have different policies on the bucket that we want to apply, like deleting the data at the end, not deleting it, setting permissions or whatever for the rest of the world.
J
Things like that: who is the Bucket to be representing, the actual thing outside? Of course there is sharing with the rest of the world, but in this case what we're saying is that we're not even giving a single view for all the namespaces, right? We have these disconnected views of the bucket between them. So I think it's fine, but then I think what we're saying is that we don't want to automate bucket lifecycle things; we just want to allow provisioning and deletion.
E
I don't think that's true, because there's a difference between brownfield and greenfield. For greenfield we absolutely, completely automate bucket provisioning and lifecycle. What we're just saying is that for the brownfield case there are a number of sticky edges, and one possible approach for brownfield is to just say: look, you've got to directly deal with all those sticky edges; we're not going to try to be magic and intuit some sort of policy out of this by convincing everybody.
E
The API objects in the namespace are two because they're dealing with two different problems. One of them is whether you are pointing at an existing underlying bucket or whether you're minting an underlying bucket. That problem is completely distinct from providing access for your user or your application to that bucket. They are two different questions: one is bucket provisioning and lifecycle; the other is bucket access provisioning and lifecycle, and both are reasonable problems.
E
The first has a shared/brownfield kind of mode; the second has an indirect mode where you've handled it out of band. So they both have modes to allow for management outside the Kubernetes space, but they are different problems. You're managing different things, which is why there are two namespaced objects, right?
B
I actually don't follow here, to be honest. So what are we saying right now? Is there a problem with having multiple BucketRequests pointing to a single Bucket?
E
Where is the problem with multiple BucketRequests pointing to a single Bucket? The problem is that it is not easy to express how you should manage lifecycle in that case. That problem exists in two ways: it exists with multiple BucketRequests tied to the same Bucket, and it exists with multiple BucketAccesses tied to the same Bucket.
E
So you effectively have two sets of references that you have to figure out before you can actually take action on the underlying bucket; that's problem one. Problem two is which of these BucketRequests owns the management of that, because they could come in from different classes, so they could have different policies at their level. Yet normally we reflect that policy at the Bucket level, not at the BucketRequest level. So this now gets us into a situation where...
B
The lifecycle does not need to be managed, because the bucket deletion can always be manual, and we can go back to one of the suggestions that Ben made, which is to have the policy on the BucketRequest: when creating it, you could say, "I want delete permissions on this BucketRequest," that is, if I delete this BucketRequest, the underlying bucket should be deleted. That's one way of doing it.
B
I think this style of doing things still leaves it flexible enough to try different experiments.
E
Yeah, I guess my point is that I don't think you've actually reduced the work when you have a heterogeneous set of delete policies among the clients of a bucket. In one case you have to create multiple Buckets; in the other case you have to go into that bucket and manage per-BucketRequest policy statements about who's allowed to do what. So it's still a cardinality problem.
E
An owner makes sense in a greenfield case, right? For brownfield there's no owner. I mean, you have a weird edge, which is greenfield promoted to brownfield, but I question whether that's ever actually going to happen; you're likely to have pure brown or pure green. And if you look at those two scenarios, for pure brown nobody's going to be deleting.
B
Nobody's going to be deleting the backend bucket, but the Bucket object can be deleted, you see. That's where I think we both see the difference: the Bucket object deletion is okay, and that's all we're going to be doing, so the owner still makes sense when that's how you look at it. And the thing is, we've thought about this, and we also have an alternative approach that I want to propose, just so we have more information about where this is going.
B
So Guy and Jeff also brought up a very interesting approach, which is: we don't have multiple requests per bucket. Rather, it might make more sense, same diagram but without the multiple BucketAccessRequests, to have the BucketAccessRequest directly point to the Bucket and have only one BucketRequest.
A
Just say it's greenfield, sure. Well, it is what Ben said: it is this one-to-one mapping between a BR and a B, but there's still only one B, and you get multiple accessors to the B, and that's where the BAR-to-BA mapping comes in, and that's one-to-one. So what you're saying is that if you look at brownfield, why should I have to create a BR and a BAR as a user?
B
To explain that a little bit more: if you have brownfield access, you know the name of the bucket up front. However, if it's greenfield, you don't know the name of the bucket up front, because you only have the BucketRequest and the BucketAccessRequest, but the actual bucket hasn't been created yet. So the two approaches we have, and this is where we need your help, are as follows.
B
We have the BucketAccessRequest pointing to the bucket name if it's brownfield, or the BucketRequest name if it's greenfield. This introduces a difference in UX, user experience, for greenfield versus brownfield. Maybe Guy or Jeff, who came up with this idea, would like to say a few things about how we came up with this.
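As a rough sketch of the two reference styles just described, a controller might resolve the target bucket like this. The field names `bucketName` and `bucketRequestName` are assumptions for illustration, not the final spec:

```python
def resolve_bucket(bar: dict, bucket_requests: dict) -> str:
    """Find the backend bucket a BucketAccessRequest refers to."""
    spec = bar["spec"]
    if "bucketName" in spec:                         # brownfield: bucket known up front
        return spec["bucketName"]
    br = bucket_requests[spec["bucketRequestName"]]  # greenfield: resolve via the BR
    return br["status"]["bucketName"]                # filled in by the provisioner

brs = {"my-br": {"status": {"bucketName": "bucket-8f3a"}}}
assert resolve_bucket({"spec": {"bucketName": "existing-logs"}}, brs) == "existing-logs"
assert resolve_bucket({"spec": {"bucketRequestName": "my-br"}}, brs) == "bucket-8f3a"
```

This is exactly where the UX difference shows up: the author of the BAR has to know in advance which of the two fields to set.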
J
I can just give you the rationale. Compared to what Ben suggested: I think Ben suggested that we allow every namespace to essentially be a manager of the lifecycle, so we extend BucketRequest to every namespace, basically. It's possible; we're not saying it has to be that way. In our case, what we're saying is we actually restrict it to only the cluster scope.
J
Cluster
scope
buckets
for
brownfield
right,
which
administrators
sort
of
have
to
or
operators
needs
to
manage
or
their
greenfield
case,
and
I
think
what
we
said
here
was
that
we
just
I
don't
remember
if
we
had
two
options,
but
I
do
remember,
we
had
one
option
which,
which
will
be
to
have
like
one
optional
field,
for
a
bucket
request
name
and
then
the
controller
will
actually
fill
up
the
bucket
name
for,
for
the
greenfield
case,
right.
E
So
I
would,
I
would
just
observe
that
this
achieves
the
same,
which
is
it
gives
us
one-to-one
binding
between
bucket
request
and
bucket.
So
it
actually
does
for
that
part
of
the
architecture,
more
aligned
with
the
the
sort
of
kubernetes
approach
bucket
access
has
also
one-to-one,
and
so
that
so
then,
the
only
thing
that's
left
is
this
weirdness
of
bucket
having
no
knowledge
of
bucket
access.
B
Right
right,
we've
saw
we've
thought
about
that
too,
so
we
actually
don't
need
the
bindings.
If
this
is
how
it
looks,
what
we
do
instead
is
have
a
finalizer
added
for
each
bucket
access
request
that
gets
tied
to
a
bucket.
B
So
only
when
all
the
finalizes
are
gone
can
we
delete
the
bucket.
It's
only
relevant.
This
whole
bindings
concept
is
only
relevant
because
we
want
to
know
if
we
can
delete
the
bucket
based
on
the
fact
that
nobody's
using
it.
So
so
by
using
finalizers
we
we
circumvent.
I
mean
it's
not
really.
I'm.
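The finalizer idea can be illustrated with a small model. The finalizer key format below is made up for the sketch (real Kubernetes finalizers are strings on object metadata): deletion of the Bucket only proceeds once every BucketAccessRequest has removed its finalizer.

```python
class Bucket:
    def __init__(self, name: str):
        self.name = name
        self.finalizers = set()
        self.deletion_requested = False
        self.deleted = False

def bind_access(bucket: Bucket, bar_name: str) -> None:
    bucket.finalizers.add(f"cosi.example/bar-{bar_name}")   # hypothetical key format

def unbind_access(bucket: Bucket, bar_name: str) -> None:
    bucket.finalizers.discard(f"cosi.example/bar-{bar_name}")
    reconcile(bucket)

def reconcile(bucket: Bucket) -> None:
    # Deletion only proceeds once nothing holds a finalizer on the bucket.
    if bucket.deletion_requested and not bucket.finalizers:
        bucket.deleted = True

b = Bucket("shared")
bind_access(b, "app1")
bind_access(b, "app2")
b.deletion_requested = True
reconcile(b)
assert not b.deleted          # still in use by two BARs
unbind_access(b, "app1")
assert not b.deleted          # one finalizer left
unbind_access(b, "app2")
assert b.deleted              # last finalizer removed, delete proceeds
```

In other words, the finalizer set plays the role the bindings list would have played, but the bookkeeping falls out of the normal Kubernetes deletion machinery.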
I
A list of bindings, I have to say. Yeah, I think we were talking about one more disadvantage of this model, right, which is that users need to know beforehand if it's greenfield or brownfield, if it's an existing bucket or an auto-created bucket. And actually the application deployment YAML would really be different for these two cases, so we lose some portability in this case.
E
I agree. I mean, greenfield is: I don't really care, I'm not going to be sharing this, I'm going to mint a magic name. Brownfield is: I'm going to be coordinating through this with other people, or else I'm starting with a provisioned set. So I think, by the time you're crafting the YAML, you're usually going to know.
E
Yeah, I think if you just think about it as an application: why would I want greenfield? It causes all kinds of problems, right? It's a non-deterministic bucket name. So there is probably a class of things where, hey, I just need storage and I'm the only one who's ever going to use it.
E
It
is
kind
of
like
block,
except
that
it's
object
right,
you're,
the
one
who's
going
to
be
interacting
with
it,
but
I
think
it's
a
very
different
use
case
when
you're
using
object
for
the
purpose
of
interacting
with
other
applications,
for
you
know,
data
processing,
pipelines
or
anything
else
where
somebody's
going
to
be
consuming
it
yeah.
I
I.
B
So yeah, but at the same time the user may think, as Andrew mentioned: I need storage, and basically I don't care if it's existing storage or a new one; I just need some storage and I'm going to use it. It used to be up to the admin to tell you which storage to use, and now it's more like it's going to the user, so the user needs to say: oh, I need this type of storage. It's not the admin's decision anymore; it's more up to the user, too.
J
I
think
that
that
sums
it
up
right,
because
when,
when
you
have
a
request,
when,
when
you
just
want,
when
you
want
the
storage,
the
admin
can
can
come
up
and
and
match
you
with
you
know
or
an
operator
for
the
admin,
would
do
that
for
you
right
and
match
your
request
with
a
storage
even
existing
one
or
a
new
one
can
be
different
reaction
to
your
request
right
and
I
think
what
we're
saying
is
that
when
you
are
requesting
access
to
an
existing
bucket,
so
in
I
think
most
cases,
you
just
have
to
know
that
right,
you
you
are.
J
That's a good point, because in many cases what I would want the application to request is: I would like to request, let's say, a logs bucket, and I would like the admin to decide what the logging operation policy for that is. How do I provision logs buckets? Do I provision them all in the same one, or what?
G
Yeah, so are we saying that minting a bucket that is going to become brownfield, we don't have a case for that? Because if you're minting a greenfield bucket, it's going to be a non-deterministic name, right? So you're not going to be able to have an automated way of making that brownfield in the future.
E
Well,
it's
non-deterministic,
but
once
it
has
been
created
you
know
what
it
is.
So
if
you
so,
if
a
greenfield
app
has
written
into
a
bucket
and
created
a
bucket
and
then
at
a
later
point
in
time,
you
want
other
apps
to
be
able
to
access
it.
Then
you
would
just
provision
the
brownfield
pointing
at
that
specific
bucket.
E
Would
I
be
able
to
automate
that?
Well,
the
problem
with
automating
is
then
it's
an
order
of
operation
thing
which
guy
is
going
to
create
the
bucket.
If
you
bring
the
other
guys
up
that
that
want
access
to
it
before
the
bucket
is
cr
brownfield
before
the
greenfield
bucket
is
created,
then
you
kind
of
are
locking
yourself
into
a
deployment
model.
So
if
you,
if
you.
A
Well, where the name matters the most, I think, is this green-to-brown case. Why did I provision a bucket if I'm not going to access it? So I've got an app; it needs block-mode-type access to a bucket, and it needs to provision the bucket and then use the bucket. So in that case it somehow needs to know the name of the Bucket instance that we've created for it, and that name has randomness in it. So how does it know?
A
It's
not
a
deterministic
name
like
it's
been
mentioned,
so
that
is
what
sid
mentioned
a
few
minutes
back.
Is
that
in
that
case
of
green
going
to
brown,
which
is
not
rare,
it's
every
greenfield
case
is
a
green
to
brown,
because
I
need
to
access
the
bucket
and
therefore
what
you
would
do
in
your
bar
is
you
would
reference
the
name
of
your
br
and
in
the
br
b,
so
you're
still
good,
you
don't
need
to
know
the
name
of
the
bucket
to
create
the
br
or
the
bar.
B
That's, you know... and yeah, we could do the matching like PV/PVC, where it's based on the BucketAccessClass parameters.
E
No,
I'm
not
I'm
actually
suggesting
it's
it's
it's
more
than
that
when
you're
doing
a
bucket
access
class,
you
know,
in
other
words,
instead
of
the
bucket
access
request,
having
to
know
that
it's
greenfield
or
brownfield.
We
make
the
bucket
access
class
know
what
greenfield
or
right,
and
so
it's
the
one
that
contains
the
reference
to
the
bucket
or
to
right.
Yeah.
A
And-
and
we
did
that
in
the
old
library
design
to
unburden
the
user
from
having
to
know
the
name
yeah,
I
think
yeah.
J
The other concept that Ben raised, having just a straight BucketRequest and Bucket, makes a little bit more sense actually. Because if what we are saying is that we give the admin control over sharing, and not only control but responsibility to do everything about it, or automate it on their own, but COSI is not part of it, then it makes more sense to just have it.
J
Like
you
know,
bucket
request
in
the
bucket
represent
a
single
namespace
sort
of
you
know,
management
and
not
you
know,
because
just
you
know
having
another
role
for
the
access
class
to
represent
exactly
which,
which
bucket
right,
it
suddenly
becomes.
J
Some
more
you
know,
and
a
class
suddenly
becomes
something
we
instantiate
in
normal
workflows,
which
you
know
sounds
weird
yeah
also.
B
Yeah, it is weird, that is true. So I think what I want to do now is test it out. I think we should test it out and see how the users are going to use this. Basically, I want to know, out of these two approaches, which one I should try first. My preference...
B
Is
this
the
second
approach
where
we
have
the
bar
point
to
the
bucket,
because
I
I
believe
once
we
start
putting
it
out
there
and
getting
feedback,
I
think
we'll
see
even
more
problems
or
we'll
see
solutions
that
we
couldn't
possibly
think
of
right
now,
and
so
that's
what
I
need.
B
I
need
some
input
on
which
is
based
on
the
consensus
here.
A
Yeah, so I like it too. I like it from the point of view that you unburden the developer in the brownfield case from needing to know, in user space, the name of the Bucket instance, and I like that part. A potential downside is that right now the BAC is an abstraction of policy, right?
B
It opens up a can of worms, to be honest. I mean, I'm not opposed to trying it out, but maybe we should flesh it out more if we're going to go down that path.
E
I
think
so
yeah
good.
Let
me
just
say
that
that
it
strikes
me.
The
difference
here
is
between
accepting
this
wart
of
having
to
provision
bucket
access
classes.
No
note
that
it's
one
per
bucket,
not
one
per
access
right,
correct.
So
so
it
you
know
it's
basically
one
per
brownfield
bucket
that
you're
trying
to
provision.
E
Right, so what you introduced earlier, that I was arguing with you about (I tend to do this, trying to explain other people's ideas; I apologize for that), what I understood was that you were suggesting: look, let's just have a one-to-one BucketRequest to Bucket, whether it's greenfield or brownfield.
E
I'm
not
sure
that's
true
for
brownfield,
but
but
but
regardless
just
to
distinguish
between
the
two.
It
would
mean
that
effectively,
the
difference
is
a
bucket
per
user
of
the
bucket,
where
user
is
namespace
or
a
bucket
per
underlying
bucket,
but
still
being,
but
both
approaches
being
able
to
preserve
the
one
request
to
one
bucket.
E
Yeah, that's a great question, which is: if a BucketAccessRequest is depending on a bucket that is greenfield, how is that expressed in the BucketAccessClass and the BucketAccess?
B
I mean, yeah, I think what we're trying to discern between right now is a tiny detail, and I call it that deliberately: whether the BucketAccessRequest directly points to a Bucket, as in the original design that we explained, or whether it points to the Bucket through the BucketAccessClass. I think that can be resolved. We only have very little time left, so I think that can be resolved going forward.
B
I
want
to
quickly
summarize
all
the
things
we
discussed
throughout
this
care
preview
in
the
last
four
weeks
and
the
changes
that
I've
just
summarized
here
and-
and
I
want
to
first
yeah-
go
through
this,
which
is
so
so
we
these
are
the
list
of
changes
that
that
we
that
we
brought
about
based
on
the
discussions
here,
if
I
have
missed
any
I'm
happy
to
add
them.
B
But
what
I
have
here
are
one
was
we
made
the
access
policy
opaque
instead
of
trying
to
model
a
love
and
deny
rules?
We
had
this
problem
come
up
with
service
accounts
where,
if
there
were,
if
there
was
a
conflict
of
access
for
the
same
service
account,
how
do
we
resolve
it?
We
resolved
it
by.
We
came
up
with
the
conclusion
that
it's
best
to
deny
any
new
access
requests
that
have
a
policy
that's
different
from
what
was
originally
given.
The
third
one
was
a
bucket
credentials
path
resolution.
B
We
weren't
entirely
sure
of
how,
where
we're
going
to
mount
or
provide
the
credentials
within
the
part
we
resolved
it
by
saying
there
should
be
some
sort
of
a
base
path
and
there
should
be
the
file
name
or
the
rest
of
the
path
of
where
the
credentials
should
be
given.
So,
for
instance,
if
you're
running
an
ubuntu
part,
the
home
directory
might
be
different
from
running
a
different
part
with
a
different
os,
and
so
so
that
should
be
left
to
the.
The
creator
of
the
part
is
what
we
decided.
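That convention, a pod-chosen base path combined with a fixed relative credentials path, is simple to express. In this sketch, the relative name is an invented placeholder, not a value the spec defines:

```python
import posixpath

def credentials_path(base_path: str, rel_name: str = "cosi/credentials") -> str:
    """Join the pod author's base path with the protocol-defined relative name."""
    return posixpath.join(base_path, rel_name)

# Different images pick different base paths; the relative layout stays the same.
assert credentials_path("/var/run/secrets") == "/var/run/secrets/cosi/credentials"
assert credentials_path("/home/ubuntu") == "/home/ubuntu/cosi/credentials"
```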
B
The
final
thing
was
we
changed
from
phase
to
boolean
conditions.
It
wasn't
really
a
major
change,
but
it
was
there
if
I've
missed
any,
please
let
me
know,
but
but
I
wanted
to
kind
of
give
you
a
high
level
picture
of
all
the
changes
we've
done
and
you
know
it.
I
think
I
think
we
should.
B
We
should
talk
about
next
steps
in
terms
of
so,
like,
I
think,
after
today's
discussion,
I
think
the
decision
we're
going
to
end
up
with
is
is
some
form
of
this
design,
where
we
either
directly
point
to
a
bucket
from
bucket
access
request
or
point
to
the
bucket.
Through
the
bucket
access
class
and
in
green
field
cases,
we
have
the
bucket
access
request,
likely
point
to
the
bucket
request.
K
So, I think this was the last big open issue in terms of data flow. Once we have this locked down, I think it would be okay to sign off on working on the alpha, and we can take it from there. So maybe let's continue this discussion next week. One more area, and it's pretty minor, that I wanted to talk about: we have protocols like S3, AWS, GCS, et cetera. A small, minor nit:
K
Do
we
need
a
version
number
on
that
is
that
going
to
be
important
at
all
a
specific
version
of
the
s3
protocol,
for
example,.
B
Yeah
so
s3
has
deprecated
s3v,
so
there's
s3,
v2
and
s3
v4
weirdly,
there's
no
s3v3
and.
B
Yeah, S3 v2 is being deprecated, and I think we should just go with S3 v4.
A
Well, it's the protocol signature, but I don't know if that meets the requirement that Saad's bringing up. It's a good point, Saad. The way I have modeled it in the API is that the signature has either...
B
Yes, yeah, the bucket creation does not restrict the version. So, for instance, if a user requests a bucket, the best they can do is tell us what version of the S3 API to use.
J
What
all
clients
do
right?
You,
you
even
compile
this
into
the
client.
As
you
see
here
like
it's,
this
is
the
client
code.
Who
actually
has
this?
You
know
stamped
inside
the
code
saying
this
is
how
the
code
works.
It
cannot
work
differently
right.
K
The only reason we care about this is if how we surface the bucket information into the container changes by version. Otherwise, for us it's effectively passed through; we don't really care what the version is. If it's S3, depending on the protocol, we change how we surface the information into the container, right?
K
It's passed through, yeah. And so then the question is: would it change from version to version in terms of how we surface it into the container? Going back to what Ben said, it may not be the case today, but it's possible in the future.
B
That was my original idea, yeah.
K
I think that's fine. We just need to write that out, agree on it, and make sure everybody's okay with that.
A
Yeah, that might be more flexible than building it into the name like "s3v4", because then some of the fields under there may be different in that structure.
B
Yeah, and then we'd have to do parsing to check. Maybe, given that, it might make sense to have two different fields; yeah, it's a good point: a protocol family and a protocol version.
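The trade-off just mentioned can be made concrete: encoding the version in the protocol name forces parsing, while two separate fields need none. The combined-name grammar and the field names below are assumptions for illustration:

```python
import re

def parse_protocol(name: str):
    """Split a combined name like 's3v4' into (family, version)."""
    m = re.fullmatch(r"([a-z0-9]+?)(?:v(\d+))?", name)
    return (m.group(1), int(m.group(2))) if m.group(2) else (m.group(1), None)

assert parse_protocol("s3v4") == ("s3", 4)
assert parse_protocol("gcs") == ("gcs", None)

# With two separate fields there is nothing to parse:
protocol = {"family": "s3", "version": "v4"}   # hypothetical field names
assert protocol["family"] == "s3" and protocol["version"] == "v4"
```

The separate-fields form also sidesteps ambiguity in names that legitimately contain a "v" followed by digits.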
B
Version, yeah. I think this is a good discussion. It's tough to decide between the two approaches we shared here, and once we figure that out, I think we'll be clearer.
A
Great, everyone's invited; it's the same link, isn't it? Yeah. It would be great to get more input. I mean, I think I see the finish line in front of us, so we're really close. If we can just keep the momentum up another week, I think we're going to get there. Yeah, the Monday meeting will be good. Did you say it's the same Zoom? It's the same.
A
Yeah
and
then
we're
running
we're
four
minutes
late,
so
can
we
wrap
up
now?
Is
that
all
right,
sid
yeah
yeah
yeah,
I'm
done
thanks?
Everybody
really
appreciate
it.
This
was
a
constructive
dialogue.