Description
Kubernetes Storage Special Interest Group (SIG) Object Bucket API Review Meeting - 04 March 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
B: All right, good morning, everyone. So a bunch of exciting things have been happening in the last week alone. Last Friday I spoke with Nicholas about them implementing a Scality/S3-compatible operator, or COSI driver.
B: Yesterday we had someone from Red Hat reach out on the public COSI channel in the Kubernetes Slack saying that they've started writing a COSI Ceph driver, so things are definitely picking up, which is really good.
B: It has two protocols: S3 and Swift, OpenStack Swift.

B: The whole reason they added the S3 gateway, the Swift S3 gateway, was so that Swift can slowly be deprecated. And this is really for the OpenStack stack.
B: I don't believe many people are on this stack anymore, though there are some people still using it. However, it's like all the OpenStack companies have gone out of business, I think, except maybe one of them, maybe Red Hat. The last one around was Mirantis.
B: Yeah, so they're still compatible. That person hasn't joined the meetings as of yet, so maybe someone from Red Hat could invite them, or explain to them, or just have them reach out to us.

A: I have a question about...
A: If it's the same protocol, do you actually need to write different drivers for Ceph S3? And if somebody else is also S3-compatible, right, is another driver needed there?
B: You would need a new driver, because the authentication model for Ceph is very different. It uses Kerberos by default; sorry, for the admin API, not for the bucket API itself. And then the endpoints are unique to Ceph, some of the parameters are unique to Ceph, so we'll need a separate driver.
B: Right, right, all right. Anyway, so a bunch of pull requests: I've reviewed a bunch of them, but not all of them; I'll get to the rest today and tomorrow. So the first thing I wanted to bring up was a good request Blaine brought up, and I think we should discuss it all together.
B: The first question is: is that a problem? And the second one is: technically, is it feasible to not have that?
D: Well, yeah. I think you can have, like, discriminated unions, where there's a list of possible struct values and only one of them can be populated, at least in Golang, and maybe at the serialization layer. So yeah, as long as the thing that's serializing and deserializing your objects enforces a one-of kind of relationship.
B: The one thing, though, is that when I first looked at it, the deserialization of something like this, an empty S3 structure, leads to a nil value.
B: So if there was this concept of a default protocol structure, then this will get interpreted as nil, and all of the others will also be nil. So whoever is deserializing will not be able to disambiguate between a default and a nil value. Unless you have...
D: No, like, if you get a serialized version where all of the structs are empty, then you deserialize it and they'll all be empty and you won't know what it is. That's what I'm saying: that'll be an error. But that's okay. If that's an error, you just make sure that every one that you produce does have one filled in, you treat one that doesn't have any filled in as an error, and you stay out of trouble that way.
B: Yeah, but the PV does it right: there is a default, this empty one, in a sense. If everything is nil, we just fill it in as empty there.
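The one-of relationship discussed above can be sketched in Go roughly the way Kubernetes' own `VolumeSource` works: a struct of pointer fields where exactly one is expected to be non-nil, with the producer validating before serializing and the consumer treating zero-or-many set fields as an error. All type and field names below are illustrative stand-ins, not the actual COSI spec.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical protocol structs; each one is a pointer field in the
// union so that "not set" is representable as nil.
type S3Protocol struct {
	BucketName string `json:"bucketName"`
}

type AzureProtocol struct {
	ContainerName string `json:"containerName"`
}

// Protocol is the discriminated union: exactly one field may be set.
type Protocol struct {
	S3    *S3Protocol    `json:"s3,omitempty"`
	Azure *AzureProtocol `json:"azureBlob,omitempty"`
}

// validateOneOf enforces the one-of relationship at the boundary:
// an all-nil (or multi-set) object is rejected instead of guessed at,
// which is the "stay out of trouble" rule from the discussion.
func validateOneOf(p Protocol) error {
	n := 0
	if p.S3 != nil {
		n++
	}
	if p.Azure != nil {
		n++
	}
	if n != 1 {
		return fmt.Errorf("exactly one protocol must be set, got %d", n)
	}
	return nil
}

func main() {
	// A serialized object with no protocol set deserializes to all-nil
	// pointers; validation turns that ambiguity into an explicit error.
	var empty Protocol
	json.Unmarshal([]byte(`{}`), &empty)
	fmt.Println(validateOneOf(empty))

	var s3 Protocol
	json.Unmarshal([]byte(`{"s3":{"bucketName":"logs"}}`), &s3)
	fmt.Println(validateOneOf(s3)) // <nil>
}
```

The key design point is that validation lives next to serialization, so the nil-versus-default ambiguity raised above never reaches the consumer.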
C: I guess, for part of this discussion, I did look into it: the OpenAPI v3 schema allows setting, like, a one-of validator.
C: And it also could be, not a mutating webhook, but a validating webhook, that's the word. So there are options for that. But I think I saw the COSI spec uses the CRD generation code, and I did a little investigation to find out that the one-of relationship, at least, can't right now be put into the validation. And there has been some talk about... oh, there are some additional parameters, like the x-kubernetes- things, and they've been talking about x-kubernetes-union as an option, but I'm pretty sure it doesn't exist yet, or at least Google searching turns up nothing about it.
D: But I really like the idea of doing it like Kubernetes does it, and just relying on one of the structs to be filled in. Because if you do that and also have a string, you still have all of the same error cases related to two structs being filled in or no structs being filled in, and you're adding new error cases where the struct that's filled in doesn't match the string. So, right, it doesn't buy you anything to have this string; it's just extra work.
B: I wouldn't say it doesn't buy you anything; here's the one place where it actually helps: passing in this field. First of all, it's easier to read, you just go read the struct. Then there's passing it: the application has to pass this structure.
B: The application pod would have to pass this, and again, if there was a possibility of an empty struct, an empty S3 struct, being valid, you wouldn't be able to tell whether it was filled in, or if it's an error case, or if it's valid. Again, this all depends on the idea that there is such a concept as an empty protocol struct.
C: I think there is also a potential option, again whether this is in the COSI controller or in an actual mutating webhook: in this case, if there are default values set, we could just set those defaults, so that instead of being, like, an empty string or a zero value, they could actually specify that default.
C: Unless you think that the application itself might want to have a quote-unquote default, in which case we actually want to pass the application an empty string or something like that. In all cases, I still wonder if there's a way to pass information to the application in such a way that they can see whether one of the properties, or one of the protocols, is not nil while the rest are nil, and then have the COSI controller in some way verify that.
B: You showed that here, right: if it is set this way and nullable is set to true, then it gets passed this way, where it's not nil, but it's an empty object. Right, yeah, that might work. So here's the other thing: now we're asking the application pod to, like, reflect on a zero value.
B: I'm sure there's a way to tell, using the reflect package in Go, whether it's the zero value or not. So the reason I bring that up is: say an application were to check which one is empty and which one is not, and it goes through the list of fields one by one.
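The reflect-based check described here, walking the struct's fields one by one and asking which one is not the zero value, can be sketched like this. `reflect.Value.IsZero` exists in the standard library since Go 1.13; the `Protocol` type is again a hypothetical stand-in.

```go
package main

import (
	"fmt"
	"reflect"
)

// Hypothetical protocol union mirroring the one under discussion.
type Protocol struct {
	S3    *struct{ BucketName string }
	Azure *struct{ ContainerName string }
}

// nonZeroFields walks the struct with the reflect package and returns
// the names of fields that are not their zero value (for pointer
// fields, the zero value is nil).
func nonZeroFields(v interface{}) []string {
	var set []string
	rv := reflect.ValueOf(v)
	rt := rv.Type()
	for i := 0; i < rv.NumField(); i++ {
		if !rv.Field(i).IsZero() {
			set = append(set, rt.Field(i).Name)
		}
	}
	return set
}

func main() {
	p := Protocol{S3: &struct{ BucketName string }{BucketName: "logs"}}
	fmt.Println(nonZeroFields(p)) // [S3]
}
```

An application could use a check like this to find which single protocol was populated, and treat zero or multiple non-zero fields as the error case discussed earlier.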
C: I guess I'm trying to understand in what way the protocol is passed to the application: is it passed via Golang, like a serialized or just a direct Golang object, or is it actually passed as a Kubernetes resource?
B: Yeah, I'll show you. So it's actually passed in as a file, and the file structure looks like this; give me one second, I'll put it up.
D: So, yeah, I don't see a big problem here, right?
B: Yeah, I get it. Yes, I think it'll be more in line with how persistent volumes do things now.
C: Okay, yeah. There might be a little bit to prove out, just to make sure there's not the possibility that the JSON is being rendered with multiple non-nil, even if empty, structs, but it seems like it should be technically pretty possible. And, I mean, my thinking is basically just that it mimics the volume sort of API spec that Kubernetes has had for a while.
B: That is definitely a positive, yeah; it's going to be symmetric with the other Kubernetes resources. All right, so coming back to this struct: this all depends on this idea that a completely null value is not a valid value.
B: I think that might be the case, actually. So take, for instance: the S3 struct today has the bucket name inside of it, and if that is empty, like in this case, you can't really do anything with the structure, so that is an invalid value.
B: Do we really not know, or isn't it a fair assumption to make, that all of them will have a bucket name or some identifier for the bucket?
D: I mean, imagine a system that always identifies buckets by number or by UUID, or... no, there's...
E: One could argue, for example, that CDMI does not have the bucket name concept.

B: What doesn't have it?

E: CDMI, which is a not very widely used, but still, an object storage protocol, and it doesn't really have an object... sorry, a bucket name.

B: I see, so it's like a global namespace of objects, kind of.

E: Yes.
B: Well, I'll play devil's advocate to what I just asked. Like he just said, CDMI has the global namespace, so there's no concept of a bucket there; you just need an endpoint and you're good to go.
C: I think, I mean, for me at least, the biggest argument for having the bucket name be inside each individual protocol is sort of the nomenclature that different protocols might use. All protocols will want something to be an identifier, but not all protocols will necessarily use the term "bucket"; some might use, like, "container" or "ID" or whatever.
C: So I think that flexibility makes it more straightforward to map from a protocol to the COSI standard, when there's not the possibility of having to translate, well, "hash" in this protocol translates to "bucket name" in the COSI spec; you can just have "hash" in the protocol itself.

B: I see.
B: Okay, so I'm trying to get my thoughts together and wrap my head around this. We do abstract over this concept of a bucket: in terms of provisioning and granting access, it's all around this concept of a bucket.
B: So on one hand, I can see that argument applying there also, but on the other hand, this is going to be user-facing. We can expect the vendors to convert their concept of whatever into a bucket, but it's a little more difficult for the application writer to do the same.
B: Oh, sure. I mean, the reason I was bringing that up is: if there is any field inside of the protocol that's common, why can't it be here at all? We just discussed why not. It was more of a question, rather than...
B: As of now, yeah. Okay, so that makes sense. So we're going to remove the version field, as we discussed last week, right?
F: Yeah, that's fair, Ben. I'm just looking... I guess the review is just going to be very light on the KEP at this point, and so maybe it's just good enough. There are several commits in that KEP, and when Sid thinks it's good enough to merge, we can squash and merge it. But I will take out our... no, I'm sorry, I joined the meeting late; I had another meeting that ran long. Can you remind me, Ben? Refresh my memory.
D: The version that was at the same level as the bucket... at the same level as the protocol. Because we need the signature version inside S3 to discriminate between, like, S3v2 and S3v4, and potentially a future S3v5. That could be relevant, because the only alternative would be to force something like an S3v5 into an entirely new protocol, which we said we didn't want to do. So, yeah.
F: I know the initial reason for that field being up a level, out of the protocol, was that we thought version might be common across all protocols, so it was a guess, and therefore it would have been a common field, and that it might matter to a workload. Right, I don't want...
F: Yeah, we even thought... just so you know how we were thinking at the time: the bucket class would have the version for the protocol as well. So the workload says bucket class foo, and the workload and bucket... let me think. Version isn't in the BR, right, Sid? It's not in the BR, but it is in the BC that the BR points to. Yes. Never mind.
D: And then the idea is the client could attempt to negotiate down, if it knew how to do that, but this is the server's way of telling the client the highest version that it knows how to talk. Right, and yeah. So in the future, where there is, like, an S3v5...
D: Yeah, so for that protocol you'd probably want to have a list, or a struct with keys and values, so that you could express that. But I think most people understand S3 signature versions: they don't change very often, we've been on v4 for a very long time, and while v5 could show up, it would be surprising if it did.
D: I was just going to say it: and other protocols could have whatever they want. They could have a version, they could have a bunch of capability fields, whatever makes sense within that protocol that the workload and the server need to agree on. Right.
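A minimal sketch of the design being converged on here: the signature version lives inside the S3 struct rather than one level up, and other protocols carry their own fields with no forced common version. The type and field names are illustrative, not the actual COSI API.

```go
package main

import "fmt"

// S3SignatureVersion discriminates S3v2 from S3v4 (and a hypothetical
// future v5) without promoting "version" to a field shared by all
// protocols, where it would mean nothing for some of them.
type S3SignatureVersion string

const (
	S3SignatureV2 S3SignatureVersion = "S3V2"
	S3SignatureV4 S3SignatureVersion = "S3V4"
)

// S3Protocol keeps its version as a protocol-internal detail.
type S3Protocol struct {
	Endpoint         string
	BucketName       string
	SignatureVersion S3SignatureVersion
}

// Another protocol carries whatever capability fields make sense to
// it; nothing forces it to have a version at all.
type AzureProtocol struct {
	StorageAccount string
	ContainerName  string
}

func main() {
	s3 := S3Protocol{
		Endpoint:         "s3.example.com",
		BucketName:       "logs",
		SignatureVersion: S3SignatureV4,
	}
	fmt.Println(s3.SignatureVersion) // S3V4
}
```

If a protocol later needed richer negotiation, the single field could grow into the keys-and-values struct mentioned above without touching the other protocols.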
B: Yeah, makes sense. All right. So one more thing: S3 we understand quite well, but GCS and Azure, not so much. Either someone from GCS and Azure has to come in and pitch in to make sure we get the protocol structs right, or one of us should go do the research and figure out what should go in there.
B: We did the research up front, early on, but the amount of time we've spent on S3 is much higher than what we've spent on the other two, and I think the other two also need the same amount of rigor. So I'm just stating it here: I've reached out to the GCS and Azure people; until they arrive and come start working on this...
B: ...one of us will have to go and understand how the other protocols work, because that will help us have much more informed conversations about these fields, and also make sure that we're not making a design choice that wouldn't work with one of the protocols. We'll figure out who and what needs to be worked on later on; I'm just stating it here. So.
D: Yeah, as I think about it, though, I feel like that's the kind of thing where it's okay to screw it up in the alpha, because the point of the alpha is to get the framework out there, to get people using it, to make sure that your primary use case, which is S3, is working okay. And then, if people come in and actually start writing drivers and say, "Wait, the GCS one is all screwed up," you say, "Oh well, it's alpha, we'll just fix it." Yeah.
B: That's a fair point, yeah, and that does make things easier. All right. So the next thing: okay, I'll kind of talk about the bigger picture before going into the protocol fields. One of our main goals right now is to do the API review, and then, you know, pass the API review.
D: I think there are ways to address sharing that involve just more controllers and more automation, even if we stick with our current model, where the BR is namespaced and the B is not namespaced. But, like, I still want to hear what the alternative is. As far as I know, the network guys never showed up and told us about their design.
B: Not yet; let's reach out to them one more time. Maybe we'll have them join us another time. But yeah, aren't we following the same model as PVCs and PVs?
B: The network gateway, so it was something like this, yeah: it's going to be something like a namespaced pair, like network gateway request and network gateway. I don't know the exact names, but something like that. It's equivalent to how ClusterRoles and Roles work: there's a ClusterRoleBinding, and there's a Role and a RoleBinding; one is namespaced, one is not.
D: Okay, I had heard that suggestion; I didn't know that was what they had done. So then the idea is: if you're creating buckets for use in your namespace, you would just create the namespaced object and you would keep it in your namespace forever. But the moment you want to start sharing across namespaces...
B: Right. And on one hand, I can tell you, as much as, you know, I don't want to do it, I can tell you that this is the last chance you'll have to change. So if you're going to consider it seriously, you should do it now.
B: We didn't even have the discussion seriously, I mean.
B: We did; I did find some issue with it. I can't remember, that's the problem. Because, see, if we go down that path, all the different concerns we had earlier have to be addressed again. And with the current approach, maybe it's not perfect, but it addresses all of the different concerns that we have.
B: Including portability, and stuff like making sure that it's idempotent, declarative, whatever else we care about. We just have to go through the same rigor we did last time, and if we find any issues, we have to see if there's a simple way out of it with the new solution, or, if it's too complex, we can let it go. I do remember doing this exercise once and finding that there was some problem; I can't remember what it was, though. We can do it.
D: I'm trying to think about it now, and the main difference is that you would never have a need to clone the bucket object, right?
B: Well, you kind of have to, because how do you share a bucket that's been created within the namespace?
D: Well, yeah. So if you had a bucket that was created in a namespace and you decided, you know what, I want to share it, I think what you would do is delete the bucket request and the bucket and replace them with a clustered bucket request and a clustered bucket, and that would be your migration path from greenfield to brownfield. And if you want to start off in the... I guess the question would be, like...
D: Everyone can just refer to the clustered bucket request, which is visible to everybody, and you have to do RBAC on it to prevent, you know, the wrong people from deleting it or doing things to it. And, I guess, if you started off with an ordinary bucket request just within your namespace and you changed your mind, the migration path would be: you could delete those and replace them with equivalent clustered resources.
D: I don't know how the BA... the BAR and the BA objects, how those would change in such a world, but one could imagine allowing those to point to a clustered resource instead of a namespaced resource.
D: The difference I see is that you double all of your code that has to handle buckets, right, because you have to have an "if namespaced bucket, else if clustered bucket."

B: You can have two separate controllers.

D: I don't know. No, no, no; I mean, that's the thing when you think it through.
D: You need one controller, because you only want one instance of the COSI plugin, but that one controller would have to have special logic to deal with namespaced buckets and non-namespaced buckets. And, I mean, it would do the same thing, but you'd have to have all these if statements everywhere, because the API objects would be different ones.
C: Well, I mean, maybe what Sid was saying is that the COSI controller itself would have two reconcile loops, and one reconcile loop would be for namespaced and one reconcile loop would be for non-namespaced.
D: Yeah, yeah. I mean, at the very top level, it's two different objects that are being watched, and two different event handlers, but very quickly they're going to fold into using the same code to actually do the work. It's just that, if it's a clustered one, you have to make sure you work on the clustered object; otherwise you work on the namespaced object. But all of the interactions with the COSI driver don't care, right, once you're down...
D: That's why I'm saying it's going to double part of the code, and not the rest of the code, but it is higher effort in terms of the number of API objects you have to deal with and the number of lines of code you have to write to do that. But it does have the positive result that, you know, no one's ever going to go around deleting these, no one's ever going to go around cloning the buckets, and it makes the cleanup problem a lot less annoying, I think.
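The shape described here, two top-level watches (namespaced and clustered) whose event handlers quickly fold into the same worker code, can be sketched without any Kubernetes machinery. `Bucket` and `ClusteredBucket` below are stand-ins for the hypothetical API objects, not real COSI types.

```go
package main

import "fmt"

// Two distinct API objects: one namespaced, one cluster-scoped.
type Bucket struct {
	Namespace string
	Name      string
}

type ClusteredBucket struct {
	Name string
}

// provision is the shared core: everything from this point down talks
// to the driver and does not care which object kind triggered it.
func provision(key string) string {
	return "provisioned " + key
}

// Each watch has its own thin event handler that only knows how to
// derive a key from its own object kind, then funnels into provision.
// This is the "if statements at the top, shared code below" split.
func handleBucket(b Bucket) string {
	return provision(b.Namespace + "/" + b.Name)
}

func handleClusteredBucket(b ClusteredBucket) string {
	return provision(b.Name)
}

func main() {
	fmt.Println(handleBucket(Bucket{Namespace: "alice", Name: "logs"}))
	fmt.Println(handleClusteredBucket(ClusteredBucket{Name: "shared-logs"}))
}
```

The doubled part of the code is confined to the two thin handlers; the driver-facing logic stays single.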
D: I'm good, I'm just... so, you know, my background is from the volumes and snapshots, and I think a lot of us know, because we've done the thing with snapshots that we've proposed doing for COSI here, where it's always a namespaced thing. And I was wondering: what if we had decided to also allow non-namespaced snapshots when we did the snapshot work? What would that change? It would make it way easier to share snapshots across namespaces.
D: We didn't tread that path with the snapshot design, but, I don't know, I find that appealing at a high level, and I think we won't find the showstopper issues until we roll up our sleeves, start changing everything, and walk through all the workflows, and then say, "Oh, this is going to be nasty." Yeah, yeah, I'm afraid.
B: Of that, though. But also, there are some obvious questions that aren't answered. Like, I don't see a big benefit of that over what we have; I mean, maybe there's a small benefit, but I don't see something compelling.
D: The biggest benefit that I see is that a non-administrator user could easily get access to a shared bucket that somebody else owns, right. Because in the model we have now, you either need an administrator to come in and clone the bucket for you, so that you can get access to it in another namespace, or at some point we're going to have to write a controller.
D: And then, when we write that new controller, or when we go through this process, we're going to be faced with a bunch of RBAC issues and we're going to have to think it through. Whereas if you just go with the clustered bucket model, you can lean on the existing RBAC system and say, well, you know, it's a different object, so when you set up your roles...
D: But it would not be typical for ordinary users to have create access for clustered bucket requests, right. And it would be a special role: the person who's allowed to create clustered buckets would have an elevated privilege that you would give only to people who you want to be able to do that. And then the question becomes, like, how dangerous is that role? What can someone with that role do?
D: I guess they can probably delete other people's buckets, which is a little dangerous, so maybe, say, you know, only give it to people you trust.
D: If I have a bucket that I want to be available in both Alice's namespace and Bob's namespace, the model today is: I create two Bs, and then I create two BRs, one in Alice's namespace and one in Bob's namespace. But both of the Bs are non-namespaced, and somebody has to create two of them, not just one of them.
B: I don't think it's good at this point, but I know it's good that we started the conversation. I think this was something we should talk about once. And really, again, if there is something that is very hard to do with the current approach, it will be good that we've discussed this and have a good sense of what the trade-offs are, because we can bring it back up if there is some point we reach where we're like...
B: ..."Okay, we just can't solve this with the current approach." Otherwise, I don't know if we should go back to square one, to be honest. Again, we need a very good reason to make such a huge architectural change. Yeah.
D: But I mean, the best reason that I think we have available is that Tim Hockin seems to regret the PVC model and he likes this network gateway model. Yeah. And he's the guy who has to approve the API.
B: Yeah, he's flexible, though; that's one. The second thing is, I keep going back to the PVC/PV model. The PVC/PV model is difficult because you're trying to do a two-way binding between PVCs and PVs. In our case, we don't have that double-coupling kind of thing; we just have one way, right.

D: No, it's a two-way bind. It's always been a two-way bind.
B: Slightly, but not the race conditions; otherwise...
B: Right, the PVC problem is two-way binding with the matching, in the sense that, when a PVC is created, you don't know if there are other PVCs contending for the same PV that you're trying to get. And so what happens is: you create a PVC, you try to match with one PV, you start by matching with it, and then some other PVC ends up also trying to match with it. There's a race condition there; only one of the requests, or, you know, the updates...
B: ...to the PV succeeds. You don't know which one, and they try to match back to the PVC. Well, you do know which one; it's...
B: ...just, the way it goes is: the PVC will look for a volume, and then the PV is updated to point back to the PVC.
D: Static provisioning, where you had sort of a bunch of PVs out there, statically provisioned, and the only problem you're trying to solve is, like, which one do I get. And then, when we moved to dynamic provisioning, we didn't rewrite the API; we tried to sort of graft it in, and it works, but it's a little funny now because of that history.
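The binding race described above, two claims matching the same PV with only one update winning, is resolved in Kubernetes by optimistic concurrency: the write to the PV carries the resourceVersion it was read at, and a stale writer gets a conflict and has to re-list. A minimal sketch of that mechanism (not the actual kube-apiserver or PV controller code):

```go
package main

import (
	"fmt"
	"sync"
)

// pv models just the two fields that matter for binding: the claim it
// points back to, and a resourceVersion for optimistic concurrency.
type pv struct {
	mu              sync.Mutex
	claimRef        string
	resourceVersion int
}

// bind is a compare-and-swap: it succeeds only if the caller read the
// PV at the current resourceVersion and it is still unbound. The loser
// of the race gets a conflict and must go find another volume.
func (v *pv) bind(claim string, readVersion int) error {
	v.mu.Lock()
	defer v.mu.Unlock()
	if v.resourceVersion != readVersion {
		return fmt.Errorf("conflict: pv changed since read")
	}
	if v.claimRef != "" {
		return fmt.Errorf("already bound to %s", v.claimRef)
	}
	v.claimRef = claim
	v.resourceVersion++
	return nil
}

func main() {
	volume := &pv{resourceVersion: 1}
	// Both claims read the PV at version 1, then race to bind it;
	// only the first update succeeds.
	fmt.Println(volume.bind("ns-a/claim-1", 1)) // <nil>
	fmt.Println(volume.bind("ns-b/claim-2", 1)) // conflict
}
```

Dynamic provisioning sidesteps the race entirely, as noted below, because each claim gets a volume created specifically for it rather than contending over a shared pool.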
B: And yeah, right now, with the matching thing, you don't know which volume you're going to get, but with dynamic provisioning that's not the case anymore, so it's predictable.
D: The whole point of the PV being non-namespaced is based on the static provisioning assumption that there's a pool of stuff out there that is shared, and you don't know which namespace each volume is going to end up in, so you need a non-namespaced PV to represent them. And once they become bound, that sort of puts them effectively into a namespace, through the binding.
B: Well, you also originally had this idea of reclaiming a volume, and recycling a volume, and then, once it was recycled, someone else could use it. Right, right, but that goes back to...
D: But just because, in general, you can't have multiple pods attached to the same PVC, or, if you can, because it's like an NFS volume, they're still probably going to be in the same namespace.

B: Yeah... I mean, I'm not quite doubting you, but I'm sure you can come up with cases where you'd want...
D: ...to share it. But those are special and not the main situation, I think. But buckets, I think it's much more obvious that they're likely to be shared, right? We spent a huge amount of time talking about the brownfield shared-bucket use case, because that's almost the common one. Yeah, it really is that common.
D: And having a clustered bucket request and a clustered bucket, and then, of course, I guess the other corresponding changes are: our BRs and our Bs would both become namespaced, so that the Bs would no longer be non-namespaced.
B: So, yeah, there's one more thing: I want to bring up one distinction between network gateways and buckets. Network gateways don't have this concept of sharing whatsoever, because with the network gateway, the idea behind it was: you have some gateway, some network paths, that are managed by a central, organization-level team, like an admin team, and then there are some that are managed by the application team. You don't generally have that concept there of this...
B: ...you know, a shared network gateway, where you need to have different forms of access to that shared network gateway, unlike buckets. I think, given that network gateways are more transient, in the sense that they're not holding any space, they're not alive once they're de-provisioned, and they don't mean anything once they're deprovisioned or once someone stops using them...
B: ...I don't know if that model exactly translates here. Because you could say the same thing about ClusterRoles and Roles also: there's no concept of sharing there. Someone else isn't going to want that role from another namespace; it can be created on its own in your own namespace, have the same rules, same policies, but they can be two distinct objects; they're not referring back to the same central role.