Description
Kubernetes Storage Special Interest Group (SIG) Object Bucket API Review Meeting - 18 February 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A: All right, so I'll give a quick overview of what our current priorities are and where we're at with regard to these things. One of the things we said a few months ago (Andrew, you might have missed this, and anyone who's new may not know about this) was that we wanted to keep a demo as one of our milestones. And the reason for choosing a demo as the milestone?

A: It forces us to work on the end-to-end aspect, on every part of the architecture, and to have one use case working well. Right, it forces us to work on the end-to-end workflow. The demo we defined? We defined it as...

A: Yeah, so for the demo we defined our requirements as: a COSI system which would be able to create a bucket, grant access to that bucket for a workload, and then... where granting access means creating a token for that workload, and provisioning that bucket into a pod means providing that token to the workload.
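The demo workflow just described (create a bucket, grant a workload access by minting a token, and hand that credential to the pod) can be sketched roughly as below. This is a toy illustration only; every class and function name here is hypothetical, not the actual COSI API.

```python
# Toy model of the three demo steps: create bucket, grant access
# (mint a token), provision the credential into the workload's pod.
# All names are illustrative assumptions, not real COSI interfaces.

class FakeObjectStore:
    """Stands in for a real object-store backend."""
    def __init__(self):
        self.buckets = {}
        self.tokens = {}

    def create_bucket(self, name):
        # Step 1: the bucket is created automatically for the workload.
        self.buckets[name] = {"objects": {}}
        return name

    def grant_access(self, bucket, workload):
        # Step 2: granting access means minting a token scoped to
        # this workload and bucket.
        token = f"token-for-{workload}-on-{bucket}"
        self.tokens[token] = bucket
        return token


def provision_into_pod(pod_env, bucket, token):
    # Step 3: in a real flow the credential would be surfaced to the
    # pod (e.g. via a mounted secret); here we just set env vars.
    pod_env["BUCKET_NAME"] = bucket
    pod_env["BUCKET_TOKEN"] = token


store = FakeObjectStore()
bucket = store.create_bucket("demo-bucket")
token = store.grant_access(bucket, "demo-workload")
env = {}
provision_into_pod(env, bucket, token)
```

When the workload goes away, the teardown path discussed later in the meeting would delete the bucket and revoke the token in the same automated fashion.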
A: So we added a few more constraints to the demo. We said whatever we show in the demo won't be a smoke-and-mirrors kind of demo. It will be a demo where the code is tested, there will be test cases, there will be documentation, and it will be as close to shippable code as possible. And I'm happy to say we've made a lot of progress on this front.

A: There was some uncertainty until last week or so, because we were still changing one of our gRPC APIs, but that's been resolved, and so we should be on track to show you a demo very, very soon. I'll have to figure out the progress of the individual tasks, probably during this meeting or after. So that was one of our milestones.

A: The other milestone we had was the API review. So we got started on the API review, and a really interesting question came up from the API reviewer. I have to go back a few slides for that.
A: So the question that came up was: there are some scenarios where we might possibly envision mutating a bucket. Say, for instance, you want to make a bucket go from private to public, or you want a configuration set on it, like object locking enabled. So a question came up: is bucket mutation something we even want to support in the first place?

E: And we discussed the two that you just mentioned specifically, and decided that for those, probably not, right?
A: Right. So, since there are new people, and people who haven't participated in this discussion, I wanted to take them through this again and define that decision one more time. So, for bucket mutation, just in practice...

A: Let me put it this way. Once the bucket is created through the COSI system, we expect that creation to be automatic, in the sense that a bucket request is made and the bucket creation happens for the workload; and when the workload goes down, COSI is really designed to tear down that bucket once it's done. In such a scenario, since the bucket is created for the workload (it's not always the case, but since it is), bucket mutation is not a high priority.

A: I'm trying to find the best way to put it. It's not... it would almost be an anti-pattern.
A: That's what I want to say. Now, like Ben said, there might be something in the future where we might see that we need to support bucket mutation, but with the experience I've had in this space, seeing how MinIO's customers use buckets, it's almost never the case that bucket properties are changed.

F: So can I add a couple of things? This is Andrew.
F: Sorry, in case that's not obvious. And by the way, I was completely in line with that, but I would say that there are two considerations that have bothered me along the way, where I feel like we're not really directly aligning with this. One of them is: we had a lot of discussion, and I don't even remember entirely where we landed, on the control over who cleans up a bucket when you de-allocate the original request for it. And this was the whole greenfield-versus-brownfield thing.

F: But I do feel like one of the things we definitely had was a convoluted, if not non-existent, notion of clear ownership of the bucket lifecycle, and I think that was one of the things the API reviewer responded to: that we were not crisp about how ownership works. Now, I think he also asserted that ownership then also becomes the point of control where you could mutate the bucket, and we can argue that we don't think bucket...
A: Yeah, there's that, that's one. And number two is about ownership. So there are two sorts of ownership that you mentioned: one is ownership over deletion, another is ownership over changes. In the case of deletion, we were talking about which one of the buckets, if there were multiple clones of a bucket when it's being shared across namespaces, would trigger the deletion of the actual backend bucket, and I think we reached a conclusion on that.

A: We said we would designate one of the buckets as the owner bucket, or we had a name for it, I'll have to look back and tell you. But when that bucket is deleted, we pull the rug out from underneath all the other buckets as well. That was the deletion part. Ownership over deletion is different from ownership over mutation, where what Tim raised was: the user is responsible for creating the bucket, so shouldn't...

A: ...they also be able to mutate the bucket? And if that's the case, how do we allow them to do it? That's what I've written down here as self-service. So that's the ownership question that came from Tim. It's all assuming that bucket mutation itself is something we should support, right? So.
F: I guess what I would point out is: there are really two questions here. One is the mechanism, and the other is what it means to be an owner. And, for example, one of the things Tim mentioned to us (and I'm going to apologize for maybe not being super on top of the latest design)...

F: ...one of his suggestions was that brownfield and greenfield maybe should even look different. That as a subscriber, on the claim side, what you're putting together should be very clear that this is brownfield, in which case maybe mutation isn't even possible, or it's greenfield, in which case, why not allow mutation if you're the sole owner of a bucket?
E: That was the path we started to go down when we were trying to design a way to support mutation. It was basically along those lines: there would be a flag that says this is the one that is the owner. We were calling it something like an update policy, and it could be distinct from the deletion policy or the same as the deletion policy. We could rename the field, that doesn't matter, but yeah, some field on the non-namespaced object.

E: That says: this one is the original, this is the one that can make changes and can do deletions, and anything else is just a pale shadow of it and can access it, but the controller won't respect changes through those replicated handles of that bucket. But even that turned out to be kind of gross, and so I thought the question we were facing was: should we sidestep all of this and just not allow mutations?
A: Yeah, like ClusterRoleBinding and RoleBinding. I remember this. But again, this is all assuming... just to set the context again, assuming that mutation is something that's important. We discussed this, and we have two other approaches as well, based on what Tim was saying about having a namespaced version of it and then the cluster version of it.

A: There are still a lot of unanswered questions, and I don't know if we should go down that road just yet. He was saying this in the context of a DNS policy, I believe, or a load balancer policy, that was being applied internally at Google, where a team could specify a policy that was specific just to their needs, and the admin would be able to specify a default policy for the entire setup.
F: Where you move much more of the control, and then even the resulting resources, into the namespace as the mechanism for that. And you can still apply policy, and then you can have different access control rules that allow you to modify the spec but not the status, which is sort of a default, right?
E: I don't understand how that's different from the direction we were going. Because the theory would have been that, let's say the shared-versus-private attribute was something you wanted to be mutable: you would have a field on the bucket request that specified what you wanted to request, and then it would be reflected into the Bucket object, the non-namespaced object. And there would be a policy on the non-namespaced object that would say whether changes to the namespaced object should be propagated or not.

E: And if it said yes, because it was the so-called greenfield case, then you could change it, and some controller would notice that change and copy it into the Bucket object. And then some COSI driver would go do the work to make that happen, and then reflect it in the status. And it would be self-service only if the policy was set. The whole problem was: if you've got two or three different objects all pointing to the same actual bucket, what is your source of truth? Right, right.
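A minimal sketch of the propagation rule being debated here, assuming hypothetical field names (`mutation_policy`, `anonymousAccess`): a change made on a namespaced request is copied to the cluster-scoped Bucket only when that Bucket's policy allows it; otherwise the Bucket remains the source of truth and the change is ignored.

```python
# Hypothetical model of the policy-gated propagation described above.
# None of these names are the actual COSI API; this is a sketch only.

class Bucket:
    """Cluster-scoped (non-namespaced) object: the source of truth."""
    def __init__(self, mutation_policy):
        self.mutation_policy = mutation_policy  # "Allow" or "Deny"
        self.spec = {"anonymousAccess": False}


class BucketRequest:
    """Namespaced handle bound to a Bucket."""
    def __init__(self, bucket):
        self.bucket = bucket
        self.spec = dict(bucket.spec)  # user-editable copy


def reconcile(request):
    """Propagate request.spec into the Bucket iff the policy allows."""
    if request.bucket.mutation_policy == "Allow":
        request.bucket.spec.update(request.spec)
        return True
    # Change ignored: replicated handles cannot mutate the Bucket.
    return False


# Greenfield-style case: the policy allows self-service mutation.
greenfield = BucketRequest(Bucket(mutation_policy="Allow"))
greenfield.spec["anonymousAccess"] = True
reconcile(greenfield)

# Brownfield-style case: the controller does not respect the change.
brownfield = BucketRequest(Bucket(mutation_policy="Deny"))
brownfield.spec["anonymousAccess"] = True
reconcile(brownfield)
```

The "source of truth" problem in the transcript shows up the moment two such requests bound to the same Bucket disagree: with the policy set to allow, last-writer-wins is the only behavior this sketch gives you, which is exactly the grossness being objected to.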
F: Me as well. I mean, Todd, am I capturing the conversation properly?

D: Yeah, I think those were the two takeaways I had from the conversation with him. One was around...

D: ...how access control is going to work, and the whole question of having application developers have control versus requesting permissions from some cluster admin. And the second was this approach of allowing that by shifting these objects into the namespace and making it more application-centric rather than...
E: ...a cloning controller that allows you to make your request to get a copy of a snapshot, or make a... and then the controller could change the deletion policy as part of that operation, or give you a way to make changes. I mean, it's just a matter of writing another controller to make these changes, and then that controller becomes your enforcement point. You don't want to always have humans involved; it's sort of a fallback while we don't have a controller. But I guess I...

E: Yeah, but my argument would be: you address the greenfield case by saying that the ones created through the greenfield path, where I create a bucket request and it gets provisioned, default to the deletion policy and the mutation policy where, if you change your copy of the object, those changes will be reflected...

E: ...for you. And in the long run you would never need a human; it's just that you have to have the control. And so we keep the non-namespaced object out of the ordinary user's vision, so that a controller or a human has to do it, but that doesn't mean it always has to be a human.

E: It's just a controller we haven't written yet, I think.
A: Yeah, so one thing: it would be great if Tim could join us on the call. I mean, this conversation should be happening either on GitHub or on the calls.

A: Yeah, because... I actually asked him to, over email, because it would be good if everyone was present during this. Again, I want to set the context: do we want to support mutation?

D: Can we be a little bit more crisp on what mutation means, specifically what type of mutation we would allow?
E: So what we were talking about is... Sid, you had a slide that listed a whole bunch of potential first-class features. Yeah.

E: ...point, and what we had talked about earlier on was that all of these things can be opaque parameters for now, at the COSI RPC layer. It's just opaque parameters until multiple vendors get together and say, we want to support a common version of feature X, and then we would promote that to a top-level field with a crisp definition of what it meant, when it was...

E: ...you know, with all the things you would want out of a COSI spec for that particular feature. And then, once you did that, it would become a first-class object on the Kubernetes side too. And at that point the question is: well, if it's reflected in the object and somebody changes it, should that change get reflected back into the COSI driver by some controller? And the logical answer, if you go through all these things, is yeah, probably. Or, I mean, maybe there's something where it would be clear...
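The "opaque parameters now, promote later" idea can be sketched like this. The `versioning` field and the vendor parameter key are made-up examples for illustration, not actual COSI fields: the point is only that promoted features get a typed, spec-defined field, while everything else travels as an uninterpreted string map.

```python
# Sketch of opaque-vs-promoted bucket parameters. Names here are
# illustrative assumptions, not the COSI API.

from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class BucketCreateRequest:
    name: str
    # A promoted, first-class feature with a crisp definition that
    # multiple vendors agreed to support:
    versioning: Optional[bool] = None
    # Everything else stays opaque to Kubernetes and COSI, passed
    # through to the driver uninterpreted:
    parameters: Dict[str, str] = field(default_factory=dict)


def to_driver_call(req: BucketCreateRequest) -> Dict[str, str]:
    """Flatten a request the way a driver might receive it."""
    call = dict(req.parameters)         # passed through as-is
    if req.versioning is not None:      # interpreted per the spec
        call["versioning"] = str(req.versioning).lower()
    return call


req = BucketCreateRequest(
    name="logs",
    versioning=True,
    parameters={"vendor.example.com/storage-tier": "cold"},
)
call = to_driver_call(req)
```

Promotion is then just moving a key out of `parameters` and into a typed field once its semantics are agreed on, which is why nothing breaks for drivers that only ever saw the opaque form.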
A: Yeah, so actually I looked through these features on all three operators. All three have every single feature. Initially I thought GCS doesn't have all of them, but it has all of them, and I couldn't find one that was on any one cloud that wasn't on the others.

A: I was the one who gave him that use case. The exact use case I told them was: let's say this bucket is going to serve as a static website for some application. Until the website is ready, or all the different pages are filled in, you want to keep it private; but when you're ready to launch it, you make it public. That was kind of the use case I explained.
G: But why (sorry for jumping in), but why do you think CORS, lifecycle, logging... all of these are not user configurations? I believe that the workload owner is typically the one configuring these, not an admin of a platform.

A: So we have to go back to understanding what we decided would be a user versus what we decided would be an admin. If we define the user as an application developer who wants to consume a bucket, then they're not going to worry about these infrastructure-level concerns, things like: do we enable access logging on the bucket side, or do we enable versioning on the objects?

A: Those are concerns that would be addressed by the infrastructure admin, whoever is responsible for ensuring that there is data durability and security and things like that.
E: But unless the features result in an observable change from the workload's perspective, they don't matter to Kubernetes, right? Because the job of Kubernetes is to take your request, satisfy it, and then signal to the workload that some additional capability is present. But if the workload doesn't care, or can't know, there's no point in making anything other than an opaque parameter.

E: Right, and I agree with that; I think it should be in yet another opaque field. But if you were to take that example and say, yes, it is a first-class field and yes, you can change it, then the idea would be: if your copy of the bucket request was bound to a Bucket that had the mutation policy set to true, then, when you changed your bucket request object's anonymous access from false to true, or whatever, the controller would respect that.
D: So basically you kind of get both. By default, the cluster admin gets to decide everything if they set mutation to false, and they can optionally choose to allow the application developer to mutate by setting a field. And if they set that field, then this type of property could be mutated.
A: I think, actually, even more importantly, the KEP doesn't do a good enough job of explaining it well, and that is my responsibility, so I'm actually working with Jeff to improve on that. But I would still say: if any of you can also request them to join us on our calls, that will make a big difference, Saad or Andrew or whoever.
A: Okay, if that's the case, I think for now we should stick with how we've been designing it, unless, like we're suggesting, we have a compelling use case for why bucket mutation is important.

A: Yeah, no, of course. Again, Tim is just getting into this now; he's probably looked at this only a few times. I would be more concerned if he understood all of the decisions we made and then said something needs to be changed. I don't think he's spent enough time yet, just because of how much there is to look at, for us to be actually concerned just yet. Again, I think the problem is he isn't...

A: ...he hasn't been fully informed, and one of the reasons for that is, of course, the KEP; the other is... yeah, we should follow up with him and have him join a few of our calls.
A: Hold on. So, as of right now, I still can't see a strong use case where a user would have to mutate the bucket. I'm not even saying we punt on it; I'm saying we design it this way, where the user does not change these things.
E: Yeah, I think you can make a strong argument that all those bits you can twiddle are things you should know at bucket creation time. Either you want the feature on or you want it off. If you set it one way and you change your mind, it's totally reasonable to go back to the beginning and create a new bucket, because this is a world...

G: ...might not live forever. They might live as long as your workload. But the workload is living, right? A live workload. And then how do you mutate any of these configurations? That's the only question. So you might say, yeah, let's bypass Kubernetes there, it's fine.
E: Like, you're going to rewrite your code, push out a new version, start a new set of pods and a new set of everything, and go on with, you know, version two. And it seems reasonable to me to say that, at the moment you create the bucket, you know what that version of the workflow wanted, so it can request what it needs. And then, if you change your mind, you're going to write version three and start over again. And, like, yeah.
B: So an admin can change the reclaim policy on a PV today, right? Yes, yeah. And so that's an interesting example, because if we have the retention policy wrong in the bucket class, it's just a human error, and the bucket's been populated now, and the error is that it was set to delete. We don't have a way in COSI to change it, to, you know, save it. And so once teardown starts, that physical backend bucket will go away, when that wasn't desired.

E: But, I mean, you have a default, which is: it's going to do what the storage class says it's going to do. So, like, by default, most of the buckets you create through automation you want to destroy through automation. The reason the administrator can change the reclaim policy on PVs is that every now and then, you know, you change your mind. You say: oh wait, this is a special one.
D: Hey guys, Tim just joined, so if you guys wanna...

H: I apologize for being late. I somehow... I don't know if I forgot to put it on my calendar or something, but I apologize.
A: That's all right. So, Tim, we were talking about bucket mutation and ownership and self-service.

A: That's where we ended up questioning whether this is even an important...

H: ...feature. I know, that's a fair question, and certainly not supporting any form of mutation would make it easier. The question is really: can you get away with that?
A: So GCS has the rest of the fields, I figured out later on. The rest of the features, I mean. So does Azure. But this is an exhaustive list of features as listed in the AWS docs. And also, as admin... if we support these, right?
A: Yeah, I think you can... yeah, I think you can set two or three different kinds of policies anyway. So, coming back to this: on the left-hand side we have the list of features, and, you know, we define a user to be someone who is an application developer that wants to consume a bucket for their workload.

A: All of the concerns that are listed here are not application developer, or user-level, concerns. These are infrastructure-level concerns.
H: Is that right? I mean, let me be devil's advocate here. As an application developer, I want to make sure that even if my app gets compromised, you can't delete data from my bucket, because it's really important data. So I want to be able to set the retention policy, right? I'm going to put a 10-year retention policy on every object, and...

H: ...set that up front when you create the bucket? Yeah, I mean, not always. Like, you write your app, you create your bucket, you're doing your thing, and then you get a security audit and you go: oh, you know, we should probably be adding this thing. Or the cloud provider launches a new feature and you go: oh, that's useful, I want to turn that on. Right, but...
E: I'm saying you change your automation workflow to start requesting buckets with the new feature, and any existing buckets you just have to go fix. But the whole point of COSI is that we're creating and deleting buckets in an automated way, and you have to change your workflow to consume the new feature.

H: So that's an answer. I'm just trying to advocate, or rather trying to take the position of making sure that whatever answer you propose holds water with somebody who would potentially use this. I'm saying I've got workloads that I would probably use this for, and we've been through this process of: oh, maybe we should turn on the lifecycle feature; we didn't realize it was there before, but maybe we should turn it on.
E: I think the argument is that, for most of these, you should know at creation time whether you want them or not. And if it's something new, you have to change your workflow anyway to start taking advantage of it. And yeah, if you have a bunch of existing buckets, there is a problem to be solved, but that's not the common case. The common case is: you know what you want when you create the bucket.

H: I'm certainly not here to argue with your product-market fit, right? That is not my goal. My goal is really just to hold your feet to the fire and make sure that the questions I have are appropriately designed out or accommodated. Right, and if you say that that's just not what this does, and this API isn't for that...
A: The way... you know, if you're going to automate the creation of buckets per workload, you're not really looking at manual steps being involved at all, where an admin goes and, you know, configures whatever for the bucket. The bucket is really designed for that workload to consume, so when the workload goes away, we see the bucket as something that also goes away.

A: That's how COSI should be consumed; that's where we're coming from in terms of best practices, I would say.
G: I think immutability is a feature that simplifies a lot of things, for us, and for users as well in some cases. But in some other ways I think it really complicates life in the field...

G: ...you know, for somebody who's actually supporting this. So I tend to... I agree that it simplifies, but it doesn't solve the problem for others who really have to support that after it gets deployed.

G: The gap is very different. It's mainly about these capabilities that we're looking at right here, which you'd find in cloud platforms as configuration, and you're suggesting: let's keep it out of the loop for Kubernetes, since it's not part of an automated workload deployment.
E: Yeah, I find it instructive to look at how we handle volumes. Kubernetes knows almost nothing about the detailed workings of storage. It just knows it's block or file system, it's got a size, it's got a few access modes, and maybe you can take snapshots, and that's it. And that's good enough for the vast majority of what Kubernetes does with volumes. And then, for all of the special options, you just go to the storage controller to configure them. Kubernetes doesn't get in the way.
H: We don't need to map each feature to every other feature. The model of a PV... you know, if we try to model this the same way we model a PV and PV claim, that's one path.

A: The core difference, I would say: one is the fact that it doesn't talk the POSIX API, there are no edits; the other is that it's consumed over the network. That's all there is to buckets, really.
H: No, I mean, like, hypothetically... I would say that all the cloud provider bucket systems have a way of exposing a bucket publicly, right? There's this anonymous access thing. There is no equivalent for a volume. There's no way for me to take a file-system-mounted disk and say: hey, expose this to the internet and let anonymous people download stuff from it. I have to provide a server on top of it, right? So there are some pretty fundamental differences there.

H: But you don't need an ingress controller. I mean, we could model it as an ingress controller, but you don't need an ingress controller to do that, right? Like, I can just go to storage.googleapis.com and look at your anonymous buckets. I don't need something special to do that.
H: So we may run out of time... we will run out of time. The questions that I have were really all about: what is the role of COSI, and how do people use it? If you're saying that the purpose of COSI is to grant arbitrary users within... I'm going to find out how to say this right... to grant users within a cluster access to buckets that they do not own, so that even though they may have manifested the creation of it, they still don't own it...

A: Yeah, it goes back to that role of admin versus user. You would be an admin.
H: But I don't want to be an admin, I want to be a user, right? In my mind, the paragon of Kubernetes and cloud native is self-service, and if we build a system here that is lacking the ability to do self-service... and this is again all about scoping, right? It feels like it's missing the point.

E: So we have a good story for greenfield-style self-service, where I just want to create the bucket, consume it, and then delete it when I'm done with it. Are you saying that we don't have a good story for self-service for the brownfield use case of: hey, somebody else made this bucket and I just want to vend it into my namespace?
H: There's at least one policy thing, in the anonymous access, right, which is set on the bucket, which I, as a user, can't self-service mutate, which is frustrating. I'm using a bucket, but I don't own it, and even though I caused it to come into creation, I don't own it. And specifically, the COSI spec...

H: ...if I recall, said: if you want multiple people to be able to use a bucket, you create multiple Kubernetes bucket resources, capital-B Buckets, pointing to the same cloud bucket, or S3 bucket (I'm just going to use those terms), and that's brownfield, yeah. So now, in order for multiple people to use it, you have to go to the brownfield case. But we could write a controller...
E: ...later that automates that process, that does what today an administrator would have to do: basically make copies of the Bucket object and bind each to a namespaced bucket request in a different namespace. Like, that can all be automated with yet another controller, sure. But now you have a...

H: ...you have at least one policy field on the bucket that, in theory, could be different across the two different Kubernetes buckets, and you now have to decide which one actually is reconciled to the S3 bucket and which ones are ignored, right? Like, what stops me from creating two Kubernetes buckets that point to the same underlying bucket and setting different policies on them? Yeah.
A: We just kind of talked about that. We don't think anonymous access should even be a part of the spec as it is now; it needs to be taken out and made an opaque field. But to answer your question: which fields should be considered, which fields should be ignored? Yeah, that becomes tricky.

I guess I have a question also. I'm coming from the perspective of trying to integrate with this, with the library and Rook, and trying to consider what our users are going to be wanting to do. So in this case, there's also, you know... do I have the ability to set those kinds of permissions? Like, can I say: I don't want any other Kubernetes users (if I'm in a multi-tenant environment) to be able to claim my own bucket with write access? Or do I want them to be able to claim it, but only with read access? Yeah, you can still...
A: ...do it by setting the... now, we just said: okay, policy. You can set granular policies, based on whatever protocol it is, through the class.

A: Yeah, but another question I have is: were you talking about provisioning buckets without... like, an admin provisioning a bucket that will later be consumed by someone else, or maybe even used manually? Is that how we were thinking of this use...
I: ...case? No, I guess I'm imagining that I am, like, a user in just some Kubernetes cluster, and this Kubernetes cluster has many users; I'm merely one tenant of many. And, you know, as part of my workload, I want to create a Helm chart library, and I think that is a case where this is a thing that is long-lived. Like, I don't ever want this to go away. I don't have the option of just saying, you know...
A: ...in this domain. But I still don't get what you're mutating.

I: On that, I think Tim had a good example, like a security audit: if suddenly you're like, oh, I want to change some parameter about this. Or I have recently discovered: oh, I set this parameter wrong; this is in danger of being deleted if I mess something up; I want to change the policy to be retained.
E: I don't see why this is different from volumes. I mean, yeah, if the same thing happened with volumes, and you had some security audit that says, well, you forgot to set the security option on your volumes, you'd just go fix that by going to the storage controller today. You wouldn't expect Kubernetes to be able to go fix your audit policy on your PVCs.
H: Well, I mean, actually... if my time machine was working, I would probably go back in time and convince Saad not to implement persistent volumes as a non-namespaced resource, and instead make the whole thing namespaced. Because the whole PV/PVC binding stuff was very short-lived, and everybody moved to dynamic provisioning anyway, because it's better in pretty much every way. And so I would argue that the lack of self-service on PVs is a counter-indicator, not a pattern to be followed.
H: So I'm not sure of the answer to that. There's an ongoing discussion in SIG Architecture that I kicked off, largely in response to this conversation, but also to this pattern that has been occurring over and over again, about whether we can treat status as sort of the trusted subset of fields that are allowed to be written by controllers but not by users. Right, to do something here, you have a sort of request-and-grant pattern.
H
The PV/PVC claim models that; BucketRequest and Bucket model that. The question that is open is: could we model that in a single resource with spec and status, or maybe something else, in a trusted way? Because I think it would provide for simpler APIs. Like, wouldn't it be nice if there was just a single resource here that you created as a request, and the controller filled in the status, and now you can use your bucket? It's unclear yet what the resolution to that discussion will be, which goes to the, like...
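The single-resource idea being discussed — the user writes the spec as a request, and only a trusted controller fills in the status as the grant — can be sketched roughly as follows. This is a hypothetical illustration, not the actual COSI or Kubernetes API; the field names (`bucketID`, `deletionPolicy`, etc.) are invented for the example.

```python
# Hypothetical sketch of the one-resource "request and grant" pattern:
# the user writes spec; only a trusted controller writes status.

def make_bucket_request(name, namespace, deletion_policy="Retain"):
    """User side: create the request half of the resource."""
    return {
        "kind": "Bucket",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {"protocol": "s3", "deletionPolicy": deletion_policy},
        "status": {},  # empty until the controller reconciles
    }

def reconcile(bucket):
    """Controller side: provision the backing bucket, then fill in status."""
    bucket["status"] = {
        "bucketID": "bkt-" + bucket["metadata"]["name"],
        "ready": True,
    }
    return bucket

b = reconcile(make_bucket_request("media", "team-a"))
print(b["status"]["ready"])  # → True
```

The trust question raised in the discussion is exactly whether users can be prevented from writing the `status` half of such a resource, which the API server enforces today for built-in types via the status subresource.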
H
Should it be one resource or two? But the larger question here that I have, that really nags at me, is: should bucket be a namespaced resource that indicates some form of ownership? Or, you know, it's your API, not mine, right? So I'm going to be really careful here, because at the end of the day, SIG Storage has to deal with the ramifications of whatever design decisions are made.
H
And so, if we say, this is what it's for, this is the box, and if your use case doesn't fit in the box then you don't use this API because it's not for you, then that's fine. I'm not gonna, well, I'll discuss whether the box is the right shape or not. But if you have use cases that indicate that that's the right box, then cool. I lost the train of where I was going with that.
H
My point being that I don't want to try to dictate use cases that aren't realistic. But I do look at this from a user's point of view, and I find it limiting.
E
I just wanted to mention that we've mirrored this PVC/PV two-way bind with snapshots, and the question you raised about, you know, what if I want to share my bucket across namespaces, is a problem that we already have in the snapshot world. You know, I am writing a controller to enable people to take snapshots of volumes and then make replicas of those snapshots in other namespaces, so other people can clone them. That runs into all of the same problems you're talking about, and that code is GA.
H
E
I will be very interested, if it turns out that the two-object, namespaced/non-namespaced, two-way bind is the wrong decision, in what we can possibly do about all the places where it's the decision we made, because I don't...
H
I mean, even if it stayed two resources, but it was two resources that had a clearer concept of ownership, you know, it might be better. And to be clear, I hadn't thought about the snapshots use case, but it's just one more for this pattern, right? And I will say, we use this pattern...
H
A
lot
yeah
well
and
I
and
I'll
say
like
there's
a
the
other
api
that
I've
been
involved
with
this
year,
which
is
the
gateway
design
in
in
the
network,
went
in
the
other
direction
and
said:
look
we're
not
going
to
model
this
as
a
name
as
a
rooted
resource.
H
The
whole
stack
is
it
well
accept
classes,
the
whole
stack
is
name
spaced
and
the
people
who
who
create
the
gateways
really
do
own
them,
and
it
really
is
designed
to
be
self-service,
but
they
have
some
delegation
model
that
allows
cross
namespace
linking
to
to
solve
the
brownfield
cases
and
it
it's
a
little
early
to
be
super
confident
in
it.
But
it
feels
pretty
good.
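The delegation model described here — everything namespaced and self-service, with cross-namespace links honored only when the target side explicitly allows them — can be sketched like this. The names and the grant structure are invented for illustration; this is not the real Gateway API.

```python
# Hypothetical sketch of namespace-scoped delegation: a cross-namespace
# reference is honored only if the target namespace has explicitly
# granted references from the source namespace.

# Set of (target namespace, allowed source namespace) pairs,
# as if each grant were an object created in the target namespace.
grants = {("infra", "team-a"), ("infra", "team-b")}

def reference_allowed(source_ns, target_ns):
    """Same-namespace references are always fine; cross-namespace
    references need an explicit grant in the target namespace."""
    return source_ns == target_ns or (target_ns, source_ns) in grants

print(reference_allowed("team-a", "infra"))  # → True
print(reference_allowed("team-c", "infra"))  # → False
```

The key property is that the owner of the target namespace, not the referrer, controls whether the link is valid, which is what keeps the model self-service without making it a free-for-all.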
H
And I just realized the time. I'm happy to keep going on this. What I don't want to do is, like, pigeon programming, right? Swoop in and take a crap all over everything. But I do want to encourage folks to think about the value of self-service, and if we decide that it's not valuable enough to accommodate, then that's an explicit decision that you can take, right?
E
What we should do is take a look at a counter-example, one that's been done, as you say, all within the namespace, all self-service, and take a very serious look and see: can it address all of our needs? Because I have this sneaking feeling, that I can't put into words, that there's something about our namespaced/non-namespaced dual-object model, that there's a reason for it, that we would lose something valuable if we put everything in the namespace.
H
So one thing I can do, I mean, you can go off and do that, but I would be happy to ask, in fact I've already asked, the networking folks if, if it came to it, they would be willing to show up at one of these meetings and just show you the Gateway API and talk about how it could possibly map to this. If that's interesting. That's interesting to me. What's...
H
Or
it
is
alpha,
and
now
I
believe
or
will
the
alpha
in
I
mean
it's
not
in
21,
but
it's
aligned
with
okay,
so
yeah
I'd
be
happy
to
have
them
show
up.
If
saad
or
somebody
wants
to
ping
me
afterwards,
I
can
connect
the
dots
yep
we'll
do
and-
and
it
might
you
know,
might
be
a
thought
experiment
that
we
do
we
model
it
out,
and
we
say
you
know
what
bucket
doesn't
hold
as
well
as
tim
thought
and
like
now
that
we
thought
about
it.
A
Yeah, I think we should consider it. I don't entirely know how it works, but it's definitely worth considering.
C
H
Okay, yeah, right. Unfortunately, I have to drop off. Saad, catch up with me afterwards, and, well, you know, you can just ping Bowie and Mark, and they've already offered to show you what they've been doing. Yep, sounds good. Perfect. All right, and if we want to have a follow-up on this, I'm happy to attend again. Yeah. Sorry, sorry.
H
If,
if
it
works
for
you
will
be
meeting.
A
A
Thank you, thank you, thanks, thanks. Yeah, I think we can conclude here. Talk to you all on Monday. Sounds good.