Description
Kubernetes Storage Special-Interest-Group (SIG) Object Bucket API Design Meeting - 02 September 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A: So, all right, so I think last week we were still talking about allowed namespaces, and we said that allowed namespaces, or the concept of how we'll share buckets... as of last week, the resolution was that for alpha we don't need to do anything for allowed namespaces or bucket sharing, because it's...

B: Before you get into technical details, I think we need to sort out some other things. I know that we are kind of out of tree, but in general... I think there are some deadlines for all the other KEPs. Like, today is the day for you to get the PRR section filled out; I just noticed that you don't have that.
B: No, no, there's one file. You should... yeah, today. You should do that, just to get everything added there, and ping the person, whoever you add as your PRR reviewer. At least... this is something they're going to check, right? Another thing is that I think the release team has been inconsistent in how they label the type of KEPs, because last time I asked them, there were some discussions, and there was one person in that thread...

B: ...saying, oh yeah, it's out of tree, but it's tracked, so you still get the benefit of the blog or something. And this time I double-checked with the person who is the new release lead, and he was like, if you're not following those deadlines, then basically you don't even get any of those, like, blogs. Which is not what I heard last time, I just...
B: I already went back and forth with him, and so I think it's no good. But I don't think we can actually meet the deadline, right? Because it's really next Thursday that you actually have to get the KEP merged, and we still haven't gotten Tim to... did you get him to respond?

A: I email him every day, just...

A: No response. So that's the biggest bottleneck that's there right now: it is just getting a review. Yeah. Same as last time, so...
B: I have added the API review labels, at least, now. This is okay, I mean, because they do have people who are tracking all of those KEPs that require API review. So I see Jordan actually added this one to their table; they have this project tracking, you know, project board, basically. So this, at least, is there. Yeah... did Tim...

C: Yeah, I mean, I worry that, maybe having read it once and maybe having a bad taste in his mouth, he doesn't want to read it again. And what can we do to overcome that? Maybe we need someone else that he trusts to give it a look and tell him it looks a lot better, because if he's like...
B: I wonder if he can actually help, Mauricio, not sure. Oh...

B: Because you work at Google, so you have some internal connections. We're actually just trying to get Tim to review this KEP again, but we couldn't get him to respond. I'm not sure if you have any internal communication channel to kind of get his attention.

B: So, well, the API reviewer is mainly Tim. But if you can ping... if you find Saad, and Saad can go ping Tim, that'll be fine too, if you know you can find Saad. Because I did...
B: I did ask that last time, in our last week's SIG Storage meeting. I asked him if he can ping Tim, and he said yes, but of course he's busy. If you can help inside and ask Saad to continue to review this KEP... do you know this new KEP, Sid? Can you add that...

A: Sure. So, to Ben: one of the things he said... I think that's possibly true, where Tim feels like this thing is off the mark, so he probably thinks there's a lot that needs to be changed and he needs to review it from scratch. That's one of the reasons we rewrote it from scratch. The other thing is, we didn't even get a review last time, even before he read it for the first time.
B: Who is kind of, like, looking... I mean, who can help me in this area, like, help review the KEP? Or, what do you think, can somebody else on your team review it?

C: Yes, yeah. Typically, the ones that review KEPs...

C: But we need to send the message that we're effectively being pocket-vetoed by just, you know, being ignored. And if that's what they want, that's fine, but I would be very surprised if that's what they want. So maybe there's miscommunication and everyone expects someone else to do it, I don't know. But just the fact that you haven't heard back... not even a reason why he hasn't read it, or an "I'll read it next week."
A: Yeah, no, you're right, Ben. We need to clearly communicate that, you know, we're being ignored, and...

A: So the thing is: yeah, that's an issue, but after we do everything right, if we still get ignored, then, you know, the ball is in their court.

A: At that point we can easily raise an issue, and I think we're already there, more or less. But if there isn't even a review by the end of this release cycle, then we can really raise some alarms, saying, hey, what's going on? Especially because we would have done all the things right, and they dropped the ball. And then, if they can refer us to a different reviewer, I would be fine with that, because I understand Tim has already gone through it once, and it'll be easier if he reviewed it now, because he has some context. But if he doesn't even review, wouldn't that be worse than getting another reviewer who starts from scratch?
B: Well, the problem is, I don't really know, you know, any other reviewers. The only ones that I know have been reviewing... Jordan is another one, and then I think Clayton sometimes chimes in, but I mean, for them it's also brand new.

B: If you want to... I don't know if Jordan... you know, he might. At this point they may all be kind of overloaded.

A: Oh, no, I'm saying even if we end up getting the review from Jordan, or asking him to look at it, we want to make sure we let them know that that's what we're doing, because... okay.
B: Hopefully, what you can do is, since he just actually added this to the tracking board and you're just on the KEP: can you just ping Jordan, just to see if he can take a look, just on that KEP?

E: How are you doing... when is the date, dude? I'm just writing it down. Well...
B: Last time, what we did was... because last time we also missed the deadline, right? So we asked the release team. Actually, Tim was the one who came up with this; he said, well, everything you do is out of tree, can we just... you know, we don't have to follow that. So we actually asked the release team, and they said, yeah, you can do this: tracked, but out of tree, and there is a category for that. This way, you know, it's still tracked, so you can...

B: You don't have to follow the deadline, but you may still get the benefit of, like, writing a blog or something, for some attention. So we said, oh, that's perfect for us, right? But the funny thing is, this time, when there was a new release lead, I'm asking again, and he's like, oh, if you want us to track it, then you still have to follow all the deadlines; otherwise, you don't get blogs.
B: He doesn't say we don't... he would just say we don't have to follow the deadline. So, if we don't want to do the API review, that means we need to change the API group. He said that, right? And Saad was saying no, right? So we still want this API to be a SIG Storage API. If we change this to any other group, like our own...

B: Basically, just saying that the review... okay, so this is the thing. If we want this API to be in the SIG Storage API group, then we must have an API reviewer review it. What he was saying is just that, okay, since you're out of tree, you don't have to follow the KEP merge deadline, so this way it gives us more time.
E: Why does it need to be part of the SIG Storage API group?

B: That's because we want this one to be supported by SIG Storage; that's what Saad said. Otherwise, we can do whatever we want, but then what's the reason for us to be a subproject under SIG Storage, right? This was previously using a different repo, right?
B: Yeah, I think, yeah, out of tree is fine. I mean, we can do... you know, forget about what the release lead is saying. I think we can always submit a blog ourselves, because you see that people post blogs in the Kubernetes blog area anyway. So it may not be, like, part of the 1.23 release; if they don't allow us, we can still try. If they don't want us, we can always submit a blog there, so it will still be a Kubernetes blog.

B: So I think we don't have to, you know, get stressed on that one. The bottom line is we need to ping Tim again first.
A: Yep, okay, yeah.

A: Hey, so, Xing, there's another question I have. So, on the KEP, Jordan has written... I'll quickly show you. Yeah, Jordan has written...

B: So this way, I think the API machinery has this tracking board, so they know what KEPs are being reviewed. Yeah, and actually, if you look at it, most of them have multiple people reviewing, so yeah, it's fine; we'll have both Jordan and Tim there.
A: That's John, right? That's not Jordan! Okay, cool, good to know. Yeah, we're in the bucket. Okay, cool, all right. So, going back to the KEP: did anyone else get a chance to review the KEP?

A: Sounds good, thank you, Luis. Ben, did you also get a chance to look at it? ...No, I had it on my list and I haven't done it yet. ...Yeah, it'll be good...
A: If you can, you know, get a word in... it'll also automatically subscribe you to any questions that come up there, and it's generally going to be good if more eyes take a look at it. Jeff has done a round of detailed reviews, and we feel good about it. I still need to make the changes that Xing mentioned; I'm going to be able to do that today, right after the meeting. But anything else...

A: If anyone wants to make sure it looks good, you should, you know... leave a review sooner rather than later, because the deadline is fast approaching. It would be much appreciated.
A: All right, all right. And, Xing, just as a resolution, just to finish up the conversation we were having: we've assigned Jordan, that's one thing. Do we still want to set up the meeting with Saad and Tim and go over the KEP with them?

B: I think we should actually do that, because, you know, we don't know whether Jordan has done it. Right now... I think he must also be very busy at this time, with one week to go.
A: Okay, thank you. Thank you. All right, okay, so I think that concludes that conversation. Again, that's the highest priority right now, and, like I was saying earlier, the biggest bottleneck is just getting a review. But yeah, we have a plan now. Okay, so let's get into the technical aspects of where we left off and what's next. We left off last week saying that we don't really need to support bucket sharing across namespaces in alpha, because we can do it manually.
C: But right at the end of the call, an issue did come up, which was that our plan for brownfield could potentially conflict with the plan for bucket sharing. So yeah, we had that whole alternative for, like, how you could share a bucket.

C: You know, with either some controller automating the process, or someone doing it manually, or whatever. But then there was the question of, you know: for buckets that were created outside of COSI, and we want to start using them with COSI, what is the process? And if we stick with the old plan of, like, well, you just create a... you know...
C: We need to also have an opinion on what the brownfield story is, just to make sure that it is compatible with our eventual story for bucket sharing. And I think, based on the conversation last week, we need to probably say no allowed namespaces, and no BARs pointing directly at Bs, because those two things sort of imply each other.
C: Right, but if you want to import a bucket and then you want to have it shared across multiple namespaces, the only way to do that under the new scheme is to have lots of Bs and lots of BRs that all point to the same bucket, or to have one B and one BR but then create all the BARs in one namespace and transfer them to the other namespaces. And I'd be surprised if anyone's happy with either of those options.

C: So I wanted to talk through them and say: I don't see a better way than those two options, but maybe we should just come to peace with them being the only way.
C: I mean, the bigger problem is, you know, COSI automates creation of buckets, and that's great, but there are always going to be situations where you either have buckets that were created outside of COSI, just because they pre-exist, or, you know, you want to migrate data in from some bucket that was created by some other cluster, or you want to share a bucket across multiple clusters. I mean, there's a long list of reasons why we can't assume that every bucket was created by COSI in the local cluster, right?
C: So there has to be a process by which you can take a bucket that simply exists and pull it into the cluster, such that COSI can begin to work with it. In particular, so it can create new BARs, or take existing BAs and bind them to pods, so the pods can access the bucket somehow. And because of the one-to-one binding scheme we've come up with, any sort of importation of a bucket means...

C: ...you also need a BR, you know, to point to it, because that's the thing that the BAR is going to point to to create new accesses. Unless you also want to force people to, sort of, manually create BAs, and not rely on COSI to do that either, which is really ugly.
C: So, fundamentally: we see these things moving across clusters, we see these things coming in from outside the cluster, in addition to the regular, simple use case where, you know, the entire lifecycle is managed by COSI. And so it's just a question of, how do you make that happen?
C: Well, you know, it's manual, right? You can create the Bucket yourself, you can bind it to the BR yourself; we're not preventing you from doing that, but we're also not making it easy. And that's a surprisingly useful sort of first step: just to sort of punt and say, well, you can do it, but we're not helping you. And I guess that's kind of what I'm suggesting we might want to do. But I want to make sure everyone understands that there are downsides that come with that, and that we're at peace with those downsides. You know, we're saying we're not going to prevent it, but it is going to be manual, and you will end up in situations where you'll have, like, duplicate copies of some objects, and you have to deal with the side effects of that.
C: Yeah, yeah, this is basically walking the same path as PV/PVC, where, you know, if you have a volume that exists outside the cluster, you can create a PV manually. If you want multiple namespaces to have access to that volume, you can make two copies of the PV and have two PVCs, and figure out the ugliness that that's going to cause. You know, nothing stops you from doing those things; you just have to own the complexity.
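The PV/PVC analogy above can be made concrete with a toy model. The following is a minimal Python sketch, not the real Kubernetes API; the class names, fields, and the `"Retain"` string are stand-ins for the concepts discussed: two manually created PV copies referencing one backing volume, one claim bound to each copy, and a retain-style reclaim policy that keeps the backing storage alive when one claim goes away.

```python
# Toy model of the manual PV/PVC-style brownfield import discussed above.
# All names loosely mirror the Kubernetes concepts; this is not the real API.

class BackingVolume:
    def __init__(self, name):
        self.name = name
        self.deleted = False  # set when storage is reclaimed

class PV:
    def __init__(self, name, backing, reclaim_policy="Retain"):
        self.name = name
        self.backing = backing
        self.reclaim_policy = reclaim_policy
        self.bound_pvc = None

class PVC:
    def __init__(self, name, namespace):
        self.name = name
        self.namespace = namespace
        self.bound_pv = None

def bind(pv, pvc):
    # One-to-one binding: a PV serves exactly one PVC.
    assert pv.bound_pvc is None and pvc.bound_pv is None
    pv.bound_pvc, pvc.bound_pv = pvc, pv

def delete_pvc(pvc, pvs):
    # Releasing a claim removes its PV copy, but a retain-style
    # policy leaves the backing volume untouched.
    pv = pvc.bound_pv
    pvs.remove(pv)
    if pv.reclaim_policy != "Retain":
        pv.backing.deleted = True

# Brownfield share: one NFS volume, two namespaces, two PV copies.
nfs = BackingVolume("shared-nfs")
pv_a, pv_b = PV("pv-a", nfs), PV("pv-b", nfs)
pvc_a, pvc_b = PVC("claim", "ns-a"), PVC("claim", "ns-b")
pvs = [pv_a, pv_b]
bind(pv_a, pvc_a)
bind(pv_b, pvc_b)

delete_pvc(pvc_a, pvs)  # ns-a walks away; ns-b keeps its access
```

Deleting `pvc_a` removes its PV copy, but because the reclaim policy is retain-like, the shared backing volume survives for the other namespace. That is the "own the complexity" trade-off described above.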
E: I think... again, I'm coming in blind today, but if I was reading a document that described how to do that, for alpha I would be okay with that.
C: Yeah, that's why I'm saying, like, I think the manual approach is fine, as long as we just explain how it works and, you know, say, sorry, it's not easier to do; at least it's deterministic and simple. It's simple in the sense that we don't have a bunch of extra machinery in our API to enable these more complex use cases. You know, the API is sort of bare bones, just gets you what you need, and then, you know, if you want to do imports, it's manual.
C: But... and I don't like that, but, like, I don't see another way. You mean the BARs binding to the B, right? I think it was Jeff that came up with that scheme.

C: I don't know how long ago, and it was very attractive, but it did force us to think about the security implications of that, and having some sort of allowed-namespaces field. And then everyone started hating the mechanics of that, when we started thinking about, well, how do you actually deal with that allowed-namespaces field over time? And so the new proposal is to get rid of all of that.
C
But
that
also
means
getting
rid
of
the
clever
scheme
or
the
bar
can
point
directly
at
the
bee
and
enables
just
that
simpler
mode
of
sharing.
So
I
want
to
sort
of
put
that
out
there
and
say:
are
we
okay
with
not
doing
that
and
going
back
to
the
even
older
vision
of
you?
You
just
have
multiple
vr's
and
multiple
buckets
that
all
point
to
the
same
thing.
If
you
want
multiple
namespaces
to
to
have
access
to
the
bucket
the
ability
to
create
new
new
accesses
to
a
bucket
like
bars.
D: And we don't want to go back to multiple BRs pointing to the same B. There's all kinds of... that locks you in so much on expanding your design for, say, mutation. You have to do weird things to decide who actually owned the bucket.

D: But then, what do you think? If multiple Bs pointed at the same backend bucket, how do you now keep the Bs in sync? Do you have a main B, and the rest don't matter? I mean, how do you deal with those kinds of situations, then? Wait...
C: So, if you remember... first of all, even in the improved scheme you came up with, if you want to share a bucket across clusters, you will have multiple Bs that have the same contents, right? That is unavoidable; no one disputes that. So all I'm saying is, each is in a different cluster.

C: So all I'm saying is: if you want to have multiple namespaces have access to the same bucket, in the sense that they can create new BARs on that bucket, then you will have to do the same thing you do cross-cluster, across namespaces, which is create a second B and a second BR and bind them. And that would just be the way you do it.
C
So
it's
it's
not
like
it's
total
heresy
because,
like
like,
I
said
you
know
when
you
go
cross
cluster,
you
have
to
do
this
anyways.
So
it's
a
problem.
You
can't
avoid
solving
I'm
just
saying
like
let's
it
simplifies
the
single
cluster
cross
name
space
case
to
make
to
do
it
the
same
way,
even
though
it
so
so
jeff
to
get
your
specific
concern,
of
which
one
is
the
master.
C
If
we
set
it
up
so
that
you
can't
do
anything
to
change
them
other
than
delete
the
bucket,
then
it
only
becomes
a
question
of
who
has
the
power
to
delete
and
we
already
have
a
deletion
policy
that
determines
that.
So
as
long
as
we
don't
ever
get
into
a
situation
where
we
have
to
do
bucket
mutation,
this
is
okay
and
lucky.
Mutation
is
a
whole
other
subject
that
we've
tiptoed
away
from.
D: I don't think I even buy the argument. I understand the argument, but I don't agree with the fact that you have to have multiple Bs when you're talking about cross-topology versus within a cluster. I see those as being very different, and I think simplifying and making it more intuitive within the boundary of a cluster is still worth doing, even though, like, I agree with you that in multiple clusters there's going to be some resource...

D: ...cloning, object cloning, to support multiple clusters. We've had a couple of versions of KubeFed and other things trying to address that, and to me it doesn't feel like a strong argument to then jump to the idea that we can have multiple Bs within the same cluster.
C: Well, so I kind of share your feeling that it would be nice not to have to do this, but the caution I would add is: we can't stop someone from doing this. Once the APIs are there, you can make two Bs and two BRs that point to the same bucket, and we'll never know that that's what you've done. So, like...
A: I mean, yeah, but you can do that with a lot of resources. I'm trying to understand: do we want to try and solve that, make it so foolproof that, you know, we can survive such an event? Or do we want to say, you know, if you do that, it's the user's mistake at that point, and we can address it in the future? Is that really an important consideration for the design?
C
What
I
want
to
get
to
is
saying
it's
not
a
mistake.
It's
like,
if
you
do,
that,
we
will
follow
your
orders
and,
and
you
know,
and
you
own
the
consequences
of
anything
that
you
do
but
like
as
far
as
cozy
is
concerned,
it's
still
valid
because
I
I
want
it
to
be.
You
know
simple
and
deterministic
enough
that,
like
you
know,
if,
if
you
have
two
buckets
that
point
to
the
same
thing
and
and
two
and
they're
bound
to
two
brs
and
two
different
name
spaces
like
th,
the
the
logical
thing
happens.
C
When
you
get
to
the
cross
cluster
situations
like
this
is
just
what's
going
to
happen,
and,
and
I
I've
argued
strenuously
that
like
cross
cluster,
is
the
key
situation
where
object,
storage,
beats
file,
system
and
block
storage
right.
It's
it's
ideal
for
sharing
data
across
multiple
clusters,
and
so
like.
A: That's the wrong way to look at it, Ben, at least in terms of the deployments; they solve the problem very differently. So cross-cluster sharing doesn't happen the way, you know... if you're thinking of it that way: they don't necessarily share the same bucket across clusters. It's still possible, technically, but generally, when they do this cross-cluster thing, it's with bucket replication across regions.

A: So what they do is they replicate a bucket, with eventual consistency, from one region to another, and it's a different bucket in each of those clusters, except they're kept in sync, somewhat, and each cluster talks to the geographically nearest bucket. That's the model, so...
A
So
I
mean
this
problem.
Let
me
put
it
this
way.
This
problem
would
also
exist.
If
you
know
two
different
clusters
wanted
to
use,
say:
nfs
mount
or
two
different
clusters
wanted
to
say
we
use.
You
know
amazon
load
balancers
when
we
have
this.
C
Yeah
yeah,
so
if
you
have,
if
you
have
an
nfs
volume,
and
you
want
to
sh
to
use
csi
to
bind
your
your
pods
to
it
like
you
need
to
create
two
different
pvs.
That
actually
point
to
the
same
thing.
C
And
it's
assumed
that
that
when
you
do
that,
you're
doing
things
like
setting
the
retention
policy
to
retain
and
not
delete
so
that
they
don't
accidentally
delete
the
storage
that
the
other
guy's
using
right.
Like
you
just
have
to
be
aware,
when
you
set
up
that
situation,
that
you
know
you
set
the
delete
policy
or
the
retain
policy
correctly.
To
avoid.
C: Right, that's what I'm saying: I don't see buckets as being different than that. Like, you have a bucket, you create a B in one cluster, you set the deletion policy to retain, and then you can bind it to a BR, so someone can create BARs and get access to it. And then you do the same thing in another cluster, and neither guy can delete the bucket.
C: But when they're... you know, if they want to stop using it, they can delete their BR, and the B will go away, and the actual bucket will hang around, which is exactly what you would want. So, like, that all feels fine to me. But when you get into, okay, now you have two different namespaces in the same cluster and you want to do the same thing... and then we say, well, okay, we're going to come up with some way to avoid having two BRs.
A: Or do you mean the situation where, if someone else creates a brownfield B... a static, brownfield B, where they go and create the Bucket API object by hand, and it's pointing to the same bucket? Are you...

A: And you're saying that, because the BR has to point to something... when it comes to this bucket-transfer kind of methodology, or having it point to something, or having some mechanism to say, from the namespace where the bucket was created, that this can be used somewhere else too, right? Like, let's say...
C
The
second
one,
which
is
it
must
be
bound
to
a
br
in
your
namespace,
is
the
one
I'm
advocating
for,
but
it
inevitably
means
that
each
one
is
only
usable
in
one
namespace,
and
so,
when
you
want
the
second
name
space,
you
gotta
create
another
copy
of
everything.
So
and-
and
I
I
agree
with
jeff-
that
that
is
horrible
but
but,
like
I
haven't
heard
the
thing:
that's
not
horrible.
Yet.
A
Yeah,
I
I
still
I
still
I'm
not
entirely.
You
know.
I
think
I
need
to
think
about
this
more,
but
I'm
not
entirely
sure
we
do
need
a
security
model.
What,
if,
what?
If
it's
okay
to
have
anyone
else,
just
use
the
bucket
any
other
namespace.
C: Well then, your security model is "all namespaces," right? That's a policy choice you've made, and that is a valid policy choice to make. You know, if we did go down the path of allowed namespaces, you should be able to put a wildcard in there and say "everyone." But I guarantee you, there are situations where someone will want to create a bucket that they want to be able to use to create BARs, and they will want to limit which namespaces can do that.
A
What
if
they
made
a
manual
process
around
it,
so,
let's
say
no
allowed
namespaces
and
and
for
for
brownfield,
buckets
or
regular
buckets
and
what,
if
he
said,
if
someone
else
wants
to
access
from
a
different
name,
space
go
through
an
approval
process
like
like
it
happens
with
the
certificate
signing
requests.
A: It's manual, and whichever namespace... if it was created through a BR, whichever namespace created it gets to approve it; or, if it was created as a brownfield bucket, then the admin gets to approve it.
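The CSR-style approval flow proposed here could look something like the toy model below. This is pure illustration; the type names and fields are hypothetical and not part of any proposed COSI API. Access requests from the owning namespace are implicitly approved, while requests from other namespaces stay pending until the owner approves them.

```python
# Toy model of the certificate-signing-request-style approval flow
# suggested above: the namespace that owns a bucket approves
# cross-namespace access requests.

class Bucket:
    def __init__(self, name, owner_namespace):
        self.name = name
        self.owner_namespace = owner_namespace

class AccessRequest:
    def __init__(self, bucket, requester_namespace):
        self.bucket = bucket
        self.requester_namespace = requester_namespace
        # Requests from the owning namespace are implicitly approved.
        self.approved = requester_namespace == bucket.owner_namespace

def approve(request, approver_namespace):
    # Only the namespace that owns the bucket may approve.
    if approver_namespace == request.bucket.owner_namespace:
        request.approved = True
    return request.approved

b = Bucket("shared-data", owner_namespace="team-a")
own = AccessRequest(b, "team-a")      # same namespace: auto-approved
foreign = AccessRequest(b, "team-b")  # other namespace: pending

denied = approve(foreign, "team-c")   # a non-owner cannot approve
granted = approve(foreign, "team-a")  # the owning namespace can
```

The design choice being debated is exactly who plays the approver role: the namespace that created the BR for dynamically provisioned buckets, or the cluster admin for brownfield ones.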
E: This sounds a lot like the PVC movement-across-namespaces style issue.

C: Again... so, Luis, last week we talked about, you know, being able to do bucket transfers, or bucket access transfers, across namespaces, basically copying that design, and everyone thought it seemed like a reasonably good future approach. Not something we would do for v1, but a mechanism that we would eventually use to enable that. So... but what I'm...
C: Which, I mean, could conceivably be kept secret, but it's just an object name, so it's not that secret. Anyone who knows it can mint new credentials on that bucket and use them in their namespace, and there is no limit on what they can do, right? With any BAC that exists, they can create a BAR, point it at the bucket, and gain that level of access to the bucket, merely by knowing the name of the Kubernetes object.
A: ...a security hole. But isn't that the case with nodes, say? Let's say we have a node with, say, a hostPath volume that's being managed by a certain particular admin service, and, you know, if anyone knows the name of that, they can schedule another pod that goes to the same exact node.
A: That's what it's getting to, okay. So the first question is: if we don't restrict something like that from happening, why do we have to restrict buckets from being shared? If someone wants to be a bad actor... you know, it's the cluster admin's responsibility to make sure that it's used correctly.
A
I
think
I
think
we
might
be
trying
to
solve
a
problem
that
is,
that
is
much
ahead
of
us.
I
I
don't
know
if
we
should
start
by
designing
an
access
model
inherent
in
in
how
buckets
are
accessed
like
when
a
br
points
to
be.
I
think
it
should
just
be
allowed.
What
would
you
what
do
you
think.
D
About
that,
and
can
I
just
add
something
I
mean
you
there's
our
back
rules,
so
you
could
prevent
on
users
and
name
spaces
from
creating
a
bar,
and
it
sounds
like
then
what
you're
getting
is
we
want
something
more
granular
than
that
you
want
to
allow
someone
to
create
a
bar
in
some
instances,
but
not
create
a
bar
that
directly
references
a
b.
C: Right, you need a way to discriminate between... like, is he allowed to use COSI at all? You're right, you can use RBAC to prevent someone from using COSI at all and just say, you may not create BARs, period. But the moment you let them create BARs, they're going to be able to do this, if they know the bucket name. And so you'd like to be able to distinguish between the user that is all-powerful and can create BRs that point to anything, versus the user...
D: Sid and I had a discussion earlier this week, and I learned something from his research and experience. Sid, you go ahead and add to it, but... most of the users that are creating BRs and BARs are going to be devops, not the app writer, and the devops person is a more trusted user. So, do you want...
A
Yeah,
so
so
at
least
with
minio-
and
you
know,
I
work
with
a
whole
bunch
of
customers
and
and
overwhelmingly
the
pattern,
is
the
application
developer
has
no
insight
into
how
even
these
kubernetes
resources
are
made
or
managed,
and
it's
the
devops
person
or
the
sysadmin
that
that
brings
this
together.
The
application
and
the
resources
it
needs
so
where
we
are
getting
at
is
the
devops
person
is
or
the
sysadmin
is,
is
more
trustworthy
than
than
an
application
by
itself.
A
So
so,
even
if
an
application
is,
you
know,
potentially
harmful
or
a
bad
actor
they're,
not
going
to
directly
request
these
resources.
It's
going
to
be
provisioned
by
the
admin.
F: I don't know, but I think what this comes down to is that you're saying it's an admin. So if it's an admin, you can create the buckets anyway; we're not talking about self-service anyway. I think the entire thing is the tension between self-service for users, not admins and devops.
A
Yeah
so
one
one
thing
to
add:
there
guys
we're
saying
users
should
should
you
know
the
devops
should
should
be
able
to
just
use
a
bucket
if,
if
needed,
we
don't
want
to
put
an
access
model
in
there.
F: You have to create more objects to get to the same point. But I think what I'm hearing from Ben is that he prefers to, you know, lose the self-service capability, where every tenant, every user of a namespace, can really get self-service of a bucket... you know, of a connection to a bucket... and keep the COSI protocol, the COSI APIs, more portable, right? So that you don't have to have these cases where you list namespaces within your Bs, or things around that, where you have some references out to the external buckets.

F: Somehow we also had all these suggestions where you just reference the external bucket directly from the requesting side, like the BAR or BR or whatever. So I think it's like a self-service versus portability thing; that's what I'm hearing. How do you look at this? What's the portability...
C
The
the
use
case
that
that
I
have
in
mind
is
kind
of
a
blend
where
the
you
know
you,
you
have
a
an
admin
driven
sort
of
bucket
import
from
outside
the
cluster
right,
because
you
know,
I
guess
creating
a
the
cluster
resource
requires
elevated
privileges,
but
then,
once
the
bucket
is
known,
you
want
to
go
to
self-service
with
in
terms
of
who
has
access
and
who
can
connect
to
it.
And
all
of
that.
That's
what
the
bar
pointing
to
the
b
sort
of
indicates.
Is
someone
had
to
create
that
b?
C: I don't want to keep burning up time on this. So, like, if everyone is of the opinion that the security doesn't matter, and we just want to move on, I will stop harping on it. But I feel strongly that there is a security issue, and it has to be addressed eventually, and I don't know what the answer is.
A
So, okay, the security problem you're defining: what Guy is saying is that if you're going to build a multi-tenant system, this is not it. But the security problem that we're really looking at is that there can be bad actors that we don't know of, in other namespaces, that can mess up my data, right? That's the security problem we're trying to solve. So we only want trusted people to have access to the bucket, correct?
C
But yeah, so the nightmare scenario is: Alice creates a BR and goes through the dynamic provisioning process of getting a B, and then she starts using the bucket. But then Bob learns the name of Alice's B, creates his own BAR, and gets access to Alice's bucket, and then begins deleting or modifying all the data in it. And all he needed to perform that attack was to know the name of the bucket, and that's pretty creepy, right?
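[Editor's note: the scenario described above can be sketched as a toy model. The names, types, and fields below are purely illustrative, not the actual COSI schema; this only shows why granting access on the Bucket name alone is unsafe.]

```python
# Toy model of a provisioner that grants access based only on the name
# of a Bucket (B) carried in a BucketAccessRequest (BAR).
# All names and fields are illustrative, not the real COSI schema.

buckets = {"alices-bucket": {"owner_namespace": "ns-alice"}}

def grant_access_naive(requesting_namespace: str, bucket_name: str) -> bool:
    """Grant credentials whenever the named Bucket exists.
    There is no check of who is asking -- this is the flaw."""
    return bucket_name in buckets

# Alice binds to her own bucket: fine.
assert grant_access_naive("ns-alice", "alices-bucket")

# Bob, from another namespace, only needs to know the bucket's name:
assert grant_access_naive("ns-bob", "alices-bucket")  # the attack succeeds
```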
A
When you put it that way, yeah. So let's go back to that one other question that we had, which is this concept of multiple Buckets being unavoidable for the same backend bucket. Isn't that kind of an exceptional situation, in that it can only happen if someone goes and creates a B that points to an existing backend bucket? Is that right?
C
Well, or you need some explicit security mechanism like allowed namespaces. That's where we started a couple of months ago, right? We came up with that scheme because we were trying to avoid the multiple buckets, and I pointed out there's a security problem: somewhere, someone needs to know what is allowed and what's not allowed. Allowed namespaces was the best we could come up with, but gradually we started to like it less and less, and so we sort of tried again with another approach to say:
C
Maybe, you know, we'd do bucket sharing a different way and get rid of allowed namespaces.
A
I see. Okay, we're almost out of time, but let's sit on it. It seems like we're back at square one, but I do understand the problem.
C
I mean, the nice thing about having multiple copies of objects and saying you can only use your namespace's copy is that the security mechanism is then that you have to have a bound object in your namespace, which is how everything else in Kubernetes works, right? A pod can only refer to a PVC that's in the same namespace. You can only refer to a ConfigMap that's in the same namespace.
C
So, given that that's how everything else works in Kubernetes, I feel like it makes sense to say a BAR can only point to a BR that's in the same namespace, and there's your security mechanism. But as long as BARs have the one-to-one binding, you inevitably end up with these multiple copies of the Bucket at the end of the day, when you want to share.
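[Editor's note: the same-namespace rule described above can be sketched as a simple admission check. Field names and the lookup structure here are hypothetical; the real COSI controller logic would differ.]

```python
# Sketch of a same-namespace admission check for BucketAccessRequests
# (BARs) that reference BucketRequests (BRs). Field names are
# illustrative, not the actual COSI types.

# BRs indexed by (namespace, name), the way a controller cache might be.
bucket_requests = {("ns-alice", "my-br"): {"bucket": "alices-bucket"}}

def validate_bar(bar: dict) -> bool:
    """Admit a BAR only if the BR it names exists in the BAR's own
    namespace -- mirroring how a pod may only reference a PVC or
    ConfigMap in its own namespace."""
    return (bar["namespace"], bar["bucketRequestName"]) in bucket_requests

# Alice's BAR references a BR in her own namespace: admitted.
assert validate_bar({"namespace": "ns-alice", "bucketRequestName": "my-br"})

# Bob names Alice's BR from another namespace: rejected, because the
# lookup is scoped to Bob's own namespace.
assert not validate_bar({"namespace": "ns-bob", "bucketRequestName": "my-br"})
```

The design trade-off discussed in the meeting follows directly: this check is a clean security boundary, but it forces each namespace to hold its own copy of the binding objects when a backend bucket is shared.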
D
A fair argument, but at the same time, user resources like pods point to cluster-wide resources like storage classes. Yes.
D
Yeah, it's possible. I would like to think more about this, you know, just some time to try. I think you bring up a really good concern, Ben, and I've never been a big fan of allowed namespaces, even though I helped promote that idea. It's never felt good to me, so yeah, I'm okay trying to dig deeper for a solution.
F
If that's what the provider wants to, and, you know, that controller would lose on-demand and things like that. So I don't see it as a problem to extend, right? I'm just not sure if what we should do now is spend a few more months on, you know...
F
So we can say no. I mean, it makes sense to say no, and that leads to another controller that would import, you know, copy the B, and that's it, and do whatever it wants to keep it in sync or not. Or we just keep it out of scope for COSI and say that COSI just makes sure that once you have your namespaced objects, your pods work with them and you can do a greenfield.
D
And then that sort of locks it out, and then we say we don't have sharing across namespaces for alpha. No, we do have it, but...
A
We don't like it. Okay, so we're out of time. Unfortunately, I have a hard stop today. Let's continue next week, Ben. I do understand the concern, and I think everyone here does. But for now, the proposal still says the BR points to the B for bucket sharing. We can always go and fix that and change it, even after alpha.
A
So let's keep going with what we have, because we don't have a resolution or consensus on what the better approach should be. But yeah, we will keep an eye out. We will remember not to just stick with this approach, and we'll make sure that we address the problem correctly.
D
If anyone wants to write up a quick proposal in a Google Doc and post it on sig-storage COSI, I would definitely review it and read it. I'm interested. It helps me more to see something in front of me than to just hear all the different conversations. That's just a good suggestion, yeah.
A
All right, yeah. Please, Ben. I think it might be a good idea to explore having a namespaced B and a global B for sharing, maybe. So yeah.
A
If someone wants to put that in a proposal, please do so, yeah. Okay, so we're out of time again. Thank you, everyone. Let's meet again next week. And Ben and Guy and Vianney, if you get a chance, and really everyone here, please review the KEP and leave your feedback there.