Description
Kubernetes Storage Special-Interest-Group (SIG) Object Bucket API Review Meeting - 18 March 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A: For those of you who missed it: let's make a decision today on whether we go forward with it or not.
C: And is this visible? Okay. So, again, we discussed this a week ago, and then for part of our Monday meeting this week. As I mentioned Monday, I haven't made a change to it; it's just what we showed Thursday.
C: I think Ben said he wanted to let it soak a little, and that's fine. So this is what we have. I put the link to this in the sig-storage COSI channel, and if you look at that link and scroll up, you'll find where I've listed some benefits of this proposal; above that I just have a workflow.
C: Anyway, the link is in the sig-storage COSI channel. If you scroll up a little bit: I've been making changes to this document that aren't related to this proposal, so it now has a table of contents that you can click to get to this diagram. So that's where we are, Sid. What would you like? Would you like me to go over the workflow again, or can we just discuss it and take questions?
A: So, there are some new people here today, and I'm also not sure if Vyani and Nicholas got a chance to look at this new architecture and review it, so I'll give a quick summary of why we're doing this.

A: With our current design, when we want to share buckets between namespaces, we end up creating copies of the bucket: literally another Bucket object is created, probably with a different name. That leads to a bunch of problems around who holds the source of truth, and around how deletion can be handled so that, if there are other consumers, we respond in the right way.
A: So we wanted to come up with a model where we don't have this odd bucket-copy mechanism, but instead have references, in a more intuitive way, between namespaces that are requesting the same bucket. That's when Jeff came up with this proposal. So I'll let Jeff explain the proposal, and also give a quick summary of some of the trade-offs it brings about; some of us have already spent a few weeks on this.
A: So I'm hoping we make a decision to either go forward with this or not by the end of this meeting.
C: Yeah, so Sid outlined the motivation behind the thinking here. I believe it will make deletion cleaner, although Ben has brought up that maybe there's another retention/deletion policy we need besides it; that's where we talked about force delete versus a sort of delayed, or lazy, delete. But anyway, in this diagram, the left-hand side, namespace 1, is the design we have today in the KEP.
C: Bucket 1 is instantiated at cluster scope, and there are two users in namespace 1 who want to use that bucket. They just refer to the BR as they do today, and with two BARs you get two BucketAccess instances, BA1 and BA2, as shown. Those are cluster scoped, and both point to Bucket 1.
C: Okay. Now, sharing a bucket into a different namespace is where we do this cloning today, and this proposal addresses that. So what we have in namespace 2 is: we want access to the same bucket, Bucket 1. We create a BAR, and instead of that BAR pointing to a BR in namespace 2, it points to Bucket 1 directly.
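As a rough sketch of what that direct reference might look like in YAML (the API group, kinds, and field names here are illustrative only; the COSI API was still in flux at this point):

```yaml
# Cluster-scoped Bucket, originally provisioned for namespace 1's BR.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: Bucket
metadata:
  name: bucket-1
spec:
  provisioner: example-driver.objectstorage.k8s.io   # hypothetical driver name
---
# BucketAccessRequest in namespace 2: under the proposal it references
# the cluster-scoped Bucket directly, with no BR2 clone in between.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketAccessRequest
metadata:
  name: bar-3
  namespace: namespace-2
spec:
  bucketName: bucket-1   # direct reference instead of a namespaced BR
```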
C: That's the crux of the change; that's the main change right there. The BAR also causes BucketAccess instance 3 to be created, and it also points to Bucket 1. So this represents sharing within a namespace and sharing outside of a namespace. It's been pointed out that it's not symmetric between the two sharing use cases, and that's true, but it's cleaner.
C: It's also been asked: how does BAR-3 know the name of Bucket 1 if the user doesn't have RBAC rules that allow them to list it? How do they just know that name? The answer is the same as in the current KEP design.
C: If we were doing sharing in namespace 2 today, the user in namespace 2 would have had to create BR2, and BR2 would have pointed to Bucket 1, so they still needed to know the name of Bucket 1. So this isn't introducing anything new in terms of a user knowing the name of Bucket 1, but it is not symmetric between namespaces that are sharing a bucket.
D: Hey Jeff, one issue floated back into my mind that informed our original design: if you want to share a bucket across Kubernetes clusters, you'll unavoidably have to clone the Bucket into the other cluster and create another BR. Or, I guess, you wouldn't have to create another BR with the new proposal.
D: You would just have to clone the Bucket and then point some BAs, or BARs, at it. But the moment you go across clusters, you can't avoid cloning the Bucket, right?
D: So one of the benefits of the original design was that it makes cross-namespace sharing identical to cross-cluster sharing. The thing you would do to share across clusters is the same thing you do to share across namespaces: you clone the bucket, you create a BR that points to the bucket, and then you just use that in your namespace. It's the same process, and that helped push us toward the original design months ago.
D: It does remind us that there are still situations where you are going to have two copies of the Bucket. You are going to have issues like: what if one party deletes it while the other is using it? And what if we ever end up supporting mutation; how do you control which of the Kubernetes clusters is allowed to perform the mutations? Those kinds of questions will remain even in the new design.
A: So, can you clearly answer how important the other use case is, that is, cloning across clusters, as compared to...
C: I mean, I think sharing across clusters is always going to involve some kind of cloning. Across clusters you're going to have multiple instances abstracting a single physical resource, whether that's a volume or a bucket, so I don't think...
D: Yeah, right. So, recognizing that, in the original design we said: why not just make cross-namespace sharing identical to all the other forms of sharing, and say, look, we're going to have to copy the bucket in some cases, so why not copy it in all cases and make everything collapse down into one use case? Yeah, but in that case you're...
C: Cloning. In this example of two namespaces sharing buckets, we would have a Bucket 1 and a Bucket 2. And now, if you talk about cross-cluster, you're going to clone again: Bucket 1 and Bucket 2 will be in cluster 1, another copy will be in cluster 2, and they are all representing a single physical bucket.
D: ...that wanted to use it, you'd have to copy it again. But in that world it becomes very obvious that every one of the Buckets needs policy information to tell you: may I delete it, may I mutate it, all that information. Because when there are two clusters, they can't talk to each other; you need a mechanism to decide who has the power, and it has to be...
A: I guess you're saying the benefits of whatever we're doing may be completely taken away as soon as we have buckets being shared across clusters, right?
A: Quick question there: how is this different from the static brownfield case, or whatever we call static brownfield?
A: With brownfield, we don't really care, because we're not involved in the creation lifecycle, and we don't get involved in the deletion lifecycle either. In the sense that if it happens outside the scope of COSI, we just let it happen; we just deal with it. Would that be a good enough answer here?
D: Right, that is the pure brownfield situation, and we have a pure greenfield situation. But the third one we talked about was transitioning from green to brown: you created it inside Kubernetes, and then you decided you never want to delete it, or you want to promote it into basically a brownfield bucket. You can't just delete it in Kubernetes, obviously, because that will delete the backing bucket. So how do you transition from your greenfield bucket to a brownfield bucket?
C: Yeah, and that was a good discussion we had. Then: what happens if you delete BR1? What happens to Bucket 1, and what happens to the backend bucket?
D: What I'm contemplating is: what if we say that before you can share it, you have to go through a sort of brownfield conversion? Then, once you share it, it's basically already brown.
D: Well, it would remove a set of use cases where you have this combination of: one namespace can mutate it and delete it, nobody else can, but other people can still see it. It would force you to either say, look, it's all green, in which case it's stuck inside the namespace and can never leave; or it's brown, in which case everyone can share it, but nobody has access to delete it or do anything that manages the lifecycle.
D: It would clearly split you into one use case or the other. Right now I think the extra complexity in the design comes out of the fact that we want to do both at the same time, and I'm saying, well, why not just force you to flip a switch when you want to start sharing a bucket?
D: And to go the other way, you'd say: okay, I am now reassuming control, and I promise that I'm not sharing it with anyone when I do that, so that you could get it back into the state where you control it.
D: No, I mean, it sounds complicated, but I'm literally imagining this process involving flipping a bit and deleting an object to go from green to brown; and then to go from brown to green, it would be flipping another bit and creating a new object, and you're back in the greenfield. I'm imagining some very simple mutation that switches you between the two modes.
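For concreteness, a minimal sketch of that bit-flip, with entirely hypothetical field names (nothing here is settled API):

```yaml
# Green -> brown: flip a bit on the Bucket so the controller stops
# managing the backing bucket's lifecycle (field name hypothetical) ...
apiVersion: objectstorage.k8s.io/v1alpha1
kind: Bucket
metadata:
  name: bucket-1
spec:
  deletionPolicy: Retain   # was Delete while greenfield
# ... then delete the namespace-scoped BucketRequest. The cluster-scoped
# Bucket and the backing bucket survive, and the bucket is now effectively
# brownfield. Brown -> green would flip the bit back and create a new BR.
```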
A
Isn't
that
also
harder
to
okay?
So
just
from
the
user's
perspective,
it
seems
harder,
but
you
know
maybe
not
this
is
this.
Is
this
has
more
to
do
with
how
we.
D: ...build this. We've always said that when you go across namespaces, an ordinary user can't do it by himself; you need someone with admin powers. So an ordinary user will always be stuck in the greenfield world, and to get into the brownfield world you're going to need help from an admin or a privileged controller. And we could just say that transitioning back and forth is another function of that controller.
A: Okay, so we have multiple questions here. The first one is: is this approach, the one that Jeff is presenting right now, better than what we had earlier, and is the answer yes even with the problems that you mentioned?
A: The bucket is being created within the context of the cluster in which it's created. I think we talked about it at one point, that brownfield will eventually go away; I think even you said that, the idea being that you create buckets on the fly as you use them, and they go away when the workload's done. So how much more important, or how important, is sharing across clusters?
D: And I want to make one other assertion, which is that the bucket exists outside the cluster. This is not a resource that is internal to Kubernetes, such that if you deleted the Kubernetes cluster itself the resource would go away; if you nuke the whole Kubernetes cluster, the bucket will survive. So it is outside the cluster in a very real sense, and it's logical to want to share it across clusters if you have lots of clusters.
F: There are buckets, so clearly there are also systems that are internal to the cluster, right? There are two cases here.
D: And you can share snapshots, right? You can have a snapshot that was created in one cluster, share it with another Kubernetes cluster, and then allow people to clone from it. That's a perfectly reasonable thing to do if both those Kubernetes clusters have access to the same storage, but...
A: Okay, let me ask the question this way. If this is a benefit within the cluster, I mean, if this approach is better when you're just sharing within the cluster, then I want to ask: if we were to go with this approach, do you think we can come up with a solution for the cross-cluster use case down the line? Or do you think that might be a really hard thing to do, so that we shouldn't go forward with this approach right now?
D: I was just outlining it at the beginning. You definitely would have to create another copy of the Bucket in the other cluster, but you could forgo creating any BRs in any namespaces, and just start creating BARs that point to the Bucket in all of the various namespaces that wanted to use it in the new cluster. Yeah, that's true. And, of course, now you have no way of deleting it other than an admin deleting the Bucket object.
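So the cross-cluster flow being described might look something like this (again, a hedged sketch with illustrative names): an admin copies the Bucket object into the second cluster, and namespaces there create only BARs against it.

```yaml
# Cluster 2: admin-created copy of the Bucket. From this cluster's point
# of view it is brownfield; COSI here only grants and revokes access.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: Bucket
metadata:
  name: bucket-1
spec:
  provisioner: example-driver.objectstorage.k8s.io   # hypothetical
---
# Each namespace in cluster 2 that wants access creates only a BAR;
# no BucketRequest exists anywhere in this cluster.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketAccessRequest
metadata:
  name: app-access
  namespace: team-a
spec:
  bucketName: bucket-1
```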
A: I mean, we're trying to see if there's a way to work with this. The whole idea behind the approach Jeff is presenting is that we want to avoid copying buckets, but along with that come a lot more benefits too. And now, when we go across clusters, we basically lose those benefits, because we're copying anyway.
C: I think measuring this in a multi-cluster environment is not consistent with how other KEPs, and even other current projects, are being measured right now, at least until we have a KubeFed 2 or something where resources are visible across clusters by some main cluster controller.
D: Well, I don't agree. I think this is one of the unique aspects of storage relative to most other things. Networking, compute, everything else sort of by its nature exists inside the cluster, but storage is the special thing that tends to exist outside the cluster, which the cluster has to make use of.
D: It just seems to me that if I did have a whole bunch of Kubernetes clusters that I was doing various things on, and I wanted to share data between them, an object store is the obvious way to provide storage that is accessible across all of my Kubernetes clusters. There's no better way than object storage to achieve that, and so it seems like one of the best cases for object...
C: However, what I have trouble agreeing with is that it's a problem that COSI should solve. Although object storage is a great use case for multi-cluster, it's a bigger problem than that, and there's been Ubernetes, there's been KubeFed 1 and 2; we've had work in multi-cluster that doesn't seem to stick. I don't know if there's a current effort now. So I think when some subset of Kubernetes resources is visible across clusters is when COSI should start looking at that.
D: Well, so the thing that it makes better is that you don't end up with multiple copies of the Bucket. But you don't get out of jail on needing policy bits on the Bucket that tell the controller whether it is allowed to delete the bucket and whether it's allowed to mutate the bucket, which was one of the original motivations for not even having two copies of the bucket.
D: We didn't want to have to manage those bits. Well, I guess we convinced ourselves that, even with only one copy of the bucket, you still need a deletion policy. This all started when we were talking about how you mutate the bucket.
D: If it's being shared, the question is: which one is the real one? And my proposal was: you have to tell us which one is the real one, by setting a policy bit on the bucket. Then we said maybe it's better to not have multiple copies, so that it's obvious which one is the real one. But what I think this discussion is pointing out is that that only helps inside one cluster; the moment you go across clusters, you need that bit again.
D: And I don't know that that's a win. I could be talked into the idea that it's still a win, but it's a smaller win than I think we had originally envisioned.
D: I mean, we still don't plan on doing mutation, but it's clear now that, if we did mutation, you would want a way to flip it off. In the situation where you had shared a bucket across clusters, you wouldn't want the other cluster to be able to set anything; you'd want it to feel like the brownfield case, where you can only access it but can't do anything else.
A: Is there another way we can enable sharing across clusters? Think of it as using a bucket like the static brownfield case, where a bucket was previously created outside of the cluster and you just want to start using it in COSI. Maybe we should have some mechanism where no Bucket object is created, but a BAR can point to a static brownfield bucket directly.
C: I'm trying to remember if there were issues with uniqueness, where different provisioners could have the same bucket endpoint. I don't know if that matters that much, but for a little bit we did look at, instead of...
D: ...the brownfield use case, because I don't think about it very much. But it's possible that, if the brownfield use case is clean enough, we can just say that's what you do for everything other than the namespace where it was created, and that might be okay. To Jeff's point earlier about identifying the provisioner: I thought the whole point of brownfield was that the provisioner is irrelevant, because we're not managing a lifecycle.
A: That distinction is not a problem at all. Again, I look at it as an implementation detail, but yes, conceptually, sure, we can make that distinction, attacher versus provisioner, and that would simplify how we think about brownfield, because we would now think of it as just something to do with the attacher rather than the provisioner. Yeah.
A: Yeah, and once you frame it like that, then what I just said, about having the BAR directly point to the Bucket for all brownfield cases, starts making a lot of sense.
D: Okay, well then, maybe we can continue with this new design, with the understanding that when you go across clusters it's going to be a brownfield use case, and yes, you'll get another copy of the Bucket, just like you would have if it was a normal brownfield bucket. And of course you can't do anything to it other than attach and detach, right?
E: I think it's better to make it explicit. That's how it works with snapshots: when we designed VolumeSnapshot and VolumeSnapshotContent, it's very explicit. You can't really convert between what we call, if you say, green or brown, or the pre-provisioned and dynamic cases; there's a source that you have to define ahead of time, and once the source is set, it's immutable.
A: What Ben is saying is that we don't have to provide a mechanism, as long as there is a way for the admin to basically hack a bucket together, put it together, and...
A: We have only 20 minutes. In the next few minutes, say three or four, I want to quickly pin down how important that use case is.
A: Well, I think that also adds more complications: admins might want to own brownfield buckets that weren't part of the cluster, that were created the wrong way, and then mistakenly delete the Bucket not expecting the backing bucket to go away, but the backing bucket goes away. I don't know if we should support that case. Well, so it's...
A: Is it different if I just backed up my YAMLs and then restored them when the cluster went away? I mean, how do you even know in COSI whether it's this case or that case? True, that's true.
A: The controller knows whether the bucket has already been provisioned or not. Okay, I mean, that value has to be there in the object; otherwise there could be a problem within the original cluster.
D: Maybe this is one of the differences between buckets and snapshots: we have a forward pointer, so we know it's already been bound, and the controller can just ignore it. I think that was one of the problems when we were designing snapshots: we didn't want to have that field unless it was this import case, because we didn't want to have the same type of binding that we did with PVCs. But I think with buckets it's okay either way.
D: If we do end up having to do the thing Xing is suggesting, which snapshots did, it's very easy: it's just another field and some additional validation rules to ensure that you don't set both fields at the same time. So I'm not too worried about it if we have to do that, and I agree that we should at least prototype it and prove to ourselves that it's not hard, but it doesn't need to hold up the rest of the design. Yeah.
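Applied to buckets, that would amount to something like a one-of source on the Bucket object. A sketch, with hypothetical field names, of the two mutually exclusive variants the validation would police:

```yaml
# Greenfield: the Bucket was created by COSI for a BucketRequest.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: Bucket
metadata:
  name: bucket-green
spec:
  source:
    bucketRequest:                     # hypothetical one-of branch
      name: br-1
      namespace: namespace-1
---
# Brownfield/import: the backing bucket already exists out of band.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: Bucket
metadata:
  name: bucket-brown
spec:
  source:
    existingBucketID: my-team-bucket   # hypothetical one-of branch
# Validation (e.g. a webhook) would reject a Bucket that sets both
# source.bucketRequest and source.existingBucketID, or neither.
```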
A: All right, so let's... actually, right now it's 10:45. It sounds like most of us, if not all of us, are okay with moving forward with this approach.
C: Well, like I said earlier, we did have a proposal at one point that was going to do that. It was pre-KEP, or very early, well before the KEP was ever merged, and I thought some of the issues around it were related to uniqueness of bucket names, and what that field would be.
C: Is it a composite field of several pieces of information? I thought we ended up concluding that it wasn't going to work, but I can't remember right now why.
A: Well, the earlier assumption was that bucket names weren't globally unique, but we know that across all three clouds they are.
A: So I think it's okay to push this decision to Monday if needed, if we still have questions about the feasibility of this direct reference to the backend bucket, but ideally I would like to move forward right away.
A: No, once we have that dichotomy of attacher versus provisioner, there won't be... I mean, there won't be a BR linking to a B; it'll just be a bunch of BARs pointing to the B, and...
C: Yeah, but that's for brownfield. For greenfield I still have a BR, right? Yeah. And then the BR causes a backend bucket to be created; it causes a B to be instantiated, right? Is there a binding between that instantiated B and the BR?
A: Yes, yes, so that's the greenfield case. And what Ben brought up just now is: let's say a bucket has already been created, you're moving it across clusters, and you still want to take ownership of the bucket; do you recreate the BR and B, or do you just copy the B from the previous cluster over here?
A: Can we prevent the bucket from being created and bound again and again, as was just said? That's possible.
F: It's hard to imagine right now, and maybe, Jeff, this is what you're saying: it's hard to understand how the brownfield case works and what needs to be supplied in the access request, the BucketAccessRequest. What does that look like? Is there any driver-specific information there? What's the structure right now? Because before, we just pointed to one of COSI's constructs, and now we're not. So what does that look like now?
C: What you're saying is that when COSI sees a direct reference to the backend bucket, it only attaches and detaches, but it doesn't drive the provisioning lifecycle. That's all we do with normal brownfield anyway; that's exactly how all brownfield will be treated. We just do a grant and a revoke, yeah.
C: Yes. So I think the purpose of, say, referencing a URL in brownfield is a cross-cluster purpose, for maybe easier migration or portability across clusters. But it doesn't give you, in my view at least, any policy control across clusters, because if I change a field in cluster 1 that says I'm the owner, cluster 2 has no idea you did that.
C: I'm saying that since there's no mechanism to automate setting or mutating resources across cluster boundaries, we don't have that right now.
D: Yeah, so within a cluster you can use greenfield or brownfield; once you start spanning cluster boundaries, you always have to use the brownfield mechanism. The only situation where going from brown back to green is an interesting scenario is recovering from a disaster: my cluster got nuked and I want to continue to manage these buckets.
D: At that point I'm sort of asserting control, and there are dangers there, because if you do that on multiple different clusters, then you end up in a situation where you don't know who the owner is and you're going to have problems. But that's not a solvable problem.
D: So you basically have to say: if you do this, you'd better know what you're doing, and it's not going to be the normal use case. The normal use case is: I want to share, so I use the brownfield mechanism, and you never get into trouble that way. You can only get into trouble by going from brown to green, and you just have to know what you're doing when you do that.
D: Rather than attempt to solve that, we should say that that's the rope you can hang yourself with, so be careful. But you would like that ability, to be able to recover from disasters; to be able to say: oh, my cluster went away, I just want to point it at the bucket, because I have a backup of the YAML. So you could import the BR and, boom, it's like this cluster always owned it, and I know what I'm doing, so trust me.
F: I actually think this piece is pretty clear to me. The only thing from the design discussion we had today that I'm not sure about is: what are we doing with the BARs? Are we really detaching these from the Bs? Because detaching them makes it a completely different reference design, that is, how do we refer to a bucket for access, and that's kind of taking me to a whole other dimension of this design. I think we mean that we'll still have...
F: It's just the way the access works. So let me make sure I understand what you're saying: when I construct a BAR, I would still have, as in the diagram Jeff created here, to point either to a BR or to a B for the access to work.
A: You can only point it to a B in the cluster where it's shared. Let's strictly talk about the non-ownership-taking use case, so just the "using the bucket" use case. In that case, the BAR would point to a B, but that B object itself is something that you can only attach to and detach from; you can't manage its lifecycle.
F: Right, right. So this is one option to get access to a bucket: I refer to a bucket request. And the other would be to refer to a B directly, right? And I wouldn't have, what we mentioned here for a second, the BR just having a bunch of URLs and whatever properties are needed for the user to construct the access requests directly against the external...
A
Right,
the
bar
will
not
have
like
the
bucket
name
protocol
and
all
that
yeah
okay.
It
will
still
lie
in
the
bucket
object
itself,
but
then,
like
I
said,
the
bucket
object
is
going
to
look
different.
A
Yeah,
I
think
yeah
originally,
I
also
thought
we
could
just
have
you
know
that's
why
we're
talking
about
uniqueness
that
we
could
we
could
put
that
value
in
the
bar
itself,
but
when
I
listened
to
ben
explained
again,
I
think
he
meant
we
have
it
point
to
the
bucket
still
but
yeah.
D: Can you confirm that, Ben? Yeah, that was what I meant, and I'd love to see, I don't know, a walkthrough of all the steps: create a bucket, export it to another namespace or to another cluster, and then...
D: ...what exactly do you do, with examples of what the YAML would look like? That would make it much more concrete, but in my imagination it works fine.
A: All right, we're out of time. I don't think we should just decide right now; let's decide on Monday, once we've all had enough time to walk through the steps, either just mentally in our own minds, or maybe someone can prototype it. So let's make this decision on Monday. Also, for Monday, the way I'm thinking of office hours...
A: It's going to be a session for developers: people who are writing drivers, and also people who, rather than reviewing the code on each PR, want to understand how the code looks right now and suggest improvements. So I'm thinking the way we'll do it is: 11:00 to 11:30, the first part of the meeting, we'll have the usual meeting, and 11:30 to 12:00 we'll have the office-hours session, where we'll jump into the code.
A: And we're going to do these office hours every alternate week. That way we still have the opportunity to have the regular discussions, and then use the rest of the time to help out vendors and developers who want to integrate with COSI, that kind of thing. Does that sound good? Yeah? Awesome, all right.