Description
Meeting of Kubernetes Storage Special Interest Group (SIG) Object Bucket API Review - 27 August 2020
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A
Yeah, presenting some slides. We're just going to continue where we left off last time. All right, so.
B
So last week we started out looking at the references between the various objects in this COSI ecosystem, and the original design was for when you create a bucket.
B
The user creates a bucket request and points it to a bucket class, and COSI will use these two resources to create a bucket. When requiring access to the same bucket, a user will create a bucket access request and point it to a bucket access class, and COSI will use these two resources to grant access to this bucket for that user.
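[The greenfield flow just described might look roughly like the following manifests. This is a sketch of the COSI design under discussion at the time; the API group, kind names, and fields are illustrative assumptions, not a finalized API.]

```yaml
# Hypothetical sketch of the COSI greenfield flow under discussion.
# Kinds and field names are illustrative, not a finalized API.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketRequest
metadata:
  name: my-bucket-request
  namespace: app-ns
spec:
  bucketClassName: fast-objects        # COSI uses BR + class to create a Bucket
---
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketAccessRequest
metadata:
  name: my-bucket-access-request
  namespace: app-ns
spec:
  bucketRequestName: my-bucket-request # which bucket to access
  bucketAccessClassName: read-write    # COSI uses BAR + class to grant access
```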
B
Now we had a design where, if a bucket were to be shared between multiple... I'm hearing an echo; if you can mute yourself, that'll be good. So, if a bucket were to be shared, in this original design we had the concept of having multiple bucket requests pointing to the same bucket.
B
There were a few problems with this approach. The first one was that we were dealing with a many-to-one relationship: in order to be able to delete all the bucket requests when one of the bucket requests gets deleted, we would have to maintain a many-to-one, two-way mapping, which is complicated. And there's another problem, which is the second one.
B
The second one is portability. When the user creates a bucket request, if COSI automatically creates a bucket corresponding to that request, there is a UUID for that bucket that's filled back into the bucket request field. Now, when you're trying to port this bucket request to a new system...
B
The UUID will not be the same. So if we were to just use... actually, let me think about this. Never mind, I don't think the portability is a straightforward issue just as it is, but we'll get back to that.
B
I want to discuss the approaches that we already talked about, and then we'll get into the details.
B
Yeah, this is the same approach, but with multiple bucket access requests and multiple namespaces. Okay, this is the second approach, where, in order to address the portability issues, we thought we could put the bucket name in the bucket access class and have only one bucket request per bucket.
B
Let's say a user creates a bucket request to create a bucket. The user will not be able to access this bucket until the admin goes and creates a bucket access class for that bucket. The other issue is that for every bucket we'll need a bucket access class, and for every bucket and every access pattern we'll need a separate bucket access class. We mentioned that we'd be polluting the bucket access classes with a lot of entries.
B
These are the issues discussed with this approach. The third approach, which we all more or less liked, was to deal with the many-to-one mappings by actually circumventing them.
B
We said we can have one bucket request always point to just one bucket, so it's a one-to-one mapping. If you want buckets to be shared between different namespaces, then you create a separate bucket object for that specific namespace and have the bucket request in that namespace point to that bucket. And to take care of deletion...
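[The one-to-one design being described might be sketched like this: one cluster-scoped Bucket object per consuming namespace, each bound to exactly one BucketRequest, all referencing the same backend bucket. Kinds and fields are hypothetical.]

```yaml
# Hypothetical: two cluster-scoped Bucket objects, one per consuming
# namespace, both pointing at the same backend bucket.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: Bucket
metadata:
  name: data-bucket-ns-a
spec:
  bucketRequest:            # bound 1:1 to the BR in namespace-a
    name: data-br
    namespace: namespace-a
  deletionPolicy: Retain    # Delete would remove the shared backend bucket
---
apiVersion: objectstorage.k8s.io/v1alpha1
kind: Bucket
metadata:
  name: data-bucket-ns-b
spec:
  bucketRequest:            # a second BR in namespace-b, same backend bucket
    name: data-br
    namespace: namespace-b
  deletionPolicy: Retain
```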
B
If a bucket request that points to the bucket had the deletion policy of delete, we actually go ahead and delete the backend bucket, and all the other bucket requests which were utilizing this bucket will not be able to use it. So if there are workloads using it in other namespaces, they might lose access to the bucket in the middle of their operations.
B
So with this approach, the one that I'm showing right now, implementation becomes easier because of the one-to-one mapping, and we also have the added benefit that the API review committee already understands this approach.
A
There's not a portability issue, though. To Ben's point, you're saying multiple BRs point, and there are multiple Buckets and the backend buckets... the multiple Bucket instances in Kubernetes could all end up pointing to the same bucket in the backend store, and the issue is portability. Portability is solved because you have BARs referring to the BR, right? So you don't have a UUID name in any user-defined spec. Oh right, yeah, portability.
A
What you're allowing, though, and how you solve portability, is by making no user reference to a cluster resource that has to have a unique name. Since there's no user reference to a cluster-wide resource, it's portable by definition. However, it means you can't cleanly handle the delete use cases; you just yank the rug out from underneath apps. There's no mechanism to orchestrate delete so that, if a pod was running or a BR was still active, any of these BRs could delete it. You could put annotations, or you could put ownership, or you could track it.
B
Right, right. To Ben's point, could we say that exactly? The admin considers...
A
Well, what is the... I mean, we only have retain and delete. So yes, you have to set, you know, the bucket class has to have retain if you're going to share. And if you don't have retain, if you have delete as your retention policy, then it's kind of the Wild West: you don't really know.
A
There's also... I mean, I was getting some education from Xing on how PV sharing works in CSI, and it only works through static provisioning. It requires a manual PV creation by an admin, explicitly putting in the volume portion...
A
You know, the IP, the path, whatever of the volume, so that you can have multiple PVs reference the same backend volume. But it's a manual task by the admin, and the PVC that's going to bind to that PV is pre-bound by a user, which means the PVC knows the PV name, but the admin generated the name. So are we going to call that portable still? It wasn't a CSI-generated PV name, right? So.
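[For reference, the PV sharing pattern being described looks like this in the existing core API: the admin hand-creates a PV pointing at the backing volume, and the user pre-binds a PVC to it by name. The server address and paths below are placeholders.]

```yaml
# Static provisioning: admin-created PV with an admin-chosen name.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv          # admin-chosen, no generated UID in it
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.10       # placeholder address
    path: /exports/shared
---
# User PVC pre-bound to that PV by name.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-claim
  namespace: app-ns
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: ""         # disable dynamic provisioning
  volumeName: shared-nfs-pv    # the PVC knows the PV name, but the admin chose it
```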
C
So for snapshots and for PVs, currently you need an admin to help you do that; there's no way for a user to just import a brownfield snapshot or PVC. But for object buckets, we would like that capability. So the question is: is there a way that we can structure the bucket request such that the user can supply the actual handle to the actual bucket, in such a way that the provisioner will sort of import it?
B
Do we expect... so then, do we... well, so it's a generated name here. The backend bucket name could be a generated name, right? So what happens then?
E
Yeah, yeah, and again, we can use the same application which, during deployment, goes through stages, right: staging environment, production environment, testing environment. We would need to change the name every time, so it doesn't seem like that works. That's why we probably will lose portability.
B
What if we did something like... I don't know, just throwing it out there: a label selector?
C
Oh, okay. So there needs to be... so, if you know the actual bucket name on the backend, you can do an import. But the problem is that, if you just create it through Kubernetes, you never find out what that is; yeah, you can't do it. So can we solve that problem, so that when you create a bucket, part of its status information is its actual name? Or is that a security problem?
B
I don't know if it's a security problem, but let me rephrase the question: are you asking if we should consider a different approach for referring to the bucket name from the bucket request for brownfield buckets?
C
Right. Well, I guess what I'm thinking then is: I create a bucket on one Kubernetes cluster, and now I want to go use it on another Kubernetes cluster. So I'm going to basically get the name from the first one and then do an import on the second one. How am I going to get access on the second one? Like, who's... where's the power to sort of grant a new access?
B
So I'm talking about... yes, Ben. So let's say I'm using MinIO: you set up the object storage system within Kubernetes itself. Now, let's say I move my entire cluster from my on-prem to a different on-prem location, and I set up a whole new MinIO cluster. Now I won't be able to reuse the resources that I was using earlier, because the generated bucket name in the first setup will not be the same as the generated bucket name in the second setup.
B
Right, but now I can't just redeploy all the resources I was using earlier.
C
So if part of your workflow is to create brand-new buckets, then, as far as I'm concerned, portability implies that you're okay with creating brand-new buckets after you move to the new cluster. Because that's what would happen with PVCs, right? PVC data doesn't magically move around; if you request new PVCs, you're going to get empty ones.
C
If you want to bring your data over to another place, it's not just copying your YAML; it's actually saying, okay, I need to actually import the data that existed in this other place onto the new cluster and then refer to it with my app. The rest of it should be portable, but the data doesn't just magically move around: you have to import it.
E
Guys, could you remind me: in the very beginning of this KEP, we used to have different bucket class formats for brownfield and greenfield, where for brownfield the bucket class would explicitly say to which bucket it belongs. Is that the case right now, or have we kind of moved away from that?
E
Right, I remember this specific idea was actually something... it wasn't even Andrew's; it was from before this KEP, and Andrew maybe even knew about this problem. I think it came from somewhere else. But I thought we adopted it... but we don't follow it. Yes, okay.
B
Yeah, I think that's a good question, actually. So I want to get back into this portability thing: how do we define...
C
There's a difference between portability in terms of "I can take an app that ran in one cluster and run it in another cluster, where I'm okay with starting over from nothing," and then there's "I want to migrate, I want to actually share data across multiple clusters," and that's not what I think we typically call portability. That's right: actual data sharing.
B
In the option one that you mentioned, when you say you want to be able to take the YAML that you use in cluster one and use it in cluster two: should the user be expected to update whatever reference they have to a bucket depending on where they are, or should it just work no matter what?
C
Right. So if the bucket pre-existed both clusters, then both of them would just basically import the same bucket and start using it. But if one of the clusters created a bucket, and now you want to move your application over to another cluster, it would be reasonable to assume it would also create a bucket.
E
Well, basically, I can... if the bucket class itself has some information, for example, to distinguish between greenfield and brownfield. So in one case, for a new bucket, I will use a bucket class which says: okay, it's greenfield, you create a new bucket. And for the brownfield case, in the new cluster the bucket class will say: okay, I want to reuse this specific bucket. My application YAML will be absolutely the same; it will just pick up a new bucket class which will instruct the provisioner to reuse the existing bucket instead of creating a new one. But yeah.
C
If we look at it that way... no, this is a good example of a way you could work around it. But the problem is that a storage class could only ever refer to one bucket, right? I don't see how you can make a storage class that says "this is the brownfield storage class" and can somehow deal with multiple buckets. So you don't want to end up in a situation where you need a brownfield storage class for every bucket.
B
Yeah, but then, if it's provisioned, and you set the PV name to, say, some pvc-dash-UUID, which is how it looks if it's provisioned, and then you just take the same PVC definition to another cluster: it won't work for you, because when you provision it there, the UUID will...
A
Be different. When we look at portability and the idea of brownfield or sharing, CSI is not portable if I have to do static provisioning. If we do greenfield CSI and now I have a PV pointing to an NFS system, right, the NFS volume, and now I want to share that: I have another PV that has to get manually generated to refer to the same NFS volume.
A
Okay, and then I have to bind a new PVC to that manually generated PV. Well, I guess the name was admin-created, so I think that means it doesn't have a UID in it, so it would be portable.
B
I still don't quite understand why that is a problem. So if you dynamically generate it... if you have a PVC that dynamically generates the PV, and then you try to just take that PVC object and put it in a different, yeah, I guess, a different cluster...
E
It's absolutely one of the possibilities, but another possibility, which I think even precludes what you proposed, is that if there is any PV whose name refers to my new PVC, it will match automatically, even without any further checking. Or maybe there will be checking, and if it doesn't match, maybe there will be an error in that case. But basically, first they try to map by whether the PV refers to the PVC name and namespace, yeah.
B
So let me ask you: if I have a PVC that's referring to a PV, okay, and it got matched somehow, and at this point, after matching, it's referring to a particular PV, I can't take that PVC spec as it is and use it in a different cluster.
B
The information is not portable; that's exactly what's happening here. So instead of the matching doing the referring, the user is expected to do the referring. That's the one difference. And in this case it's exactly like that: if it already starts referring, then it's not portable, and I think that's okay. It's not.
E
That's right! Yes, yes, I think I agree with you. Yes, it's okay: if a bucket request starts pointing to a bucket, then this bucket request configuration is not portable anymore, right? I think I agree with that. Right.
B
I have a question here, actually. Let's say I create a bucket request in one namespace, and it ends up creating a bucket. Now, how do I use that bucket from a different namespace? How would that cloning, or whatever we want to call it, work? How do we import that bucket into a different namespace? Let's use the word "import"; it seems more intuitive, at least to me.
E
So, yeah, just a wild idea how it may work: you copy the source bucket mostly as-is, but you change the bucket request reference in the bucket to the new bucket request: the new namespace and the new name of the bucket request. Then you go ahead and create the bucket request there, and, again, like PV/PVC, it will automatically just bind.
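[The wild idea just sketched could look like this: a clone of the cluster-scoped Bucket whose request reference is rewritten, plus a pre-bound BucketRequest in the new namespace. All kinds and field names here are hypothetical illustrations.]

```yaml
# Hypothetical import-by-cloning: a copy of the source Bucket whose
# bucketRequest reference is rewritten to the new namespace and name.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: Bucket
metadata:
  name: orders-bucket-team-b      # new name for the cloned object
spec:
  bucketID: orders-7f3a           # same backend bucket as the source
  bucketRequest:
    name: orders-br               # rewritten to point at the new BR
    namespace: team-b
---
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketRequest
metadata:
  name: orders-br
  namespace: team-b
spec:
  bucketClassName: app-buckets    # binds to the clone automatically, PV/PVC-style
```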
C
So the bucket is not namespaced; you just have a second copy of it with a different name, right, that refers to the same actual bucket. So if the admin creates that, sure, you could sort of create a pre-bound bucket request that refers to it, and then the provisioner will say: oh, it's half-bound, I'm going to complete the bind, and then it'll be done. But that requires the admin to have basically cloned the bucket object at the cluster scope. Well, what if that... what...
E
Can the alternative here... like, if we want to create a new bucket... We want to avoid a one-to-many relationship, right, so we need to create this new bucket. So I'm trying to understand: what's the alternative here? Only an admin can create non-namespaced... did the...
C
The alternative, in my imagination, was: you have a special, optional field on your bucket request spec. It says, "this is brownfield; I want to import it." And then, when the provisioner sees that, it says: oh, I should not fulfill this request by creating a new bucket; I should fulfill this request by cloning an existing object and then binding to that.
B
That's... yeah, you're right, yeah. So, you know, we leave that up to the driver, but generally a bucket access, you know, really represents one type of access and a separate service account, not the Kubernetes service account but a separate, say, cloud service account, that's actually accessing the bucket. And each one, you know, wants to access it in its own way: some might just want to write, some might want to read and write, whatever.
B
If I have... in this case, the credential is given per user; that is, the access pattern is independent of the bucket's access mode. With a PV and PVC, if you create a read-only volume, nobody can write into it. In the bucket case, a bucket is just the bucket, and each user gets a completely different type of access.
B
I don't think these are very different entities... you know what I mean.
I
And my only question is: if we are using a bucket request to represent even, you know, referring to an existing bucket, what is the difference between these two objects? Are they saying anything new from the application side? Because it seems like we are asking for access to a bucket, with an optional creation.
C
Or the other one... I think it is. I mean, I think Andrew does a good job of explaining why it's needed; I'm not good at channeling him. I will say the difference with PVCs is that, with PVCs, the hypervisor, the node, has the ability to remount a volume read-only. So if the volume is exported to the node writable, you can make it read-only for just one pod by remounting it.
I
Well, I think this is, you know, just a restricting way of defining the API. But, you know, if you really feel like there's a good reason for restricting how the driver returns credentials... yeah, well, we're not restricting.
C
I think maybe the other half of it is that there's a lifecycle. If you create a pod and you attach it to a bucket, it's going to need a credential, and something has to create that credential. You could try to do it just in time, maybe, but then the question is: when is it safe to delete it? I think having the bucket access request be a formal object that gets deleted at some point means you know when it's safe to delete that credential.
I
If that's it... of course not. The question was: when you are referring to this, if you define the API as "mint credentials," for example, it is something... oh.
B
Sure, yeah, it can be an existing one or a new one. That's how we've been designing it so far. That's why I guess I was trying to understand and clarify your question: so do you still see, or not see, the need for the BAR?
B
Is that still, you know... I just think that... I mean, I don't have a good answer here, I have to say first of all. But the problem that I see is: whenever we refer directly from a bucket access request to the bucket, which is what we suggested originally, then the bucket request becomes just a creation facility, right? And when we move that back, the bucket request is just an import facility. When that is the case, I'm just questioning why.
B
It's still creation of the bucket object. So I see where you're coming from; actually, I know what you mean. Right now, a bucket request for brownfield is really not doing anything other than some internal Kubernetes management. As far as the user is concerned, why should they do a bucket request for a bucket that already exists? That's a fair question, but, given all the constraints we're dealing with, this is the model we're going with, and, I think, you know, going forward.
E
Yeah, but, as I mentioned before, the user may not even know ahead of time if they want a new bucket or an existing one; their application should be portable. And maybe I'm making this up, but it's just something that comes to me: when I have my application, in some cases I really just need storage to talk to, right? I don't care if it's a new one or an existing one, and I want to use the same YAML for my application.
A
That statement about an application not caring whether it's a new or existing bucket: what is your collective wisdom on that use case? Is that actually common or important?
E
Just imagine, for example, again, an application which goes through stages. In a test environment, it's okay to create a new bucket and test against a new bucket every time, right? But in my production environment, there is probably an assumption that there is already some existing bucket which I should reuse, yeah.
B
So, in that case, we're saying... real quick, like Ben was saying, there's a switch, there's a field, like a brownfield field. Or, rather than putting it that way, let's have a bucket name field: if it's filled in, it's brownfield; if it's not, it's greenfield. In, say, test and staging it's not filled in, and in production we fill it in. That's exactly what you want, yeah, but...
B
I mean, so how would you... let's talk about the behavior in production, the behavior of this application. So are you saying that the user will not even care if it's greenfield or brownfield, just that somehow the right bucket is provisioned?
B
What people do is, in the pod spec they'll have a different service account, different config maps pointing to, say, environment variables; all of that is changed between prod and staging. If you look at Kustomize or Helm, in both cases, in terms of environments, the data in the YAML changes. Well, that's not in the YAML, right?
C
Well, it does, it does. So what I hear is that you want to be able to write an application such that whether it's a greenfield or a brownfield deployment becomes a deployment-time decision, and you don't have to change the YAML to achieve that, right? That's correct, yes. And that does make sense to me, and so I think we should...
C
We should make sure that the bucket request can be written in such a way that it can be a deployment-time decision. I think we don't currently achieve that with CSI snapshots; with CSI snapshots, you can't request an empty snapshot. The snapshot always exists. Oh.
E
Yeah, I think a snapshot is not portable; a snapshot is, yeah, a beast which is not really portable. I agree with you totally. But, for example, a PVC, right: I have the same PVC definition and I don't care what the underlying storage is. It might be in Google Cloud, where it might be a PD; in Amazon it will be some EBS, right, or whatever. So you don't care: you have one PVC definition, and it's just portable across cloud providers or even on-prem.
B
So, questions. We have only five minutes, so I want to stop right there. The question is: how important is this for the MVP?
B
So the BR would just select the labels that it wants, and then one of the buckets that satisfies that label selector would be given to the BR, yeah.
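[A sketch of that label-selector variant, modeled on how a PVC can use `spec.selector` to match pre-created PVs. The COSI kind and selector field are hypothetical.]

```yaml
# Hypothetical: the BR matches any available Bucket carrying these labels,
# analogous to a PVC's spec.selector over PVs.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketRequest
metadata:
  name: app-data
spec:
  bucketClassName: app-buckets
  selector:
    matchLabels:
      app: orders
      tier: production   # any Bucket labeled app=orders,tier=production qualifies
```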
B
So, yeah, some of the parameters... like, you know, in the case of PVC/PV it's the access mode. So, you know, some set of parameters.
C
A lot depends on the storage class. If the storage class points to a CSI driver, then the external-provisioner sidecar gets to do whatever it wants, right? And I don't think, with CSI in particular, it's possible to do the kinds of things you're talking about with matching. But we should find out what the closest analog in the PVC/PV world is and try to model it after that, because that seems like a reasonable thing to do.
B
The answer is: the admin has to do it. It's just that we don't have an answer for automation, and I'm not sure. It could be enough for the MVP, but I think everyone has to agree on that.
B
That is kind of odd to me also. So I can give some info on how, like, you know, MinIO is used in VMware.
F
So, like I see it in manufacturing: you'd have different namespaces, different clusters, and they have to work with the same data sets, or do the next steps with the same data set, right?
B
Yeah, okay. So I think the open question is: do we automate, or do we leave it as it is? I think one way to discover that is to actually put it out there and see if people are screwing this up a lot, or if it works fine, or if questions really come up about this. That way we'll actually be able to find out if we need automation there or not.
B
Okay, all right. Thank you, everyone. We'll meet on Monday again, and we'll try to make a decision on Monday about how to take care of this. I'll see you all then, Monday at 11 PST.
B
Well, I think the question is pretty simple. I don't know in what particular way we need Andrew, but he's been a very important part of this community, so, in that sense, before we make a decision about the KEP itself, he should be present.