Description
Meeting of the Kubernetes Storage Special Interest Group (SIG): Object Bucket API Review, 31 August 2020
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A: Okay, so one of the things I wanted to do this week was give a quick recap of where we are in terms of the direction we're taking. Some of the members couldn't join last week, so I want to make sure everyone's on the same page about the latest developments; I'll quickly go over what they are.
A: What we started out with two weeks ago was trying to understand the relationships between the different objects in this ecosystem, the COSI ecosystem. The way we originally imagined it was: when a user wanted to create a bucket, they would create a BucketRequest, and there would be a corresponding BucketClass that they would refer to, and using these two pieces of information, COSI would go ahead and create a Bucket. When the user needs to access the bucket, they would create a BucketAccessRequest and point it to a BucketAccessClass, and COSI would be able to use that information to create a BucketAccess object, which represents some form of credentials or a grant.
A: Now, in this model, where there was confusion was how to delete the bucket when we have the use case of multiple BucketRequests pointing to the same bucket. In the original design, we had the idea that if multiple BucketRequests in different namespaces wanted to utilize the same bucket, they would all point to the same cluster-scoped Bucket.
A: This had the problem of the many-to-one mapping, which we went over, and also the problem of deletion: it's not clear which BucketRequest's deletion should lead to the deletion of the Bucket. There are ways to work around it, but it was not the best model, because of the many-to-one problem as well.
A: So one solution that came up from this discussion was to point to the Bucket directly from the BucketAccessRequest in one case and from the BucketRequest in the other; sorry, let me restate that: point from the BucketAccessRequest to the Bucket if it's brownfield, and from the BucketRequest to the Bucket if it's greenfield. Now it's clear that deletion of the BucketRequest always leads to deletion of the Bucket.
A: However, this still has the many-to-one problem, and the other problem is discovery of the bucket name. In this model, the BucketAccessRequest in every namespace somehow has to know the name of the Bucket, and it needs to be known up front, before the BucketAccessRequest can be created, since bucket names are auto-generated using UUIDs.
A: If we were to port the same BucketRequest and BucketAccessRequest to a new cluster, it wouldn't just work, so that was the problem of portability. So we came up with the approach that we could maybe point to the Bucket directly from the BucketAccessClass. Because the BucketAccessClass is an admin-controlled resource, we can put the responsibility on the admin to work with these UUIDs and go ahead and set up the class appropriately.
A: So the user resources, the BucketRequest and BucketAccessRequest, would actually be portable; only the admin resources, the BucketClass and BucketAccessClass, wouldn't be. There was one problem with this approach: there would be one BucketAccessClass for every bucket, and for every access pattern on that bucket, and the user wouldn't be able to access a bucket until a BucketAccessClass had already been created for that bucket.

A: We were exploring more solutions when we came up with what we finally thought was the best approach, and I still think it's the best approach. We wanted to keep things simple in terms of how we manage creation and deletion, and what we came up with was: every BucketRequest will only point to one Bucket.
A: So it's a one-to-one mapping: there would be a Bucket for every namespace that wanted to use it, and a corresponding BucketRequest that points to just that Bucket. In terms of deletion, deleting a BucketRequest would delete its corresponding Bucket, and whether that deletes the backend bucket or not depends on one parameter in the Bucket, which we were thinking of calling either deletion policy or release policy, and which would be either delete or retain.
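[Editor's note] To make the one-to-one shape concrete, here is a rough sketch of what these resources might look like as manifests. This is illustrative only: the API group, field names (deletionPolicy, the bucketRequest back-reference), and values are assumptions based on the discussion, not the final COSI API.

```yaml
# Hypothetical sketch of the one-to-one model under discussion.
# All field names here are illustrative assumptions, not the final COSI API.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketRequest            # namespaced, user-facing
metadata:
  name: my-bucket-request
  namespace: app-team-1
spec:
  bucketClassName: fast-object-class
---
apiVersion: objectstorage.k8s.io/v1alpha1
kind: Bucket                   # cluster-scoped, created by the controller
metadata:
  name: bucket-3f9c2a          # auto-generated (UUID-based) name
spec:
  bucketRequest:               # back-reference to exactly one BucketRequest
    name: my-bucket-request
    namespace: app-team-1
  deletionPolicy: Delete       # Delete or Retain: whether deleting this
                               # object also deletes the backend bucket
```

Under this sketch, deleting my-bucket-request deletes bucket-3f9c2a, and because deletionPolicy is Delete, the backend bucket would go away too; with Retain, the backend data would survive.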
A: The question that came up with this approach was sharing buckets, that is, going from greenfield buckets to brownfield buckets. The way this model works, the way we envisioned it, would be: for the greenfield case, a user would create the BucketRequest, which would end up creating the corresponding Bucket; then the user would create a BucketAccessRequest pointing to the BucketRequest, and that would create the BucketAccess, and they would be able to use it. This is all within one namespace.
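[Editor's note] As a sketch, the greenfield access flow described above might look like this from the user's side; again, the API group and field names are hypothetical, not the final COSI API.

```yaml
# Hypothetical greenfield access flow, all within one namespace.
# The BucketAccessRequest points at the BucketRequest (not the Bucket),
# so the user-facing manifests stay portable across clusters.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketAccessRequest      # namespaced, user-facing
metadata:
  name: my-bucket-access-request
  namespace: app-team-1
spec:
  bucketRequestName: my-bucket-request     # resolved to the Bucket internally
  bucketAccessClassName: default-access    # admin-defined access policy
# The controller would then create a cluster-scoped BucketAccess holding the
# granted credentials, which the pod consumes (e.g. via a mounted Secret).
```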
A: Now, the user-related objects, the BucketRequest and BucketAccessRequest, are portable in this model, but only for the greenfield use case; actually, let me not say "only". Now, for the greenfield-to-brownfield use case, when a user would like to access the same bucket from another namespace, they would have their own copy of that Bucket.
A
The
first
question
that
came
up
was:
how
would
the
user
represent
a
brownfield
bucket
or
an
existing
bucket,
and
how
would
that
get
copied
over?
How
would
that
get
presented
or
somehow?
How
would
that
end
up?
Creating
a
copy
of
the
bucket
and.
B: I mean, that's how it's done with snapshots. If you want to share a snapshot with someone in a different namespace, you have an admin clone the snapshot content object for you, and then you can create a volume. You can bind your own namespaced objects to the one the admin created, if he tells you the name of it.
A: I think so. So here I've represented the model that you just described: the admin copies over the Bucket for the new namespace, then the user can go ahead and create a BR for the assigned Bucket, and then create a BAR, a BucketAccessRequest, for that BucketRequest.
A: That would lead to the creation of the BucketAccess, and then the BA can be used in the pod.
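[Editor's note] A sketch of the brownfield sharing step just walked through; the existingBucketID-style field is purely hypothetical, standing in for however the admin-created copy would reference the backend bucket.

```yaml
# Hypothetical brownfield sharing: the admin clones the Bucket for a second
# namespace, and the user binds to it with their own BR/BAR pair.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: Bucket
metadata:
  name: bucket-3f9c2a-team2      # admin-created copy for the new namespace
spec:
  bucketRequest:                 # reserved for app-team-2's BucketRequest
    name: shared-bucket-request
    namespace: app-team-2
  existingBucketID: prod-data-3f9c2a   # backend bucket being shared (hypothetical field)
  deletionPolicy: Retain         # deleting this copy must not delete the shared data
```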
B: The name of the non-namespaced object, yeah, which you can't know unless the admin tells you.
A: So in this workflow, every time a bucket needs to be shared, the admin would go ahead and clone the Bucket for the new namespace. So I did some research, and Rob here, he's on the call today, is an...
A: To start out with, I want to say that there are obviously use cases for sharing buckets across namespaces, but in terms of admin workload, I would like Rob to describe what kinds of workflows he's seen related to buckets, and we'll just go from there.
F: Before we do that, could I ask a question here? You talked about greenfield, and then you're talking about greenfield-to-brownfield, and the obvious one missing here is pure brownfield. I suspect that it's just the leading edge that's different: with pure brownfield you're manually provisioning the very first one, but once it's provisioned, it's the same as the greenfield-to-brownfield case. Your idea of copying buckets, is that the thought?
B: Well, or just another sort of sidecar controller, you know, that is the sharing manager or the importing manager, that maybe even takes yet another object that we haven't included in the graph and just does that work for you magically, so a human doesn't have to be involved, right? The point is this: this creates the primitive, and then you can build on it to make it better, right?
A: Now, using the primitives that we're going with in this workflow, we can always automate it.
A: Hey Rob, can you go over what you were discussing on Friday, I think, or on Thursday after the meeting, when we just had a quick chat? You were explaining what the workflows look like and how common it is to do greenfield-to-brownfield. I think we all kind of agree that greenfield-to-brownfield is not the most common case, but it seems like everyone should hear about your experience, at least.

G: Sure. So, I guess, a bit of a backstory: I'm an SRE, and we manage multiple clusters and multiple applications per cluster, and we have a very self-service model for our tenants.
G: So with that as context, all of our applications, for the most part, are contained within a single namespace. So we have a tenant whose application runs in a single namespace on a cluster that we've provided, and that tenant has the ability to define the resources they need, including S3 buckets.
G: We do have some applications which are made up of multiple components that end up in their own namespaces, that talk to each other and work together as a single entity. But even in those cases, when they allocate and provision any of their resources, whether it be an RDS instance or an S3 bucket or whatever, that information only resides within the namespace that needs it.
G: You know, we don't have any use cases at the moment where a pod, or part of an application, from one namespace needs to talk to the data, or to a pod or even an RDS instance, that is part of another namespace.
G: For the most part it's based on namespaces, but, as I said, we do have one application that's made up of multiple components, so that's where it gets a little bit murky: a component could have its own namespace, and that component is really part of a larger whole that is the application, and by itself it doesn't do all that much. But that's not a hugely common use case.
C: I mean, for example, logging infrastructure that has its own...
G: You know, so, I mean, we run all this stuff on OpenShift. So there is a logging component in OpenShift, and then, if that is not sufficient, we do have a procedure to use an external logging system, like CloudWatch or something like that, if we need to, but those are provided by us as our infrastructure.
A: Yeah, speaking from my own experience, it's similar.
G: Sorry, go ahead. No, I was just saying those resources will be independent, right? An application team will have their own, say, AWS or GCP or whatever account, and CloudWatch, or whatever logging solution we're going to use, will be created for them in their account, and then they can access it. But we don't have a singular global resource that's going to be available across multiple tenants.
F: So, just a quick check here. I think your input is useful and illuminating; I guess my question is: do we think it is pivotal, that for some reason we are making a decision here that depends entirely on that input, or do we think that it more just sort of reinforces the decision we're already making here?
A: I think, one, it gives us some real-world knowledge of how things are being done, and two, it at best...
F: Reinforces, right. I guess what I was trying to get at is that I wouldn't want to be picking apart Rob's case and saying, "oh well, maybe not everybody's like this", because I don't think we're saying that this is the right solution only because of his input, right?
F: It's rather that here is a real-world situation, and I guess the main issue here, the one you're kind of using it to head off, is the issue people have with the multiple-BucketRequest idea, right? By having one per namespace, and having the mapping be one-to-one to the non-namespaced object, that's really what you're trying to head off arguments against; is that right? Right, and I have to say, I don't have any argument against this, per our earlier conversation.
F
Maintaining
that
one-to-one
through,
I
think,
cleans
up
a
lot
and
brings
it
a
lot
more
in
line
with
sort
of
standard
process.
The
the
only
concern
I
think,
we've
all
addressed
as
can
solve
later,
is
that
this
is
just
a
little
bit
more
admin
overhead,
but
automatable
overhead
right.
The
model
itself
doesn't
appear
to
have
any
fundamental
issues,
and-
and
if
somebody
disagrees
with
that
statement
that
I
would
definitely
like
to
hear
that.
C: No, but I do have a question about that. I would consider this use case of brownfield-only, and all the cases of sharing buckets between namespaces, to be, maybe not current practice, but just a very basic capability, and I would just consider including it in the COSI roadmap, right, to have an automation for this.
C: So the brownfield automation, I think it's a worthwhile automation for COSI to provide, you know, some way of just automating that piece, because I don't think it's really something very advanced, or anything that will be covered in any other way, right? Otherwise it will just be very custom.
A: Yeah, I'm agreeing with him; wow, sweet. If anyone has any objections, please speak up now.
A: Great. So, is Sid on the call?
A: All right, no worries. So Xing is here. What we can do next is, if Jeff was also here... Jeff, are you here?
A
So
let's
update
the
cap
based
on
this
and
let's
inform
everyone
on
the
sleek
storage,
cozy
channel
and
also
basic
storage,
that
you
know,
we've
updated
the
cap.
Let's
say
we
do
it
by
tomorrow
and
I
would
like
all
of
you
to
review
the
cap
and
you
know
approve
it.
If
you
think
it's,
if
it's,
if
it's
ready
and
yeah
we'll
go
from
there.
F: I did want to say one thing: your evolution of this, that presentation you gave at the beginning, was well done.
A: Thank you so much, thank you so much. I also want to appreciate everyone in the community: Andrew especially, Ben also, and David. I think you helped us see a lot of use cases that we missed, so thank you.
A: Well, I wouldn't say a lot; it's not like we missed too many, but some key ones, like making this reference one-to-one. So I appreciate everyone's input.
H: Yeah, I second that. I really appreciate having a better understanding of what workload portability means, especially in terms of referencing other resources.
A: Oh yeah, and a special shout-out to Jeff. Many of you don't know this, but Jeff spent the most late nights really fleshing this out again and again and again, and that's how we really got...
A
You
know
rigor
into
our
process
into
developing
this,
so
yeah,
all
good
now,
so
we've
got
we've
got
a
good
model
to
go
forward
with.
The
next
thing
I
want
to
kind
of
discuss
is
what
progress
we've
made
in
terms
of
developing
this,
the
programming
side
of
it.
A
So
we've
got
some
of
the
members
of
our
our
current
active
programming
team.
On
this
call,
let
me
see,
I
don't
know
if
yeah
so
on.
This
call,
we've
got
rob
me,
we've
got
srini
and
we
have
two
other
members.
Krish
is
also
here.
So
krish
is
one
of
them
and
there's
one
more
person
named
rajesh,
so
we've
all
been
actively
developing
and
we've
got.
We've
got
a
small
amount
of
code,
and-
and
it's
you
know
it's
in
very
basic
working
quality
right
now.
A
We
invite
all
of
you
to
contribute
and
also
do
code
reviews
yeah.
Once
once
we
get
the
official
repository
yeah
we'll
be
tagging
all
of
you
for
code
reviews,
so
look
forward
to
that.
A: Right, we're using private repos right now, and that's what I mean: once it's official, we'll be able to do that. So, this meeting is supposed to be a 30-minute meeting; I think there was one day, two weeks ago, where it went on for two hours, but today we can finish at 30 minutes, since we have a reason to end it right now. I'll see you all on Thursday, and...
E: For the KEP, for completeness: there are some new requirements for the KEP, and there is certain information that you need to fill in. I'm not sure if you guys have updated that.
E: Okay, so let me find the information; I'll just put a link on that KEP.
H: Okay, that's fine, yeah. Xing, maybe just put a link in the sig-storage-cosi channel. Okay, also, you know, Sid's not here, but at least at one point we discussed sort of this non-implementable, or provisional, approval as a... I think.
E
Yeah,
I
think
it's
that
mentioned
that,
so
maybe
the
st
this
letters
is
the
provisional
right
is
that
is
that
provisional
right
now
not
implemented
by
the
provisional.
E: And then, I think he was saying that that way we don't have to go through API review; that's what he was saying, right, and we can get... yeah. But I think, for the resources, maybe you don't need that; I think those are mostly for, like, the alpha or beta stage. But Sid did put up a PR for this in this quarter. So anyway, I'll put a link there and you can take a look; it's basically something...
A: I think we have agreement on the design; we need to update the KEP, and the approval happens when, you know, people approve it on GitHub, first of all. So yeah, I think it should naturally follow what we have discussed.
H: Today I'm trying to figure out how much of the formal structure of the KEP is needed; there's testing, and there are some other sections in the KEP, that we've just skipped so far, right?
A: Yeah, but if it's just something that needs us, you know, to update the KEP, and we're not discussing anything to do with the design, then we can easily do that. If it's, like, write a test plan, we can do that, yeah. I think that's it from my side; if anyone else wants to bring up something, please.
A: All right, 11:30, exactly on time. See you guys; I will see you all on Thursday, and yeah, look forward to our ping on Slack once we have the KEP updated.