Description
Kubernetes Storage Special Interest Group (SIG) Object Bucket API Design Meeting, 13 January 2022
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Sidhartha Mani (Minio)
A: Okay, so we had an API review done. Michelle looked at our KEP, our enhancement request, and today I'd like to just go over the comments, hear your perspective on them, and have you help me answer some of the questions that are in there too.
A: So I'm just going to quickly go over all the unresolved comments as of right now.
A: So, Blaine, did I answer your question correctly? Oh yeah, this is not a question. Yeah, I fixed it.
A: Okay, so this is a feature request that we had early on. I want to go over it quickly; Jeff brought it back up in a review a few weeks back. When creating a bucket, Jeff, who's from Red Hat, and a bunch of others from Red Hat wanted a mechanism to provide a prefix for that bucket. The reason is that whenever a bucket is created by COSI, it uses a UUID or a generated name, and for management purposes, if you wanted that bucket to have a human-readable name, some form of identifier, we thought of the idea of having a prefix.

A: But later on we decided that it need not be a part of the first version of the API, the main reason being that having a prefix is still limiting.
D: When you're talking about a label here, it seems like you're talking about a Kubernetes label, unless you're talking about, like, meta...

C: Metadata on the bucket itself, okay. So the specific COSI driver should have access to information that it can then store, in whatever vendor-specific way it wants, on the bucket itself. And then, if you want to do auditing or tracking or cost-center analysis, whatever it is you want to do with that information, you have it on the bucket, right?
A: So the cloud providers at least provide a mechanism to specify some sort of metadata. In the case of something like MinIO, we don't have that; however, we don't see such a request coming in, and we could add it just as quickly. Where I'm coming from is that the S3 protocol, which MinIO speaks, does not have a standard for specifying metadata, and I assume the other two major protocols, Azure and GCP, also don't have a standard for specifying metadata. That being said, they all have mechanisms to do it, so the driver could still do it if it wanted to. But for on-prem solutions, I'm fairly certain that none of them have this.
A: Right, right. I mean, CreateBucket is part of the S3 standard, so that's where I was coming from; it's reasonable to think of it as part of the creation process. But you're right, S3, the protocol standard itself, shouldn't care about administrative stuff.
A: Yeah, and the driver can come up with any naming scheme it wants. It can be a contract between the user and the driver. So, for instance, the driver could say: specify this label, and we'll use that as a bucket identifier, or as some part of the bucket name. Like in volumes, where we have topology to say this volume belongs to this rack or whatever, similar to that we could have an implementation that the backend needs to deal with.

D: This is something that we've seen with Ceph using the predecessor to COSI, the lib-bucket-provisioner library: Ceph currently doesn't have a way of tagging buckets with information, and so there's a corner case.
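The label-as-contract idea above, where a driver-advertised label supplies a human-readable piece of the backend bucket name, can be sketched roughly as follows. The label key and naming scheme here are invented for illustration and are not part of any COSI spec:

```python
import uuid

# Hypothetical label key a driver might advertise as its naming contract.
NAME_HINT_LABEL = "example.com/bucket-name-hint"

def backend_bucket_name(labels):
    """Derive a backend bucket name from Kubernetes object labels."""
    generated = uuid.uuid4().hex[:12]
    hint = labels.get(NAME_HINT_LABEL)
    if hint:
        # Human-readable piece supplied by the user, plus a unique suffix.
        return f"{hint}-{generated}"
    # No contract label set: fall back to a purely generated name.
    return f"bucket-{generated}"
```

The unique suffix is kept even when a hint is given, so two claims with the same hint still map to distinct backend buckets.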
D: One could, say, manually create a bucket in the backend; I can just manually create it, and then I can specify a COSI bucket for it, like in a brownfield use case. And maybe the conversation has evolved a little bit; there were a couple of months of meetings I wasn't able to attend. You could then say: I want to create a bucket for this manual bucket that exists. And so the backend says: well, this bucket already exists, so I don't need to do anything. But then, when that bucket is deleted, the previously user-created bucket could be deleted by the lib-bucket-provisioner. So there is some benefit in the backend being able to understand: was this a bucket originally created by COSI, or was this something that was created by the user?
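The corner case described comes down to the backend recording which buckets COSI itself created, so that deletion can skip user-created (brownfield) buckets. A toy sketch of that bookkeeping, with an in-memory stand-in for the object store (nothing here is real COSI or Ceph code):

```python
class ToyBackend:
    """In-memory stand-in for an object store that tracks bucket ownership."""

    def __init__(self):
        self.buckets = {}  # name -> True if created by COSI, else False

    def ensure(self, name, created_by_cosi):
        # Brownfield: if the bucket already exists, adopting it is a no-op
        # and must NOT overwrite the recorded ownership.
        if name not in self.buckets:
            self.buckets[name] = created_by_cosi

    def delete_if_cosi_owned(self, name):
        # Only delete buckets COSI created; leave user-created ones alone.
        if self.buckets.get(name):
            del self.buckets[name]
            return True
        return False

backend = ToyBackend()
backend.ensure("manual-bucket", created_by_cosi=False)  # user made this
backend.ensure("manual-bucket", created_by_cosi=True)   # COSI adopts it: no-op
```

With the ownership bit preserved on adoption, the delete path can never destroy the pre-existing user bucket, which is exactly the lib-bucket-provisioner gap being described.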
C: But a user can't directly import a brownfield bucket either, so an administrator would have to be involved, or a controller acting on behalf of the administrator. And you could set up a policy that says every brownfield bucket created in this way will have the retention policy set to Retain; therefore it's impossible to do the wrong thing.
C: Not for management, right? That's what the deletion policy is for.
A: Okay, all right, the next one: "An earlier draft defined bucketID as the name of the backend bucket. In this version, I don't see where an admin can see the name of the backend bucket. Maybe this simply isn't needed for any admin use cases." I see; I need to go look at what he's talking about.
D: And kind of on that topic: this is on a Bucket, correct, not on a BucketRequest?
C: Well, the driver chooses the ID of the bucket, yeah. So the user chooses the name of the BucketClaim; COSI chooses the name of the Bucket, which becomes the bucket name that gets passed down to the COSI driver. The driver chooses the ID, and then COSI stores that ID and refers to the bucket by the ID thereafter.
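The naming chain just described (user names the claim, COSI names the Bucket, the driver returns an ID that COSI records) might look like this end to end. The object shapes are illustrative, not the actual COSI types:

```python
import uuid

def provision(claim_name, driver_create_fn):
    """Sketch of the naming chain: claim name -> Bucket name -> driver ID."""
    # COSI derives the Bucket name from the claim, with a generated suffix.
    bucket_name = f"{claim_name}-{uuid.uuid4().hex[:8]}"
    # The driver creates the backend bucket and chooses its own ID.
    bucket_id = driver_create_fn(bucket_name)
    # COSI records the ID and refers to the bucket by it thereafter.
    return {"metadata": {"name": bucket_name},
            "status": {"bucketID": bucket_id}}

# Toy driver that derives the ID from the requested name.
bucket = provision("my-claim", lambda name: "id-" + name)
```

The point of the chain is that each party only chooses the one name it owns: the user never sees or picks the driver-side ID.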
A: That's a good question. We went back and forth on this a little bit. bucketID can be, you know, should be, specified when it's a brownfield bucket.
E: Like, in the status field: it should always be in the status field, because that's where you get that ID back. Then, in the spec field: if it's a brownfield bucket, you need it in the spec field. That's the only time you need it there; that's the brownfield case, right? Or is it the other way around? Oh, yes, right. So this...
E: Yeah. So what we did in the snapshot case is that the ID is always in the status, for both brownfield and greenfield, and then it's only in the spec, I believe, in the brownfield case, where you already know it ahead of time.
A: Okay, so should we have different names for it? Is there any... so.
A: Right, because, coming from the perspective of a user: if I were to see two fields with the same name in both spec and status, it's always unintuitive. Very few people are going to read the API definitions.
E: Right. So in our case it's pretty clear, because we actually have a source field, and this source can be either... in our case it's the snapshot, right? So the source could either be a volume handle, which is standard provisioning, or it could be a snapshot handle, which is static provisioning; that's the brownfield case.
D: I do sort of like the idea that Xing brought up from prior experience, of having a sort of source sub-spec, where we can choose: is the source of this bucket a new creation, or is the source a pre-existing bucket? If it ends up being just one field of difference, that might be a little heavy-handed. Otherwise, I might propose something like preExistingBucketID for the spec, and then for the status I think bucketID is probably...
E: In the dynamic case we don't really have a source, right? This is new; this is not the same as the snapshot case. In this case you don't really have a source; it's dynamic provisioning, right? So basically it's only in the brownfield case that you have this source, because in our case we actually have two fields and you can only choose one in that source. So in this case it looks like it's just one thing, right?
A: Yeah, there's only one field, right, and I think I understand what Blaine was trying to say. For a second I even thought we could have a compound resource, something like a structure, where it's not flat anymore but a structure that says the bucket source, dynamic or static, and the bucketID.
E: So I think this field will basically only be populated if it's static provisioning. If it's a dynamic provision, then this field will not be used, right?
A: Yeah, I think in that case, let's keep it simple until we get a better solution.
A: Okay, so let me say it like this: the purpose of this API is to intuitively convey to the user what it's meant for, and as far as spec and status go, if we were to have it in two places, I think we're adding more confusion.
A: Would it be better to just leave it here, just have it in spec?
E: Not good, I think. That's the design in the existing PV, and I think that's actually very confusing. I think it's actually still better, cleaner, to have it in both places. It's very clear that the spec is where you put what you want; it's never going to change. The status will be the one that's changed by the controller. Otherwise, this one field can be changed by two different things: one is the controller, one is the admin.
E: That's actually not very confusing; that's why we actually did it that way in the snapshot.
D: The lib-bucket-provisioner design uses something like this, where bucketID is in the spec, and we've seen in Slack that lots of Rook users have been confused about how to know what their bucket is called, when it's populated after the fact in the spec rather than in the status.
A: That's true, yeah, that is true. Okay. My only requirement is that it's clear to the user and that the API satisfies all our technical needs. So, coming from that perspective, I would like suggestions on the name, then, for it in the spec.
A: Maybe we don't call it bucketID. In the status we can leave it as bucketID, and in the spec, should we call it sourceBucketID? requestedBucketID? Well, not "requested", right, because it's the bucketID of an existing bucket.
E: I thought we were actually trying to avoid saying that word. For a PVC, or for the snapshot, we actually stopped saying that word; we just use "pre-provisioned".
A: And in the case of brownfield buckets, existing buckets, will we fill the status bucketID as well?
D: I also agree with that. I think the status is the place where we want users, or administrators, or whoever, to go to get information about how to connect to the bucket, or possibly do backend management of the bucket. If we make some custom logic for users or admins to figure that out that is dependent on what the type is, then I think it will end up confusing people. It's a lot easier to create tooling if it's always populated in the status.
A: So, in the spec: when creating a bucket, the user creates the BucketClaim, and that is where they provide... that is the only name that the user gives. Okay? And then COSI creates a Bucket with a generated name, you know, a UUID. If COSI is creating the bucket, then this field will be left empty in the spec, and under status we are adding a new field called bucketID.
A: The same thing goes into the status, and here we're going to rename this to existingBucketID.
A: So if there is an existing bucket that the admin wants to use via COSI, they would fill in this field. But regardless of how the bucket is provisioned, either by creating a BucketClaim or by the admin creating this Bucket object, the status field bucketID will always be filled.
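Put together, the scheme agreed on here (spec carries an admin-supplied ID only in the brownfield case, while status.bucketID is always populated) could be sketched like this. Field names follow the discussion but are not a final API:

```python
def reconcile(bucket, driver_create_fn):
    """Fill status.bucketID for both greenfield and brownfield buckets."""
    existing = bucket.get("spec", {}).get("existingBucketID")
    if existing:
        # Brownfield: adopt the admin-supplied backend bucket ID.
        bucket_id = existing
    else:
        # Greenfield: the driver creates the bucket and returns an ID.
        bucket_id = driver_create_fn(bucket["metadata"]["name"])
    # Regardless of the provisioning path, status.bucketID is always set,
    # so users and tooling have a single place to look.
    bucket.setdefault("status", {})["bucketID"] = bucket_id
    return bucket

green = reconcile({"metadata": {"name": "b1"}, "spec": {}},
                  lambda name: "id-" + name)
brown = reconcile({"metadata": {"name": "b2"},
                   "spec": {"existingBucketID": "pre-existing-id"}},
                  lambda name: "id-" + name)
```

This captures Blaine's tooling argument: a consumer never has to branch on greenfield versus brownfield, because the status answers in both cases.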
B: Okay, and actually both are the same value; there's no difference, I know. So I have a follow-up question: when the admin is creating things for the brownfield case, if that means creating the Bucket resource, does he need to know the backend bucket name, or is that not needed and the driver will figure it out?
A: Okay, so is your question... let's define the scenario: are you saying there is a bucket already existing in the backend, correct?
A: Please feel free to ask at any point, yeah. We like to have very open discussions here. We don't have a structured meeting, for one good reason: it's so that everyone can participate easily, at least until we get to a more standardized API. This is how we...
A: Oh, I think this is the same question you're asking, bucket access. Okay, so what Jeff is saying here...
A: Yeah, Jeff is talking about exactly the use case you brought up. Is this what you're bringing up, Jeff, as in: if the bucket was created in a different namespace, then this would be empty and they'd have to set the bucket name?
A: Makes sense. Okay, so there's a spelling mistake here; I did this, but I think he's just saying to improve the comments and make them clearer rather than implicit. It should be a certain name, and they should also be optional, right? Yes, this should be optional. So they're talking about this field here.
A: "This should be serviceAccountName, but serviceAccountName is never seen..." Yeah, this is optional. So there are two kinds of authentication: one is using an access key and secret key; the other is using service accounts, where the service account tokens are mapped to...
A: Someone who's requesting access simply specifies the service account, and then they implicitly get access. This form of access control allows us to do credential rotation more easily. Yeah, that's basically one of the main advantages: instead of doing access keys and secret keys, authentication is provided by a service account.
A: All right, so yeah, the question was: is it optional? Yes. If you're doing it through an access key and secret key, this won't be specified; if you're doing access control through service accounts, then the access key and secret key will be optional. So both of these fields can be made optional.
D: Okay, and both can be used simultaneously as well? They're not mutually exclusive?
A: They are mutually exclusive, actually. I can't think of a situation where you'd want both, or how that would even work, because you would only need to authenticate one way.
C: If you hand down more than one, how would they know what to do? We want to be unambiguous. I think Sid was arguing for a mechanism to prefer one over the other if you want to, but then, when you actually create a BucketAccess object, it's only going to be the one that you requested, and the workloads are required to support both.
D: I don't... yeah, I mean, I'm certainly open to differing opinions on that, but I wonder if nesting them under something like an authenticationMethod, or some sort of sub-spec, could be helpful, and then that spec could be explained as: pick one of these methods, and only one. I know we...
C: No, no. If we did this, whatever was doing the validation would just validate this, right? The thing validating objects would just look at the discriminated union and make sure exactly one sub-member was filled in, and if it was zero or two, it would fail.
D: Yeah, I guess my opinion is that having a subfield that is a discriminated union is a better experience for users, because otherwise they see that there are two fields and they're both optional; a user might fill in neither, or they might try to fill in both.
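The validation C describes, exactly one member of the union filled in and a failure on zero or two, is only a few lines. The nested field names below follow Blaine's hypothetical authenticationMethod sub-spec and are not the final API:

```python
def validate_authentication(spec):
    """Require exactly one member of the authentication union to be set."""
    auth = spec.get("authenticationMethod", {})
    # The two mutually exclusive members under the hypothetical sub-spec.
    members = [auth.get("credentialSecretName"),
               auth.get("serviceAccountName")]
    filled = [m for m in members if m]
    if len(filled) != 1:
        raise ValueError(
            f"exactly one authentication method must be set, got {len(filled)}")

# Valid: exactly one member present.
validate_authentication({"authenticationMethod": {"serviceAccountName": "sa-1"}})
```

An admission webhook or CRD validation rule doing this check removes the "fill in neither, or fill in both" failure mode entirely.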
A: Okay, there are two things here. One is to add something like authentication... oh, we do have it. We have this in the class, though, because we wanted to specify it there. So we've been convinced on that. So the question now is about having a nested field or just having it be flat.
A: I'm strictly coming from the point of view of a user that's going to create this resource: would they benefit from having these two fields be nested, rather than just flat under the spec?
A: Oh, I'm done with these two, I'm sorry, I don't have it in front of me. Oh: credentialSecretName and the credential serviceAccountName.
C: Oh right. So yes, I had hoped that the controller could fill all this stuff in, but for the reasons you explained, the user has to fill those in, yes.
A: All right, so yeah, credentialSecretName is for the bucketInfo.json. Fair enough. And serviceAccountName: why is that mandatory?
C: Because some drivers might only support IAM-based authentication. "But for those that don't, do we need to fill it in?" They don't need to, but that's the whole thing: the user doesn't know what the driver is. The user is making a request with zero knowledge of what's on the backend. They don't know whether the service account is going to get ignored or used, so they have to specify it on the chance that it gets used.
C: Yeah, I mean, because these are going to be Kubernetes objects, people are going to build workflows around them that are meant to be portable, which means they have to work across multiple drivers. So it's not okay to just say, "Oh, I know that my particular backend doesn't use service accounts, therefore I'm going to leave it blank," because if someone else tries to use it and they leave it blank, it might not work for them.
A: Yeah, okay. That, to me... because if a user is forced to set a secret name and/or a service account name, and it doesn't really get used anywhere, that's just as confusing.
A: I agree with that, but if they're not using a service account, if they're not using IAM, then it doesn't have to match the pod's service account; this service account need not match, and it'll still work. But the user can't know that. They'll figure it out, and that's when it'll seem like a leaky system. Actually, no, I mean...
C: They can't know it in advance, and when we're talking about automated workflows, there is no "figure it out later". You have the workflow, and you want to be able to press the button and get the desired result, and the workflow has to just do the thing that's going to work across any implementation, right?
C: I'm saying that whether the service account might be ignored in any particular Kubernetes cluster is something that users can't be aware of, so they have to code their workflows in such a way that it's always present. And I agree that you can have a default mechanism that says you can leave it blank and get basically the same effect as if you left it blank for the pod.
C: But that is the same thing as having specified the name of the default service account. And if you leave it blank for the BucketAccess but don't leave it blank for your pod, and they don't end up matching, then that's a bug in the user's workflow, right? Because it might work on some clouds and not work on other clouds.
C: So we really have to find a way to ensure that they do match in practice. And this is where our lack of integration with the pod spec is going to really hurt our design, because if we had kubelet-level support for doing the right thing, we could just figure this out, right?
A: Fair enough, but we will eventually end up in the kubelet, so let us hope. I hope so, yeah. I believe it: one day, a few years from now, probably. All right, does that address everyone's questions?
A: The credentialSecretName has to be specified every single time, because it is going to contain the actual secrets. Well, it's going to contain a structure called bucketInfo.json, and this structure contains information about the bucket: the bucket name, the authentication method, the backend URL, the access key and secret keys if it's secret-key-based authentication, or it's going to have a pointer to the service account, saying this is service-account-based authentication.
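As a rough sketch, a bucketInfo.json of the kind described might carry fields like these for the two authentication styles. The key names are illustrative, not the published COSI format, and the credential values are placeholders:

```python
import json

# Illustrative payload for access-key/secret-key authentication.
key_based = {
    "bucketName": "my-bucket-1a2b",
    "authenticationType": "key",
    "endpoint": "https://objectstore.example.com",
    "accessKeyID": "EXAMPLEKEYID",       # placeholder, not a real credential
    "accessSecretKey": "EXAMPLESECRET",  # placeholder
}

# Illustrative payload pointing at a service account instead of keys.
sa_based = {
    "bucketName": "my-bucket-1a2b",
    "authenticationType": "serviceAccount",
    "endpoint": "https://objectstore.example.com",
    "serviceAccountName": "my-workload-sa",
}

def render_bucket_info(info):
    """Serialize the structure as it might be mounted into a workload."""
    return json.dumps(info, indent=2)
```

Either shape gives the workload everything discussed above in one file: where the bucket is, what it is called, and how to authenticate to it.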
A: And the serviceAccountName, which is the second field (there's a typo here), will be used when the BucketAccessClass has the authentication type set to IAM. And the serviceAccountName, we just argued, also has to always be set in order to be future-proof and, what's the word for it, portable. Portable, yeah, there you go, thanks. All right, so yeah, it should be serviceAccountName. We can explain this here.
A: We have only five minutes; I think we should end it at this point rather than start a new discussion, because I don't want to leave in the middle of one. Since we have only five minutes: does anyone have any questions, anything else that they want to bring up?
B: So actually I have one. Last week I was not there for the meeting, but in the discussion on the recording, it was mentioned that reference policy is being moved out. Do we have anything concrete on how we are going to share between namespaces, or something like that?
A: Yeah, so let me pull that up from last week. I actually added the notes of last week's conversation, which are pinned to the SIG-Storage COSI channel.
A: And yeah, I'm just going to pull it up quickly to remember what it is that we discussed. Okay, so: one, we found that ReferencePolicy is not being used by the Gateway API, and we said we'd redefine self-service. So ReferencePolicy was going to be used to allow people from other namespaces to access buckets created in one namespace, which is really self-service.
A: The third method is selector-selected namespaces: we specify a selector on the bucket saying these are all the places it can be shared with or used from, and that's how it would be. And all of this would be specified by the user when creating the bucket. So when a user creates the bucket, they would say: hey, you can share with everyone; or you can share only with these people; or there's only access from within the namespace.
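The three sharing modes just listed (share with everyone, share with selector-chosen namespaces, or keep the bucket namespace-local) amount to a small policy check. Everything here, including the field names and the flattened list standing in for a real label selector, is illustrative:

```python
def can_access(bucket, requesting_namespace):
    """Check a requesting namespace against the bucket's sharing mode."""
    spec = bucket["spec"]
    mode = spec.get("sharing", "SameNamespaceOnly")  # hypothetical field
    if mode == "All":
        return True  # share with everyone
    if mode == "Selected":
        # Share only with the listed (selector-matched) namespaces.
        return requesting_namespace in spec.get("allowedNamespaces", [])
    # Default: access only from the bucket's own namespace.
    return requesting_namespace == spec["namespace"]

bucket = {"spec": {"namespace": "team-a", "sharing": "Selected",
                   "allowedNamespaces": ["team-a", "team-b"]}}
```

Because the creating user sets the mode on the bucket itself, no separate cross-namespace grant object (the ReferencePolicy role) is needed.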
B: Okay, so we are also moving away from ReferencePolicy, and that's okay? I guess Tim had suggested using the reference policy; did you get feedback from them, or...?
A: Where Tim was coming from: Tim was saying that the Gateway API is facing the same issues we are, so have a look at how they solve it. That's all he was suggesting; he wasn't really pushing for ReferencePolicy as much, and the Gateway API itself has moved away from it. I haven't really followed up with them on that, but I should. That being said, it is kind of a heavy API.
A: Yeah, there was one request by Michelle which was pretty interesting.
A: Wait, workloads... yeah, so here: "Has there been any thought on how a retention period for compliance use cases can be supported? Will that conflict with deletion policy?"
A: So with MinIO, with S3, you can specify a bucket lifecycle, or sorry, an object lifecycle; there's also a bucket lifecycle, where you say that a bucket cannot be deleted, or an object cannot be deleted, until it's been...
A: ...you know, for a month after it's created, or something like that. You can also say things like: once it's deleted, it goes into a kind of holding state for a month before it's truly taken out, so you can recover it in the meantime. Stuff like that, to prevent accidental deletion, exists. And then...
A: There's this other policy around ransomware, where someone within the company which owns the bucket is pissed off at the company or something and decides to hold the data ransom, or tries to delete the data. Against that, also, we have some policies.
A: I mean, it can be something... you could argue that it's at the bucket level. Where I'm coming from: this is the S3 standard, not a MinIO thing. I had these use cases in mind already, but I was kind of punting them until after alpha, basically even after v1.
C: But I mean, they can be opaque parameters in your BucketClass, and the driver can implement them down in the storage layer. And then, if you get a request from Kubernetes to delete a bucket that conflicts with the retention policy, you can just fail the deletion until it no longer conflicts, right?
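C's suggestion, treating retention as opaque driver parameters and simply failing Kubernetes-initiated deletion while the window is still active, can be sketched like this. The parameter name and clock handling are invented for illustration:

```python
import datetime

def try_delete(bucket, now):
    """Refuse deletion while an (opaque) retention window is still active."""
    # Opaque class parameter, interpreted only by the driver.
    days = int(bucket["parameters"].get("retentionDays", 0))
    expires = bucket["createdAt"] + datetime.timedelta(days=days)
    if now < expires:
        return False  # still retained: fail, the controller will retry
    return True       # retention satisfied: deletion may proceed

bucket = {"parameters": {"retentionDays": 30},
          "createdAt": datetime.datetime(2022, 1, 1)}
```

Returning failure rather than queueing the delete keeps the API honest: the Bucket object stays until the backend is actually willing to remove it, which is how the conflict with deletion policy resolves itself.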
A: Yeah, I mean, I would still like to discuss this a little bit more when we have more time, but I thought this was an interesting request coming in so early. But, Ben, yeah, your approach should mostly be enough.