Description
Kubernetes Storage Special-Interest-Group (SIG) Object Bucket API Design Meeting - 29 July 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
B: So last week we were talking about - well, Guy brought up this point that, in order to manage buckets in the backend once they're dynamically provisioned by COSI, it would be good if we could pass along some metadata from the bucket request to the driver, so that some information about the bucket is stored in the backend.
B: So say the bucket request came from namespace1, and namespace1 was somehow added as metadata on the backend bucket. Then administrators could filter by namespace and, say, delete or perform some operation on a group of buckets like that.
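A minimal sketch of the admin-side filtering being described here. The tag key and the bucket data are invented for illustration; a real driver would define its own tagging scheme:

```python
# Sketch: filter backend buckets by a namespace recorded in their tags.
# The tag key "created-by-namespace" and the sample data are hypothetical.

def buckets_in_namespace(buckets, namespace):
    """Return names of buckets whose tags record the given namespace."""
    return [
        b["name"]
        for b in buckets
        if b.get("tags", {}).get("created-by-namespace") == namespace
    ]

backend_buckets = [
    {"name": "bkt-7f3a", "tags": {"created-by-namespace": "namespace1"}},
    {"name": "bkt-91c2", "tags": {"created-by-namespace": "namespace2"}},
    {"name": "bkt-004d", "tags": {}},  # brownfield bucket, no metadata
]

print(buckets_in_namespace(backend_buckets, "namespace1"))  # ['bkt-7f3a']
```

The brownfield case discussed below is visible here: a bucket created outside COSI carries no such tags and cannot be grouped this way.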
B: We discussed a few solutions, but we didn't have a solution for one scenario. I'll get to the solutions, but before that I want to highlight one scenario where we do not know what the behavior should be: the case where a brownfield bucket is created, which has no bucket request.
B: So does my question make sense? We want to pass some metadata from the bucket request to the driver. Say, for instance, you set the name and namespace of the bucket request and send it to the driver, and the driver can set those values as metadata fields in, say, S3.
D: There's this issue here - well, Ben was probably going to say the same thing. The issue is portability versus flexibility, right? If portability is the priority, you don't want a bucket request to pass any kind of object-store-specific metadata down, eventually, to a driver, because you lose portability. But then we lose flexibility, because we have use cases where the user knows something that they want and could give the driver that information, and instead we have to figure out some out-of-band way to communicate that.
B: I think that's a good point, Jeff. We'll have to somehow set the expectation clearly that these metadata fields shouldn't be used for taking any actions - they should just be passed along. They should be opaque, basically, and passed along to the backend; otherwise it won't work. Otherwise we would be breaking portability.
C: I would much rather have things that are intended to be vendor-specific proprietary escape hatches be harder to implement - make the vendor do the harder work - not make that easy, because it just makes our life hard. If we provide explicit fields for passing opaque information down at create time, then we have to document it, we have to support it, and we have to deal with all the ugliness and non-portability that inevitably results.
B: Good point. So, Guy, could you go over the motivation for why we were talking about passing some metadata from the user-created bucket request all the way to the driver? You brought it up.
E: Yes, sorry, sure. I think the motivation is simple. It was for garbage collection - when you look at your cloud account after using these automated workload deployments, which come with those dynamic provisioners, and then you tear down these clusters, it's not always in an orderly fashion. Sometimes it's just turning them off, or just terminating everything.
E: So then these accounts have all these buckets - or, in the CSI world, I guess it would be those volumes - and they are really unidentifiable. It's really something we've had in an environment where we had multiple teams using large cloud accounts. It was really being requested that we allow users to tag these cloud resources somehow.
E: That was the motivation I mentioned last time, and Ben and I followed up a little on Slack, but I want to go back to that. Description is one of those fields, even in cloud resources - I mean, not in Kubernetes at all. If you go to AWS, you'd find that most of the resources there just have some form of a description field, and it's pretty clear to me, when I'm filling it out, that nothing's going to happen as a result.
E: Maybe there is some confusion, but I don't think there is, and this is kind of where I came from: if I could somehow tag my cloud resources with such a description, I think I would be being nicer to my administrator later on, when he needs to go back and do this garbage collection. So I'm not saying it's a must - I really think that the name and namespace are a pretty good start.
E: It's fine. And labels become a whole different story for portability, and open up a complete can of worms in terms of what can be done with them, if we actually decide to just pass the labels from the request on to the driver. So this is why I thought description works a little better in that sense. But I don't think it's the most important thing.
E: So if you guys feel that description doesn't make enough sense in the environments you work with, that's fine - name and namespace will do fine, and the driver can also decide to add some tags from the cluster itself, right - the cluster name, cluster UID, something like that. So that's my new thinking about this after a week.
B: I think the motivation makes a lot of sense, and - I think Jeff mentioned it last week - this is the kind of day-two problem that needs to be addressed if people start using COSI to manage buckets.
B: I see, because - yeah, I guess that makes sense, because with PVCs, with our PVs, we deal with the same issue, don't we? Pretty much. And does someone know how admins deal with PV pollution, if that's the right term here - when there are too many PVs, they all have retention policies set to Retain, and nobody knows where a PV came from?
C: I have an answer - I was just going to mention that Trident deals with this today. We stamp information on our objects that at least includes the Kubernetes cluster name and the driver name - stuff that is known to the driver. It's not volume-specific, but that's not really what you're interested in when you're collecting garbage. You just want to know which Kubernetes cluster this belonged to, so that, if that Kubernetes cluster is gone, you know whether you can safely delete it or not.
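The Trident-style approach just described - the driver stamping the same cluster-level metadata on every resource it creates - could be sketched as follows. The tag keys and the TagSet shape (modeled on S3's PutBucketTagging) are illustrative, not Trident's actual implementation:

```python
# Sketch: build a cluster-level tag set a driver could stamp on every
# bucket it provisions. Keys and values here are hypothetical.

def cluster_tag_set(cluster_name, driver_name):
    """Build a tag set identifying which cluster/driver owns a bucket."""
    return [
        {"Key": "kubernetes-cluster", "Value": cluster_name},
        {"Key": "provisioner", "Value": driver_name},
    ]

tags = cluster_tag_set("prod-east-1", "example.com/s3-driver")
# A driver could pass this as Tagging={"TagSet": tags} to an S3
# PutBucketTagging call at provisioning time.
print(tags)
```

Note this is purely driver-side and bucket-independent, which is exactly why it sidesteps the portability concerns raised above.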
E: There's, like - the other side knows about it, right? The attach side - there is a call to...
C: A ControllerPublish call that could, in principle, do something to the backend. A lot of drivers don't do anything, because they don't need to. But yes, there's an opportunity, when the pod starts up and a volume attachment is created, to do something on the backend in response to that specific attachment. With COSI we'll have no such thing, because there's no controller and there's no explicit attachment object. There's the grant-access - right, when you grant access, that is your opportunity to say "hey".
E: Yeah, but the access is also something that can get left out, leaked. But in the volume world I guess you also have the actual connection to the volume in some sense, right? It's usually some stateful, live connection between the attached sides - some form of attachment. I know on the cloud, for example - I don't know if it's...
C: In Kubernetes there's a VolumeAttachment object that gets created when a volume is attached to a node, and it has a lifecycle and everything, and that specific object is what everything is attached to. The driver doesn't need to do anything in response to the VolumeAttachment object, but it's a hook: one gets created when the attachment is established, and destroyed afterwards.
E: Yeah, so if you go to, say, volumes on AWS, you see them attached or detached. You know if they're being used right now - it's pretty clear to know, at least, that they are not being used right now.
E
That
the
pv
doesn't
exist
like
and
it's
not
mounted
or
whatever
it
could
be
very
different
behaviors.
But
in
some
ways
you
can
you
can
pretty
much
know
that,
like
you
have
a
bunch
of
volumes
which
are
you
know
old
enough
or
been
attached
or
haven't
been
attached
for
a
month
or
so
and
that's
it
right.
It's
there
are
ways
which
are
good
enough.
I
guess
to
filter
those
that
that's
my
experience.
B: From a volume-system standpoint, yeah, I see what you're saying. That kind of tight coupling is good for this kind of tracking, I guess, but it has its own issues. Otherwise, I think what Ben suggested makes a lot of sense: just have driver-level parameters that are not bucket-specific, not object-specific, but just driver-specific, and regardless of which bucket it is, the driver always attaches the same information.
B: The driver always attaches the same information, say when creating a bucket or when performing some operations on it. So for a particular driver you get the same fields showing up in your backend, and if we say that's the right way, we don't have to make any changes - we give that as a recommendation, or we just leave it up to the driver to do it.
D: I had a delay - when you say driver-specific, can you just repeat what you said about driver-specific?
B: Okay, so what I'm saying is: earlier we were talking about passing along fields from the bucket request all the way to the backend, so that an admin who's, say, perusing through the list in the backend - say, the S3 UI - will be able to classify where each of the buckets came from, because the bucket names are all going to be UUIDs.
B
So
you
know
the
original
proposal
was
that
we
somehow
passed
along
fields
from
the
bucket
request
back
to
the
back
to
the
bucket
in
the
back
end.
But
we,
you
know
just
like
you
mentioned.
There
are
some
problems
with
that.
One
is
that
that
whatever
fields
we
pass
back
to
the
driver
to
send
to
the
back
end
is
probably
going
to
be
abused
by
the
driver
to
you
know
to
do
vendor-specific
stuff.
We
don't
want
that.
B: That way you still get a higher level of granularity about where a bucket came from, without compromising or messing with the portability aspects of creation.
D: I was just thinking more about what Guy said about description. That's a pretty neutral field, and a kubectl describe verb could grab it. We could have description as a first-class field in a storage class - it could be optional, called description.
D: Maybe it's not automated - it could be - but at least there's a chance that the admin can set a meaningful description in the bucket class that ends up being copied to the bucket, and it could prove useful for the use cases Guy was mentioning.
D: Sorry, yeah, I have bad home internet right now. In summary: add a first-class field to the bucket class called description. It's optional. If it's present, pass it when the bucket instance is created - copy description from the bucket class to the bucket instance.
D: And it could be used by administrators to garbage collect, or for other purposes. They'll at least know something about the bucket, because, like you said, we can't infer anything from its name.
C: But that's exactly what I don't like, because by creating a first-class field, it means that we have to include it in our APIs, document what it does, etc. It seems simple, though - description...
C: What I'm saying is: you can already do that with an opaque field in your bucket class, right? You can have an opaque field called description, you put whatever you want there, and if you know that your particular driver processes that field and does what you just described, that's good - but it doesn't require that all the other drivers do the same thing.
E: So I guess we could say that drivers are expected to make that field available to the administrator, and if the administrator cannot find this field in their provider's interfaces, they will basically turn to the provider and ask, "where do I find this information that I put there?"
C
No,
no,
I
what
I'm
saying
is
like
if
it's
up
to
the
driver
implementer
to
to
track
this
information
or
not
track
it.
If
we
don't
have
a
stance
that
you
must
do
it
or
must
not
do
it,
then,
then
it's
by
default,
an
implementation
dependent
thing
that
can
be
covered
by
the
existing
opaque
fields.
In
this.
D
Well,
it
sounds
like
I,
I
misused
the
word
first
class
or
it's
it's
another
philosophical
issue.
A
first-class
field
in
our
ap
can
occur
in
the
cozy.
Api
should
mean
something
that's
actionable
by
cozy
that
cozy
uses
that
information
in
that
field
and
cozy's
not
going
to
use
description,
and
I
think
that's
what
got
ben's
issue
was.
We
have
a
field
in
there
in
our
api,
that's
not
used
by
cozy
and
that
doesn't
make
sense.
So.
E: From every perspective, we have to be very specific: the administrator has to know exactly what to expect, and the driver has to know exactly what to provide. Yes, so that's fine. I think, if we agree that this is as useful as can be, we can say it should be available to the administrator on the backend system. We don't have to define exactly where, in the sense that it's a different system - it's external.
E
But
but
then,
if
right,
but
I
think
like
what
what
jeff
is
adding
to
this
mix
is
saying:
if
we
don't
specify
anything,
it
really
becomes.
You
know
you
know
something
that
every
driver
would
implement,
probably
very
differently
right,
because
that's
the
way
things
happen
so.
C
You
know
that
that
you
do
these
things,
because
other
people
have
have
found
that
it
worked
well
and-
and
you
know
it's
a
good
pattern
to
follow,
but
but
it
I
would
just
keep
it
out
of
the
normative
spec
that
says
you
know
that
this
is
because
we
can't
describe
what
you're
supposed
to
do
with
it
right.
It's
just
a.
It
would
be
nice
if
you
did
something
useful
with
this
information.
It's
like
well,
that's
not
clear.
E: I think it's fine - I mean, I already agreed. It's not that we cannot just advance without this, for sure. But I think, Ben, you're looking at it from the COSI controller's perspective, which is very valid, but, on the other hand...
C: Someone will wonder what happens if they put "fubar" in this field, and we're not going to have an answer for them - we don't know what happens if you put "fubar" in that field, because we didn't say what's going to happen. So the documentation isn't going to be useful, and the person reading it is going to scratch their head and say, "why is this field here?"
C: AWS treats the description the same way every time - they have control. They can guarantee you that description will be treated in a particular way. If we don't require that drivers do something specific with description - hand it back to you at certain times, or make it available in certain places - then it is pointless.
C
If
we
don't
close
the
loop
and
like
say
when
you
put
it
in
here,
it's
going
to
come
back
out
here,
then
we're
not
doing
anything
with
it
effectively,
because
it's
an
open
loop
right,
you're,
putting
information
in
and
it's
going
somewhere
and
you'll,
never
see
it
again
depending
on
what
so
so,
here's
I
guess
here's
another
way
to
look
at
it
is
when
someone
is
reading
the
cozy
docs.
There
is
no
way
for
such
a
field
to
make
sense,
because
we
can't
force
the
vendor
to
do
anything
specific.
C
You
would
see
something
that
said
if
you
put
description
in
your
storage
or
in
your
bucket
class
parameters
like
we
will
do
exactly
this
with
it
in
this
specific
driver.
Like
oh,
okay,
like
that's
something
I
want,
because
I
know
that
I
can
go
put
it
fubar
in
this
field,
and
I
know
where
I'm
going
to
see
it
on
the
other
end
for
this
specific
driver,
and
so
then,
then
you
know
what
you're
doing
but
like
for
a
different
driver.
It's
going
to
do
something
different.
Your.
E
So
think
about
like
a
reclaim
policy
right,
can
you
really
know
that
your
driver
implemented
it.
B: Drivers that don't support bucket metadata - like S3... S3 doesn't support - well, yeah, it does.
B: All right, okay, so there's this one thing that's been on my mind, and I'm not sure if we discussed it or even brought it up - at least not from my memory. The topic I was thinking of was: how do we set bucket parameters?
B
I
think
we
brought
it
up
a
few
weeks
ago.
Maybe
something
about
path,
style,
access
versus
domain
style
access,
but
similar
to
that
we
there
are
a
lot
more
parameters
that
need
to
be
set
while,
while
creating
the
bucket-
I
don't,
I
don't
believe
we.
We
came
to
a
resolution
on
that,
but
but
before
before
we
start,
I
I
want
to
open
up
the
floor
to
you
know
there
are
some
new
people
here.
Does
anyone
have
any
questions
or
any
topics?
B
That's
that's
relevant
to
them
or
the
drivers
that
they're
writing
and
you
know
or
any
concerns
that
you
want
to
bring.
B: Got it, all right. Okay, so thanks for joining us, all the new people - welcome. And if you have any questions or suggestions, please feel free to speak up.
B
We
encourage
that
now,
let's
get
into
this
topic
of
setting
bucket
parameters,
so
I
believe
we
talked
about
this.
A
few
weeks
ago,
we
were
talking
specifically
about
path,
style,
access
and
domain
cell
access.
There
are,
there
are
some,
let's
forget
about
that
specific
field,
but
in
more
in
general,
there
are.
There
are
some
fields
or
some
parameters
that
need
to
be
set
while
creating
the
bucket
and.
E: You're mentioning things which are policies and configurations for a bucket, but you don't describe them as being part of a class - you describe them as some form of "I want to mutate the bucket". I want something dynamic...
B: That's what I'm getting to. So the problem statement - I'll bring it up here, and we need to figure out a solution for this. The problem is pretty simple: we need a mechanism to specify parameters that make sense, or that are required, while creating the bucket, and we need this mechanism to be extensible, because we're going to have more and more parameters being added over the course of this project.
B
So
how
does
how
does
pvs
and
pvcs
do
it
today
like
what,
if
I
wanted
to
say,
give
me
an
nfs
volume
or
give
me
a
iscsi
volume,
particularly
with
this
raid
configuration
or
this
erasure
coding
configuration
how?
How
how
do
you
do
that?
Just
basically
storage.
C: Because they're not expected to be meaningful to anyone other than the plug-in - they're proprietary and opaque. And the application that's consuming the bucket - no, I mean, the application doesn't understand the specific driver that's underneath it. It just knows that it's speaking S3 to something. It could be speaking S3 to Amazon or NetApp or OpenStack Swift - it doesn't care, and they're all going to have wildly different options.
E
But
in
the
volume
world
you
also
have
like
the
the
providers
are
also,
in
some
cases
providing
a
driver,
like
an
actual
you
know,
say,
kernel
driver
in
some
way
right
something
running
within
the
hosts
of
the
of
the
cluster,
if
needed,
to.
E
Right
sometimes,
I'm
saying
like
if
you
need
special
things
on
the
client
side
say
we
discussed
before
things
which
affect
the
sdks
in
terms
of
s3
right
these
cases,
but
for
for
the
bucket
world,
it's
just
an
sdk
which
is
like
part
of
the
container
world
which
forces
us
to
kind
of
communicate.
This
information
back
to
the
container
somehow.
E
So
we
we
had
this
when
we
discussed
the
credentials,
you
know
refreshing
and
all
that
yeah
you
know
revoking
all
the
all
the
things
that
kind
of
had
that
required
us
to
communicate.
Something
to
the
you
know
some
client
side
right
but
like.
C: So for all of the S3 access parameters - the access key, the secret key, the bucket name, the endpoint URL - we came up with a list of things that we have to communicate back, and they're specific to S3, and...
E: ...is being used. So I'm not saying this is the perfect example. I was just saying that in the bucket world, with COSI, we don't have the opportunity to deploy anything on the client side - or we might have an opportunity, but it's not a best practice. You usually don't.
C: I just want to be careful, because a lot of these things don't need that level of support. If it's purely a client-side thing or purely a server-side thing, we don't need to get involved. Where we do need to get involved is where the client and server really do need to agree on something, and it's not something that only one or two vendors support, but something that a plurality of vendors are going to support.
E
Was
there
something
like
said?
Was
there
any
any
missing
thing
that
you
wanted
to
touch
on
for
v1
or
was.
B
It
like
forward-looking
yeah,
we
were
forward-looking,
so
we
one-way
is,
you
know
our
stance
has
been
so
the
kind
of
things
that
we
need
to.
We
need
to
address
going
forward
and
we
too
are
again
extensions
to
the
current
design
like
stuff,
like
I
mean
bucket
parameters,
seems
pretty
clear
now
and
then
and
then
figuring
out
more
of
bucket
sharing
and
if
bucket
mutation,
if
if
we
get
into
it,
if
that's
going
to
be
something
we
support
at
all,
but
we're
getting
getting
back
into
the
current
discussion.
B
So
that
was
the
overall
picture,
but
the
current
discussion.
So
the
question
I
have
is:
are
there
going
to
be
any
fields
that
are
that
are
returned
by
the
by
the
driver?
It's
going
to
be
vendor
specific.
So
currently
we
allow
the
driver
to
give
back.
You
know
a
structure
that
that
goes
into
the
application
part
now
now
that
that
structure
is
there
any
possibility
that
that
there's
something
going
to
be
there?
That's
vendor
specific,
because
because
it's
just
letting
the
driver
pass
fields
out
of
there.
B: Well, it doesn't have full control, right? So now the SDK is expected to work with different vendors specifically - not just with the S3 protocol, or a specific protocol, but with specific vendors.
E: Kind of, right - yes, I think it somehow makes sense. The driver is the one connecting the workload to the backend bucket; that's the responsibility of the driver, to be able to hook them up. And by hooking them up as best as possible, I don't think we break portability, because another driver might hook it up to another bucket based on the spec - but differently, which is fine, which is still portable, just a different implementation.
E: So my sense is that it makes sense that the driver does this, just like in the volume world: as a provisioner I could deploy a driver, kernel modules, whatever I needed, to make this volume appear as a seamless block device to my applications, and that's all I need to provide. But the problem with S3 is that it's a pretty wide protocol, with some data-path, IO-path functions.
E: Yeah, I think so, in some way, because otherwise we're kind of opening this S3 up to being non-standard S3. It's going to behave differently - on one driver I'm going to get an error, and on the other one it's going to...
E: But we're not specifying - we're not covering all of the S3 API and saying inventory should work and this shouldn't. We're not doing that - like a support matrix for the protocol.
C: So is it not okay to say the minimum is just that you can get and put and delete, and do all the regular things to your bucket objects, and draw the line there? And then, if there's a bunch of other things we want to support, define some APIs in the future - S3 extensions, say - and then a way to allow you to specify which ones you want, or which ones you need, or to find out which ones are available.
C: ...a capability-query process that says: does this particular driver support feature X? If so, then you get the extra stuff; if not, we define how the fallback works. Then you can document all that on the other side and say you may or may not get this functionality depending on the driver. But the important thing is that stuff that is available across the board is useful to end users, because that's the portable stuff.
C
If
you
know
that
every
driver
has
this
feature,
then
you
can
just
use
it
without
fear
of
it
ever
breaking.
When
you
move
to
another
cluster
the
moment
you
start
depending
on
stuff
that
may
or
may
not
be
there.
You
start
to
lose
portability,
and
we
have
lots
of
examples
of
that.
Over
on
the
csi
side,
kubernetes
we've
already
have
like
18
different
capabilities
that
define
you
know.
What
exactly
is
the
sidecar
going
to
do
in
this?
This
weird
situation,
you
know,
features
that
yeah.
B: Yeah, I think we're going to have to start documenting that, or come up with a capabilities-like concept here too. Also because, recently, we had a customer whose security policy doesn't allow them to use access keys and secret keys. They only do STS tokens - that's the only allowed mechanism for them to authenticate while receiving or pushing data. But do they also need...
E: This is not necessarily an issue in alpha. It might be - I mean, we discussed STS a bunch of times - it might be an issue for beta or something like that, where this customer tests the alpha in a test environment or something and says, "well, without STS I'm not going to use it, but the rest seems fine." Something like that might be a good process anyway.
C: The other possibility is that we just say "this is what COSI is". I know how security policies pop up at companies: someone sees a vulnerability somewhere and then declares from on high that thou shalt not do X, but very little thought is often given to the whys, and to what other mechanisms could have been used to address the underlying problem.
C
I
wonder
if
we
just
get
cozy
out
there
and
get
people
to
start
using
it,
and
then
you
know
someone
and
it's
using
access
cues
for
cozy,
but
it
it
doesn't
as
long
as
the
the
vulnerability
doesn't
exist
for
a
different
reason.
You
could
say
well,
you
know
using
cozy
is
fine
right,
like
sts,
if
you're
using
s3
directly,
but
if
you're
using
it
through
cozy,
then
just
do
whatever
cozy
does
and
you're
not
going
to
have
problems
like
to
actually
find
out
if
that's
possible
or
not.
B: Yeah, maybe we should look into something like that. Anyway, we're out of time - let's continue next Thursday.
E: Okay, one thing - maybe we can follow up next time - that we didn't look at much: versioning might require some client-side SDK configuration. I'm not sure it requires it, but maybe there is some case, like if the application itself accesses versions - not if it's just in the background. Anyway, not for now, but I think STS...