Description
Kubernetes Storage Special Interest Group (SIG) Object Bucket API Review Meeting - 21 October 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Sidhartha Mani (Minio)
A
We already had answers for the majority of the questions at that point. I would say we weren't sure how to clearly communicate them then, but now we do, and so came the subsequent review, which was about two weeks ago. Maybe three.
A
Let's see, let me show you what was said. All right, okay, so this is from 15 days ago, about two weeks. "I have a bunch of concerns still, but much less than before. Some are big: access control, cardinality, etc." So access control is what I was saying earlier: who gets to use which bucket class, and who gets to bind to a particular bucket if it is unbound?
A
A
Those are the big things that he sees, but mostly it's okay to proceed. I think there's a ton of work to do between alpha and beta.
A
B
A
Yeah, because the lifecycle of credentials and the lifecycle of buckets are entirely independent of each other.
A
So yeah, we do need two separate resources for that. Okay, so let's review them one by one, starting with naming. These are the open comments, and this is one of the main ones. A bucket claim is the thing that holds the bucket alive, or whatever claim holds the thing alive, so he thinks we should rename BucketRequest to BucketClaim, and what is right now called BucketAccessRequest to BucketAccess. "But I still hope we can merge them" — but okay.
B
A
Yeah, right. So in the current version of the KEP, the one in this PR before having addressed these comments, what we've said is that BucketAccessRequest is renamed to BucketClaim. So when he says bucket claim here, he means BucketAccessRequest.
C
B
A
B
A
Yeah, we have a strong reason to have them be separate.
A
Which is really what I was saying earlier: the lifecycle of credentials and the lifecycle of buckets are independent of one another, and a resource is expected to represent the lifecycle of one object and help manage it. In this case we couldn't possibly combine them. The only case that can be made for combining them is if that single resource, which represented both the creation of a bucket and the usage of a bucket, had a many-to-one relationship with the bucket: each bucket claim, or multiple bucket claims, could point to the same bucket, and if the bucket didn't exist already when the claim was made, it would be created; if it already exists, then the claim just points to it. I guess something like that could be done, and then the bucket claim could always be used for holding access credentials for that bucket.
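A minimal sketch of that create-or-bind idea, in Go; every type and field name here is hypothetical, since this combined-resource model was only a thought experiment, not the KEP's actual design:

```go
package main

import "fmt"

// Bucket is the (hypothetical) cluster-scoped object; in this thought
// experiment claims have a many-to-one relationship with it.
type Bucket struct{ Name string }

// Claim represents both creation and usage of a bucket in the combined model.
type Claim struct {
	Name       string
	BucketName string
}

// reconcile binds a claim to an existing bucket, or creates the bucket
// first if it doesn't exist yet.
func reconcile(c Claim, buckets map[string]*Bucket) *Bucket {
	if b, ok := buckets[c.BucketName]; ok {
		fmt.Printf("claim %s bound to existing bucket %s\n", c.Name, b.Name)
		return b
	}
	b := &Bucket{Name: c.BucketName}
	buckets[c.BucketName] = b
	fmt.Printf("claim %s created and bound bucket %s\n", c.Name, b.Name)
	return b
}

func main() {
	buckets := map[string]*Bucket{}
	// Two claims pointing at the same bucket: the first creates it,
	// the second just binds to it.
	reconcile(Claim{Name: "claim-a", BucketName: "shared"}, buckets)
	reconcile(Claim{Name: "claim-b", BucketName: "shared"}, buckets)
}
```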
B
Well, we could make credential minting implicit, right, where it just happens just in time when a pod tries to bind to a bucket, and there's no Kubernetes object that represents it. It's just some state floating around inside the COSI driver.
B
D
B
A
A
So Google Cloud currently prefers workload-identity-based authentication, as in not access keys and secret keys, but some way to say that a request coming from a particular pod is indeed coming from that pod, and so it should have such-and-such permissions.
A
Right, right. Our stance on this has been that we will look into it when we go post-alpha, but I guess what he's saying is we should at least have a plan of what it would look like. Excuse me, I think that's what he's asking.
E
I think what he may be hinting at is that we should split, on one hand, the creation of authentication tokens, which on Google Cloud and AWS would be a no-op because the token already exists and is already exposed within the workload, and on the other hand, the granting of privileges to access the buckets to that token, with a little bit of hand-waving here, which would still be required on Google and AWS and others that have this integrated authentication.
A
So you're saying that... I think we're saying the same thing. Let me reiterate what you said, and let me know if I got it right: you're saying that Tim wants us to have two different styles of authentication.
A
Okay, so that's what it is. We've talked about this; we said we will do it after alpha, but maybe now it's time to actually go over how it's going to look. There are some unanswered questions in that space. In the access key and secret key case, it's pretty clear how we want to orchestrate the whole thing when a bucket claim is created. Well, our current, let's call it bucket access, and let's start using the new terms: BucketClaim will be the equivalent of BucketRequest, and BucketAccess will be the equivalent of BucketAccessRequest. Is that clear? I'm just going to take silence as a yes, okay. So currently, when a BucketAccess is created, we end up creating credentials and holding them in a Secret, holding the actual access key and secret key in a Secret object, and then plugging the Secret into the pod.
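A rough sketch of that minting flow with client-go; the Secret key names ("accessKeyID", "secretAccessKey") and object names are placeholders, not the spec'd COSI ones:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// mintCredentialSecret stores freshly minted object-store credentials in a
// Secret so a pod can mount or env-reference them.
func mintCredentialSecret(ctx context.Context, cs kubernetes.Interface,
	ns, name, accessKey, secretKey string) (*corev1.Secret, error) {
	sec := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: ns},
		StringData: map[string]string{
			"accessKeyID":     accessKey, // illustrative key names
			"secretAccessKey": secretKey,
		},
	}
	return cs.CoreV1().Secrets(ns).Create(ctx, sec, metav1.CreateOptions{})
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	sec, err := mintCredentialSecret(context.TODO(), cs,
		"default", "bucketaccess-creds", "EXAMPLEKEY", "EXAMPLESECRET")
	if err != nil {
		panic(err)
	}
	fmt.Println("created secret", sec.Name)
}
```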
A
The other authentication mechanism is somehow saying that a particular pod has a particular identity. The way that generally happens is through a metadata service, like on AWS. I don't know about other clouds, but on AWS at least there's a metadata service, which is like a reflection API.
A
So there's something called a link-local IP. I'm sure many of you are aware of this: it's a static IP, and depending on where you query that IP from, you get information about who you are. For instance, if I'm on instance one in Amazon, let's say, and I query my link-local IP and look for the instance name, I'll get the instance name of instance one; from instance two, hitting the same API, I'd get the instance name of instance two. So that's the metadata API. In Amazon, in order to talk to S3 with workload-identity-based authentication, the workload pings the metadata API to get back a token, an STS token (Security Token Service), and then uses that token to request access, to read and write objects from a particular bucket.
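A sketch of that lookup against the EC2 link-local metadata endpoint; the IAM role name is a placeholder, and this uses the older IMDSv1 style (no session token) for brevity:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// The EC2 instance metadata service lives at a fixed link-local address
// and answers differently depending on which instance asks.
const imds = "http://169.254.169.254/latest/meta-data"

func get(path string) string {
	resp, err := http.Get(imds + path)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return string(body)
}

func main() {
	// "Who am I?" — the answer depends on where this code runs.
	fmt.Println("instance-id:", get("/instance-id"))

	// Temporary STS credentials for the attached IAM role;
	// "my-role" is a placeholder for whatever role the instance has.
	fmt.Println(get("/iam/security-credentials/my-role"))
}
```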
A
B
I think the problem is what's going to work in every environment, right? Kubernetes has many different ways of deploying, all the way down to kubeadm on a VM. If what you've got is kubeadm on a VM, what tokens do you have that are meaningful, other than the ones Kubernetes creates itself?
A
A
All right: how would that API look? How would we go about this? Let's start with the simple question of how they would specify that they want a particular kind of authentication, from the user's perspective, someone asking for a BucketAccess to a bucket.
D
B
To say, you know, the admin who sets up the cluster knows where it is and what its characteristics are, and can configure good security practices, and then the workload can be portable; it doesn't care whether it's running on kubeadm or running in Google Cloud or AWS or wherever, it just...
B
G
D
A
Yeah, that's a good question. Let's look it up: workload identity, AWS...
D
A
Instance IAM, AWS... and let me also put in "metadata service" here. I'm just quickly looking up how it does it.
F
A
A
All right, so yeah, let me do some reading on this and figure out how exactly it does it, so that we can see whether it can be applied in some way to a different provider as well. So, Ben, to address your question, I just thought of this: it doesn't really affect portability, right? Because, again, it's an admin-controlled resource, and all the standard S3 clients, at least the S3 ones that I know of, support multiple forms of authentication. One of them is access key and secret key, and then there's a whole bunch of others. So, the AWS SDK, well...
B
A
Well, this is the standard SDK for AWS, aws-sdk-go; let's go from there.
A
A
I know MinIO gives you the same thing, because there is a hierarchy, an order of precedence, when it comes to picking up credentials from the environment, and the last one falls back to access key and secret key. There are process credentials, endpoint credentials, role credentials...
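aws-sdk-go expresses that precedence explicitly as a chain of credential providers; a minimal sketch, with the ordering chosen by the caller:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds"
	"github.com/aws/aws-sdk-go/aws/ec2metadata"
	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	sess := session.Must(session.NewSession())

	// Providers are tried in order; the first that yields credentials wins.
	chain := credentials.NewChainCredentials([]credentials.Provider{
		&credentials.EnvProvider{},               // AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
		&credentials.SharedCredentialsProvider{}, // ~/.aws/credentials
		&ec2rolecreds.EC2RoleProvider{ // instance metadata / IAM role
			Client: ec2metadata.New(sess),
		},
	})

	v, err := chain.Get()
	if err != nil {
		panic(err)
	}
	fmt.Println("resolved credentials from provider:", v.ProviderName)
}
```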
B
B
B
A
Yeah, if COSI tries to represent things like metadata API versions and IAM instance profile versions, or whatever that is, we'll end up in a hellscape that will be hard to manage.
A
Yeah, I think, okay. So this is the primary one, the main SDK. Let's look at MinIO's.
H
Okay, just a quick question: is it possible to leave that up to the user? They create a secret that you can just pass to the driver, or whatever the next layer is, and we say the user has created these credentials because they know the driver's requirements, and we just opaquely hand the secret contents to this next layer and call it a day.
H
B
Yeah, it is tough, and it's why we started with access key and secret key as all we support, because that's the lowest common denominator that we know we can guarantee 100%. The problem is that it stinks, and these other mechanisms are better. But then the question is: how do you guarantee those across the board? Because you don't want a situation where some workloads run on some clouds and not on other clouds, because then COSI is kind of useless, right? If it doesn't...
H
H
Yeah, I hear you, and I hear where you guys are going, but I'm just looking at it from the PVC and PV point of view, and a StorageClass point of view, where that level of request is not done at that level. It's actually pushed to the admin: you figure it out, and then we'll take those parameters and give them to the driver. You see what I mean? So in the PVC model, we don't bring all these things up to the user.
H
B
B
COSI needs to be able to make a similar guarantee: if you say "bind my pod to this bucket", then there's going to be something that you can expect 100% of the time, and then there'll be other things that may vary. But we want the thing that you get 100% of the time to be the case that everyone relies on enough to basically write a portable workload.
H
Just a quick thing, quickly. I completely agree with you, Ben, with what you're saying. It's just that in that flow, when the PVC gets bound and mounted in the pod, you still need the StorageClass, and the StorageClass is what provides that abstraction. Sorry for my daughter, yeah, but yes.
H
I was just going to say that I'm trying to avoid creating an all-encompassing object, because an API that abstracts everything is going to be a tough answer. So instead I'm trying to say we could do it the way the StorageClass does: it says the admin understands how the storage system works, and they put everything in there.
H
A
I mean, yeah, we have something similar. The issue here, the difference between PVC/PV and buckets, is that the wire protocol, the read/write protocol, is not standard. With PVCs and PVs, the protocol to access data is POSIX: you read and write through syscalls, and it's stable regardless of the provider; it doesn't matter who the back-end storage is. In the case of buckets it's different: if clients have an S3 client, that S3 client needs to support the kind of authentication we're asking for, and similarly, if it's a GCS client, it can't use S3-style authentication.
H
B
H
And I think, I'm going back to it: matching that version of the client with that version of the server is going to be really tough, right? I'm just saying let's pass that to them, because they need to pick it; they're going to have to be opinionated anyway. They can't take the S3 client and connect it, for example, to Azure's Blob, right? So they already have an opinion of exactly what they want.
A
Okay, so yeah, I just looked up the three major clients that are being used. boto3 is, I would say, the one that I'm sharing on the screen right now; this, I would say, is the second most used AWS SDK for S3. The first one is obviously the official AWS SDK. The third one, I would say, is MinIO. All three of them support instance-metadata-style access.
A
A
B
If we can be precise about exactly what the downward API will include, and we can ensure that that is a subset of what a standard S3 client can consume, then it does become the workload's responsibility to just say: well, if I'm not using a standard client, then I can go read the docs to see what I have to deal with.
A
A
We specify a credential chain; that's what we call it in MinIO, a credential chain. That is, you can say: pick up from AWS environment variables first; if not, pick up from MinIO environment variables; if not, do a third style, and a fourth, and so on. What if we were to do something similar with our bucket access classes?
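minio-go spells the same idea out as a provider chain; a sketch, with the ordering (AWS env vars, then MinIO env vars, then on-disk credential files) chosen by the caller:

```go
package main

import (
	"fmt"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	// Providers are consulted in order until one yields credentials.
	chain := credentials.NewChainCredentials([]credentials.Provider{
		&credentials.EnvAWS{},             // AWS-style env vars
		&credentials.EnvMinio{},           // MinIO-style env vars
		&credentials.FileAWSCredentials{}, // ~/.aws/credentials
		&credentials.FileMinioClient{},    // ~/.mc/config.json
	})

	client, err := minio.New("play.min.io", &minio.Options{
		Creds:  chain,
		Secure: true,
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("endpoint:", client.EndpointURL())
}
```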
B
B
E
A
You know, you can have 10, you mean? Like, so what if we said it like this: in the BucketAccessClass you specify all the mechanisms supported by the driver, and in the BucketAccess object, which the developer controls, they specify which exact authentication mechanism they need.
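A sketch of that split as hypothetical Go API types; none of these names or fields are taken from the actual KEP:

```go
package v1alpha1

// AuthenticationMode enumerates credential styles a driver can hand out.
// The values are illustrative, not taken from the COSI spec.
type AuthenticationMode string

const (
	AuthKeyPair          AuthenticationMode = "KeyPair"          // access key + secret key
	AuthSTSToken         AuthenticationMode = "STSToken"         // metadata-service / STS style
	AuthWorkloadIdentity AuthenticationMode = "WorkloadIdentity" // cloud-integrated identity
)

// BucketAccessClass is admin-controlled: it advertises every mode the
// backing driver supports.
type BucketAccessClass struct {
	Name           string
	SupportedModes []AuthenticationMode
}

// BucketAccess is developer-controlled: it picks exactly one mode, which
// must be in the class's supported set for provisioning to proceed.
type BucketAccess struct {
	Name      string
	ClassName string
	Mode      AuthenticationMode
}
```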
H
B
H
B
B
H
H
Right, so let me just ask you this way, Ben: do you think what Sidhartha just said would not apply? Because if the driver said "I support the following authentication models", and then, if you have them, you use them, and that's just a secret for you; it's not...
B
H
...the same library. Would that be okay or no?
B
No, no. I think we have to have a contract between us and the drivers, and we have to have a contract between us and the workloads, and I'm very much concerned that the contract between us and the workloads has to be much more strict, because that's where we get portability from, right? For COSI to have value, people have to believe that if they code their workload to it, they can write once, run anywhere, and to make that true...
G
Sorry, I had a question: I'm not sure how different this is compared to what we have with drivers for PVCs and PVs. We have a lot of features, right, that are optional, and not all drivers support everything.
B
But there the set of things that are non-optional is big enough that you can write your whole workload that way, not use any of the optional stuff, and you're fine: you don't need to resize, you don't need to snapshot, you don't need some other funky stuff, but all the basics work in 100% of cases with PVCs. If you code your workload to depend on a PVC, you can be sure that it's going to work anywhere.
A
I mean, is it fair to say that there is a change required in application code? Like, we're going to take the effort to go and update the APIs to read COSI credentials; if that's the case, I mean, then we could get to the point where we target the most important or most prominent SDKs for each of the clouds, just the main ones supported by the clouds themselves, and add a credential provider for COSI, specifically in the context of S3. What if we did something like that?
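Such a COSI credential provider could slot into an SDK's existing chain; a sketch against aws-sdk-go's Provider interface, where the mounted path and file schema are purely hypothetical:

```go
package main

import (
	"encoding/json"
	"os"

	"github.com/aws/aws-sdk-go/aws/credentials"
)

// cosiProvider reads credentials that a (hypothetical) COSI driver has
// projected into the pod filesystem. Path and schema are illustrative.
type cosiProvider struct {
	Path string
}

func (p *cosiProvider) Retrieve() (credentials.Value, error) {
	raw, err := os.ReadFile(p.Path)
	if err != nil {
		return credentials.Value{}, err
	}
	var v struct {
		AccessKeyID     string `json:"accessKeyID"`
		SecretAccessKey string `json:"secretAccessKey"`
	}
	if err := json.Unmarshal(raw, &v); err != nil {
		return credentials.Value{}, err
	}
	return credentials.Value{
		AccessKeyID:     v.AccessKeyID,
		SecretAccessKey: v.SecretAccessKey,
		ProviderName:    "CosiProvider",
	}, nil
}

// Static file contents: treated as never expired for this sketch.
func (p *cosiProvider) IsExpired() bool { return false }

func main() {
	// Slot the COSI provider ahead of the usual fallbacks.
	creds := credentials.NewChainCredentials([]credentials.Provider{
		&cosiProvider{Path: "/var/run/cosi/credentials.json"},
		&credentials.EnvProvider{},
	})
	_ = creds
}
```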
B
Either way, I'm just saying we have to have a very tight contract about what COSI will provide to pods when they try to attach to a bucket. It can't be: you might get any one of these 10 things, you might get something proprietary, you don't know what the hell it is, because no one can write a workload to deal with "proprietary".
B
H
Yeah, I think, and this is just my opinion, but, Sidhartha, I think you mentioned this: we already have a client that has made a decision to use a certain API to access a certain object store, so that already creates a model where it's no longer the same as PVCs, because it's no longer able to just immediately connect to any object store.
H
A
No, so, Luis, yeah, that's a very good point. The thing is, we're trying to standardize per protocol; our boundaries lie inside a protocol, or what we call a protocol, really a data API. So if it is S3, we're trying to say that all S3 clients can move between S3 providers, like MinIO versus Amazon itself, and we want to maintain that level of portability. We can reason about why: we can reduce the scope of portability and say it's not just the S3 protocol, it's S3 plus a particular authentication mechanism. But if there is an authentication mechanism that is AWS-only, let's say, or MinIO-only, and they do exist, just to be clear, then we lose even that little bit of portability that we were able to offer initially.
A
So the scope of portability keeps shrinking, and is probably lost at this point.
B
Let me reprise an argument that I made a long time ago, one that some folks like Luis might not have heard. The battle we're trying to fight here is that you can already use object storage in Kubernetes clouds today fairly trivially, right? Anyone can write a pod with an S3 client, inject their credentials through whatever mechanism they want, provision the bucket through some other mechanism, and off you go. Using object storage in Kubernetes is not hard.
B
A
Yeah, portability is one of the most important goals here, but again, the scope of portability can be slightly redefined. Just a thought experiment here, and I don't think we should go down this path, I'm saying this with that disclaimer: what if we said it's portable for a particular protocol and authentication mechanism, instead of just a protocol? Just like we saw, the top three SDKs all support the same forms of authentication.
B
H
B
H
What I'm trying to say is, I understand you're saying that then the COSI driver is not COSI-compliant, but if this releases that way and we start using it, what we are saying is that the company that doesn't move, the one that everybody wants to use, that's going to be the standard, and then...
H
H
A
H
No, no, no! What I'm trying to say is, it's not a technical thing; it is more of a community thing.
D
H
There's a COSI driver that's owned by maybe a small company that's trying to do the right thing, and then a COSI driver that is done by a large company that just says "we're not going to change our specs to satisfy whatever COSI says, sorry". So now the spec changes to satisfy the large company. You see what I mean?
C
D
G
B
...that really is portable, and then to come back and start layering in a bunch of optional, better ways of doing things, but understanding that you need a three-way agreement: the workload has to agree to it, the Kubernetes release, or rather the version of the Kubernetes controllers you have, has to support the capability, and the driver has to support the capability. If all three want this new optional feature, then you can enable it. But...
A
B
B
D
Everything is optional except, like, the mount, right: NodePublishVolume, that's the only thing that is required. Everything else, even querying, is optional. Yes.
A
D
A
A
H
...driver, and then the controller actually translated that from Kubernetes-speak to CSI-speak, and so it was able to come back saying the driver does not support that, and then the request is annulled.
A
B
Credentials, right. The difference is that the PVC API pre-existed CSI, and so we were building CSI against an already existing workload-facing interface that we couldn't change, right? We might have done things differently if we had an opportunity to change the PVC API when CSI was invented, so...
D
My understanding is that the PVC itself has to be portable, because that one is user-created, but the StorageClass can be different, right, because that's vendor-specific, and I think the PV too, since that's on the admin side, so that can change. But supposedly you can just move your PVC from one environment to another. That's my understanding, I mean.
A
Can we do the same thing here? In the BucketClaim object we specify "this is the exact access mechanism that I want", the driver's list of capabilities specifies everything it supports, and only if the mechanism is supported do we go ahead and provision your bucket.
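A sketch of that capability gate; the driver capability list and the claim's requested mechanism are modeled with hypothetical names:

```go
package main

import "fmt"

// driverCapabilities is what a (hypothetical) driver advertises at
// registration time; the claim asks for exactly one mechanism.
type driverCapabilities struct {
	AuthenticationModes []string
}

type bucketClaim struct {
	Name string
	Mode string // exact access mechanism the user wants
}

// supports reports whether the driver advertises the requested mode.
func (d driverCapabilities) supports(mode string) bool {
	for _, m := range d.AuthenticationModes {
		if m == mode {
			return true
		}
	}
	return false
}

func main() {
	drv := driverCapabilities{AuthenticationModes: []string{"KeyPair", "STSToken"}}
	claim := bucketClaim{Name: "my-claim", Mode: "WorkloadIdentity"}

	if !drv.supports(claim.Mode) {
		// Provisioning is refused rather than silently downgraded.
		fmt.Printf("claim %s rejected: driver does not support %s\n",
			claim.Name, claim.Mode)
		return
	}
	fmt.Println("provisioning bucket for", claim.Name)
}
```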
D
A
B
A
H
D
B
H
Yeah, I think that would be good, because then you could get driver developers or storage developers that are not Amazon to come in and say "I would like to add this, and this is my expectation". I think it'll still work.
B
A
B
F
A
I mean, the enumeration should be straightforward, because you take a particular official SDK for a particular cloud, and whatever the exhaustive list supported by it is, that's what we enumerate. We don't care about vendors so much; we start with a particular cloud's official API, because everyone else has standardized on the cloud API. So we can, I think, reasonably say... actually, now that I think about the homework, I was going to say that if you just go ahead and enumerate it for the cloud, that should be good enough, but then you see things like Ceph supporting, what is that called, Kerberos.