Description
Kubernetes Storage Special Interest Group (SIG) Object Bucket API Design Meeting, 03 June 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A: So, what time is it there? Different...
B: [inaudible]
A: Thanks for joining so late. Okay, so let's get started with the questions that you had, because it's late for you, and if you wanted to leave early I want to make sure we've covered all the topics that you had in mind. So let's go over that.
B: [inaudible]
A: Okay, sounds good. All right, so here are the two topics that I want to go over today. I don't know if...
C: [inaudible]
A: ...is here today, but let me ask Vianney: did we decide on what the Azure API should look like?
[crosstalk]
A: ...see him in the meeting. Yeah, okay, so this is where we left off.
A: So what we said was: we're going to have bucket access, and this is the downward API. The downward API is going to look like this: we're going to have something called BucketAccessInfo, and the BucketAccessInfo is going to go down all the way into the pod.
A: It's going to have three different fields: one for S3, one for Azure, and one for GCS. We defined the S3 field so far, and we said the S3 field is going to have credentials, endpoint, bucket name, and region. Then we got into the conversation about Azure, where it is pretty clear that we need an endpoint, and we were trying to understand what else we need. It seemed like there are multiple ways to authenticate, right?
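As a concrete reference for the shape A describes, here is a minimal sketch of the per-protocol BucketAccessInfo. The field names and values are assumptions for discussion only, not the final COSI API:

```python
# Illustrative sketch (not the final COSI API) of the BucketAccessInfo shape
# described above: one sub-struct per protocol, carried down into the pod.
bucket_access_info = {
    # The S3 section is the one the group has defined so far.
    "s3": {
        "credentials": {"accessKeyID": "EXAMPLEKEY", "secretKey": "EXAMPLESECRET"},
        "endpoint": "https://s3.us-east-1.amazonaws.com",
        "bucketName": "example-bucket",
        "region": "us-east-1",
    },
    "azure": {},  # endpoint plus auth fields; being settled in this meeting
    "gcs": {},    # still to be defined
}

# For a given pod, only the section matching its bucket's protocol is filled in.
populated = [proto for proto, fields in bucket_access_info.items() if fields]
```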
A: Sorry, can it... yeah. So, is the storage account name the same as the access key, or is it a combination of... yeah? Tell me.
D: Yeah, so... there is some kind of echo.
D: The storage account name is a bucket name, and you need a container name, which is basically a folder, to effectively write something. And what Guy said last time is, if you don't provide one, there is probably one by default; I'm not sure about that. But the thing is that the container is not a bucket; it's just a prefix, or a folder if you want, in the S3 terminology.
F: Right, sharing... let's see, something blocked the screen. Okay, now it's better.
D: So, for instance, just to quickly set it up: if you write into two different containers, you are bound by the performance of the storage account. Like, you know, you have a finite amount of... you have limits, basically, per storage account and not per container, because the container does not exist; it's just a prefix or a folder here.
D: So definitely the bucket name is the storage account name, and you can consider the storage account name as an access key, somehow, right. I see, okay. And the secret key? The secret key... so the problem is: in Amazon it's very clear, everybody has a secret key, and you can assign different policies to different secret keys; but in the Azure world it's not as simple.
D: So the only way to simulate users, or service users, in Azure is to generate SAS tokens. We double-checked with Guy today, and basically SAS tokens are temporary keys by definition. The thing is, you have to put an expiry time, an expiration time, and you can put something which is super far away, like five years, if you want. And this...
D: ...is comparable to STS tokens in the Amazon S3 world, where Amazon STS tokens are, I think, only 36 or 72 hours max for a temporary key.
A: A SAS token in practice has no limit in duration, so we can assume they are permanent keys, right? It's a bit far-fetched, but if we do so, we should not store the secret key, but we...
E: [inaudible]
A: So let me ask you this: is there an access mechanism where there's no... so you're talking about a certain kind of access mechanism where we have this concept of an expiring secret key, right, or an expiring credential. Isn't there some concept where we have, you know, long-lasting keys, or forever keys? No?
A: Can we have multiple root access keys, root credentials, for the same storage account?
A: Okay, yeah. We should probably set a default like that and move forward, at least for now.
D: Yeah, so I don't think it's... I mean, something we could do as well, in the case of Azure, is to basically give out the secret keys. There is a unique secret key, but there are two of them, so we could give out the second secret key, if you want.
[crosstalk]
A: I don't think that's a good idea, just in terms of, you know, designing a system. Why even have multiple? Why even have this COSI wiring if you're just going to share the same credentials?
D: Yeah, yeah. Sometimes you just want an app to run, and it's fine to share the key, and there is no security issue, because you basically give access to the bucket anyway. So...
D: No, but since we are also creating the bucket dynamically... so basically we are creating a storage account, right. So when we create a storage account, we are returned two keys; that's the default. And when we ask COSI to provision it, we write down the second key, and everybody will have the same key.
G: Right, but the issue there is: my expectation, as a COSI user, is going to be that if I create a BAR (BucketAccessRequest) and I bind a pod to it, and then later I delete that BAR, the secrets that were given to that pod should no longer work, right? Because the guy who consumed that BAR with his pod could have taken the secrets, exfiltrated them, and stashed...
G: ...access to the bucket independent of Kubernetes. So when I delete that BAR, my expectation is that anything that was using it can no longer access the bucket, because I've deleted the access request; the access should have been revoked. But with this proposal, if everyone's just sharing the key, deleting the BAR does nothing, because the key still works for anyone who...
D: So then we have to use SAS tokens, yeah. But revocation is another problem, so I'm checking if we can revoke a SAS token.
[crosstalk]
A: Understood. Okay, that sounds good then. All right, so let's move on from... okay. So can we call that enough? Do we feel like we have enough information about Azure to move forward and look at GCS, or is there something else we should discuss about Azure?
A: ...okay, all right. So just like what we did for Azure, we also need to do the same for GCS. I don't know who's on the call here today; I'm just going through the list here, but someone will have to go do the same thing for...
D: So, just while the notes are here, I'm just saying: actually, you cannot revoke the SAS token.
[crosstalk]
A: So you're telling me that even if you have multiple accessors, let's say I have 10 applications talking to one storage account, they would all have to use the root keys?
[crosstalk]
G: Right, right. I mean, I could understand why, if you're Microsoft, you want people to use the native way instead of the S3 way. But yes, if you use the S3 way, then you have a solution to this problem, which is: every application gets its own access key and secret, and if any individual application goes rogue, you just yank that one access key, and you're in a good situation. So the question is: given that they can do that through their S3 gateway,
G: what is the equivalent thing you do in the native way, if you want the existing applications to continue to run fine and you just want to yank one bad application and say, "No, I don't trust you anymore"?
A: So what they say is: there's something called a shared access policy, and you have to create a shared access policy. Okay, for a BAR, you can revoke the whole shared access policy, but you can't revoke a single shared access signature that you've created for the same policy; you have to revoke the whole policy. Can you...
[crosstalk]
G: Yeah, so as long as you can create one of these for every BAR, for every BA, and then generate the associated secret or whatever for the policy, I think that's the model.
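The model G describes, one revocable policy per BucketAccessRequest, can be sketched like this. The class and its method names are hypothetical, purely to illustrate the revocation flow, not any real driver API:

```python
# Hypothetical provisioner sketch: one shared access policy per BAR, so that
# deleting the BAR deletes its policy and invalidates every token under it.
class AccessProvisioner:
    def __init__(self):
        self.policies = {}     # BAR name -> policy id
        self.live_tokens = {}  # token -> policy id

    def grant(self, bar_name: str) -> str:
        """Create a per-BAR policy and mint a token bound to it."""
        policy_id = f"policy-{bar_name}"
        self.policies[bar_name] = policy_id
        token = f"sas-token-for-{policy_id}"
        self.live_tokens[token] = policy_id
        return token

    def revoke(self, bar_name: str) -> None:
        """Deleting the BAR removes its policy; its tokens stop working."""
        policy_id = self.policies.pop(bar_name)
        self.live_tokens = {
            t: p for t, p in self.live_tokens.items() if p != policy_id
        }

    def is_valid(self, token: str) -> bool:
        return token in self.live_tokens
```

Revoking one BAR's policy leaves every other BAR's token untouched, which is exactly the per-application yank the group wants.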
[crosstalk]
A: Okay, so we have endpoint, we have storage account name, we have container name. What else do we need? Secret key?
D: And we can comment, say, "bound to": so a long-lived token bound to a revocable shared policy, if you want.
[crosstalk]
A: It's a query param, an HTTP query param. So you have the signature, which is a bag of bytes, indeed, but you also have, in the clear, all the components: you have the expiration time, you have...
D: If you don't provide all the query params and you only provide the signature, for instance... because what I've seen is only providing the whole set of strings separated by, you know, ampersands. Typically the first token is a question mark, because you're supposed to append this to the query, and you have, for instance, signature equals a bag of bytes, ampersand, expiration time (let's say 10 years), and you have other parameters like that.
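D's description of a SAS token as a query string, with an opaque signature plus the other components in the clear, can be sketched like this. The parameter keys (`sv`, `sp`, `se`, `sig`) are the conventional Azure SAS query parameters; the values here are made up for illustration:

```python
from urllib.parse import parse_qs

# Illustrative Azure-style SAS token: a query string whose components
# (version, permissions, expiry) are in the clear, plus an opaque signature.
sas_token = "?sv=2020-08-04&sp=rw&se=2031-06-03T00:00:00Z&sig=opaqueBagOfBytes%3D"

# The leading "?" exists because the consumer appends the whole token to a
# blob URL; the server re-derives the signature from the in-the-clear fields.
params = parse_qs(sas_token.lstrip("?"))

blob_url = (
    "https://exampleaccount.blob.core.windows.net/container/object" + sas_token
)
```

Because the expiry (`se`) travels in the clear, nothing stops it from being set a decade out, which is why the group treats long-lived SAS tokens as effectively permanent keys.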
A: All right, so I think now we have enough to move forward, right? Yep.
G: No, just... yeah, just: the things that are opaque should be opaque, and we shouldn't be too worried about what's inside of them. So...
D: You would, for instance, replace "token" by "token or key".
A: Oh, okay. So now that we have some consensus on what we want to do for Azure, can we move forward to talk about GCS? Okay, yeah? Okay, so I need someone to sign up to go look at GCS as well. Ideally someone who's had, you know, even the tiniest bit of experience setting up some kind of, you know, cloud storage, or... yeah.
A: They call it Google Cloud Storage in Google Cloud. So who can I rely on for that? Who has tried this, or who is interested in doing this?
[crosstalk]
A: Okay, so let's move forward. So here's one of the things that I wanted to discuss.
So I've been updating the KEP, and one question, or one thought, that came up while doing that was: we've tried to define the concept of the bucket-star type resources and the bucket-access-star type resources. What I mean by that is: there's BucketAccessRequest, BucketAccessClass, and BucketAccess, and then BucketRequest, Bucket, and BucketClass, in this paradigm.
A: The way we've defined the class part of this, the BucketClass and the BucketAccessClass, I believe we might have gotten some things wrong there, because as it is today there are a lot of concepts in there that don't make sense: for instance, allowed namespaces, and also the current way we define stuff like the protocol inside the class.
A: I think we got a few things wrong while doing that, and I was thinking about how we did this, why we made the decision this way, and what we can do to fix this problem. I'll tell...
G: Hold on, you're making me nervous, because the allowed-namespaces thing, I agree, is very hokey. But when we started talking about how we were going to handle brownfield cases, where you were just going to point a BAR directly at a BA, it became evident that if that's how we're going to do it, allowed namespaces was going to be the only security mechanism we had, no?
A: No namespaces... so first, I want to define what the boundary is for a class. I think we got that itself wrong. Currently we put the protocol structure inside the class, and I don't think the protocol structure should be inside a class, because protocol is more specific to a particular bucket, in a sense. Currently, yeah.
[crosstalk]
A: Right, but if your workload only speaks... okay, let's talk about that. So can that still be encoded?
G: Right, but what would it mean if the workload requested an Azure blob and I didn't have a driver for Azure blobs; I only had an S3 driver?
C: In COSI, historically, we had a subset of protocol; later on the core team added those other fields. All we had was a protocol string. There was a desire to share a common structure between the bucket instance and the bucket class, and that's why protocol has these fields that don't make sense from an admin point of view when creating a bucket class. That was the reason.
C: Secondly, we talked about drivers that could support more than one protocol (Google was used as an example), and we said in that situation they would create two bucket classes, one for protocol X and one for protocol Y. The third thing is: we're trying to make the creation of a bucket as abstract as we can for the user, so they're just saying "I want a bucket, and here's the bucket class I use," and they don't necessarily need to know the details.
A: That's a really good point, and you said it very nicely. So, if you look at what CSI does: the StorageClass concept in CSI does not have any information about whether it's file or block. The storage class in CSI is more or less used for Kubernetes-level abstractions, like WaitForFirstConsumer, or what the driver name itself is.
G: That might be for historical reasons, because storage classes predate the concept of volume modes. I think that if volume modes had been in from the beginning, they might have surfaced their way up into storage classes. I agree with everything Jeff said. The way I think about it is: the idea of having storage classes, or bucket classes, is that it's a way for the admin to present a menu of options to the users that in some way reflects what is actually wired up on the back end.
G: Because the idea is: if you're just a consumer, you don't know, on the particular Kubernetes cluster that you're attaching to, exactly how it's been configured, what drivers are back there, what's underneath the infrastructure. You just know that there's something, and the way you go look and see what your choices are
G: is you enumerate the classes and you pick one; or, if you really don't care, you let the default one get chosen. But if you can possibly care at all, you would enumerate over the storage classes, find the one that best matches what you want to do, and then use that for all your PVCs. Same thing with buckets, I think: if you have a couple of different COSI drivers installed, you'd have a couple of different bucket classes to represent them, but if...
[crosstalk]
G: Does that need to come from the bucket class, though? Well, Kubernetes needs to know, because maybe I have the GCS driver and it can support GCS natively, but I don't want to expose that to my users; I just want to give everyone S3, because I care about portability more than getting the most out of my object store.
[crosstalk]
G: I mean, labels aren't meant to have semantic information in them. I think that at some point we're going to send a bucket creation request to a driver, and it's going to have to say "I want an S3 bucket," or "I want a different protocol," and then you're going to have to populate different structs, fill in a different part of the BucketAccess object, and send different stuff down to the downward API.
A: I feel like the bucket class should just represent common parameters that are not protocol-specific but that apply only to the Kubernetes wiring of this whole bucket lifecycle. That's what bucket class should encode.
[crosstalk]
A: Something else: the set of protocols is finite, and we can clearly represent the protocol with a simple string. What if we just had a string field to represent...
[crosstalk]
G: Yeah, but with the storage class it really doesn't end up mattering, because any volume still gives you a volume, except for the whole filesystem/block dichotomy. There is a problem there where, if some storage classes can do block and others can't, and I want a block volume, it just sucks.
[crosstalk]
I: ...saying that the workload doesn't care about anything other than the volume mode, right?
[crosstalk]
I: I meant like UIDs or whatever: any misconfiguration of how that filesystem provides your workload with access can be more complex than just the fact that it's a filesystem. Yes.
G: In particular, when you get into multiple pods sharing access to a volume, you can end up in a world of hurt depending on what storage class you end up with. But the basic case of one pod that needs a volume is going to work no matter what storage class you pick. If you have one pod and it wants a filesystem volume, you're pretty much guaranteed to be able to work with any storage class. That is the simple case, and then you're right.
I: Even affinities, and things like local volumes or whatever, right: there are more cases where it does matter to the workload how to configure itself based on what type of storage class it uses, right?
G: Yeah, but I guess the way you think about those is that they are all additive things: extra features that you enable on your storage class or your volumes to say "I want this stuff." If you leave everything blank and just say "give me the simplest possible volume" and attach it to the simplest possible pod, that's going to work 10 times out of 10.
[crosstalk]
G: I see that as a useful top-level thing, and it gives workloads a clue, an easy way to filter through a list of storage classes, or a list of bucket classes, I should say.
A: I think he means that that's the right approach, but we've already moved on; we've been able to work with the storage class without a volume mode. How do you do it?
G: We could write a KEP to add supported volume modes to storage classes, so that you could mark that a storage class supports block volumes and filesystem volumes, or only block volumes, or only filesystem volumes. But the problem is always going to be that it would have to be optional for backwards compatibility, meaning that you're going to have situations where it's just going to be blank and you have to guess. So that's probably why there hasn't been pressure to add it: even if we did add it, it would only address some small fraction of real-world use cases, just for backwards-compatibility reasons. Here we have an opportunity to get it right from the beginning, and that's why I'm arguing for just a string, the protocol name: s3, gcs, azureBlob. Have it be type-checked so that you don't get garbage in there, so it's a proper enum type, but yeah, a string.
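G's "type-checked protocol string" can be sketched as a small enum plus a validator. The exact member values are assumptions for discussion, not the final COSI API:

```python
from enum import Enum

# Sketch of "protocol as a proper enum type": a closed set of string values
# that rejects garbage at validation time. Values here are illustrative.
class Protocol(str, Enum):
    S3 = "s3"
    GCS = "gcs"
    AZURE_BLOB = "azureBlob"

def validate_protocol(value: str) -> Protocol:
    """Return the matching enum member, or fail loudly on unknown input."""
    try:
        return Protocol(value)
    except ValueError:
        raise ValueError(
            f"unsupported protocol {value!r}; "
            f"expected one of {[p.value for p in Protocol]}"
        )
```

In the real API this would be enforced by the resource's schema validation rather than application code, but the shape of the guarantee is the same.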
[crosstalk]
A: Yeah, that's what it's going to get to. So if we have only the protocol name here, what should BucketRequest have? Class name? That's it, I would think. And protocol-specific parameters: what does it want to request about the protocol? Should that come from the class, or should it be something that the BucketRequest can specify? What's a protocol-specific parameter: region, or anything else?
G: Endpoint. And region's a tricky one, because it gets treated as a top-level thing by CSI, right: CSI has a formal notion of topology, of where things are, so that you can get your pods and your volumes in the same region, in the same zone, and that's all formalized in CSI outside of the, you know, PVC/storage-class thing. But what's another one, other than region?
[crosstalk]
G: Yeah, yeah. I mean, you can do that with different bucket classes: you can have your bronze bucket class and your gold bucket class. And okay, so again, just like storage classes, you're saying? Yeah, I mean, if that's how you want to slice and dice it. The nice thing about the storage class model is that it's pretty flexible, right: it's just a menu, and how a given Kube admin wants to structure his menu is up to him. He can...
[crosstalk]
I: If I provide... yeah, if I have a workload that expects a filesystem or a block, right, and I leave out the storage class: in the PVC I'm specifying at least the volume mode, so I do have an expectation from the infrastructure to provide me this. That would... yeah.
[crosstalk]
G: Well, I mean, the default storage class semantics are very, very simple in Kubernetes, and I was hoping that the default bucket class would be similarly simple: if you specify a bucket class, you get that bucket class; if you leave it blank, you get whatever single bucket class has been specified as the default; and there is no other behavior, because that's really easy to understand. Yeah.
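The default-class semantics G describes reduce to a few lines. The `is_default` marker below is a hypothetical stand-in for however the default would actually be flagged (in storage classes it is an annotation):

```python
from typing import Dict, Optional

# Sketch of "explicit class wins; otherwise exactly one flagged default".
# The is_default field is an assumption for illustration, not the real API.
def resolve_bucket_class(requested: Optional[str],
                         classes: Dict[str, dict]) -> str:
    if requested:
        # An explicitly named class is used as-is, or the request fails.
        if requested not in classes:
            raise LookupError(f"bucket class {requested!r} not found")
        return requested
    defaults = [name for name, spec in classes.items()
                if spec.get("is_default")]
    if len(defaults) != 1:
        # No default, or an ambiguous set of defaults: there is no
        # other fallback behavior, by design.
        raise LookupError("exactly one default bucket class is required")
    return defaults[0]
```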
I: Yeah, but, you know, the names of the bucket classes can also represent different defaults, right? It can be gold, silver, bronze, like we said, or whatever. So I'm saying that the default, in the sense of labeling this one the default or not, a fallback one, I would say, is maybe one aspect of it; but there are also naming conventions in clusters, saying "this is the default for gold."
I: "This is the default for silver, for bronze." And then these defaults, by saying "I expect s3," or "I expect gcs," or "I expect azure," just give me a validation: this is the workload that I created, and I can actually hand it off to somebody, and...
[crosstalk]
I: You're right that it doesn't help you to recover. The only thing it helps you with is to describe what doesn't work. Otherwise, how do you describe to the workload that it got something, you know, unexpected? The workload doesn't even specify what it's expected to work with.
G: Right, but from a historical perspective, basically the way it works is: everyone assumes the filesystem works everywhere, and if you request a block volume, good luck, right? That's what it comes down to: the system does not help you find the storage class that supports block volumes. It's just: if you happen to know that it does, and you ask for it, you'll get it; but if anything else happens, you're out of luck. And yeah.
[crosstalk]
I: ...why it's incon... like, on the other hand, I understand why it's inconvenient to have a double specification of a certain field.
G: But then the question is: how do you communicate that anything other than S3 will ever work?
A: Do we need to communicate it? Events, or you go look at the volume, you go look at the bucket object, or the BucketRequest object; it shows that it failed.
[crosstalk]
D: That would be something that would be nice as well. Typically, a vendor which supports, let's say, Amazon S3 and Azure could, in their driver, have the possibility of dynamically returning credentials for the one which is asked for, right?
A: So we only have seven minutes left; I want to bring the focus back to the conversation on just bucket provisioning. The current way in which we provision buckets, we have to rely on the admin to give us a bucket class with the parameters that we need, even though, as the user, I'm the one that knows what parameters I need for my bucket. Now, since the list of parameters is very small (right now the only two things we can think of are protocol name and region),
A: it might be okay; it's kind of possible to work with the bucket class having those fields. But it seems wrong to me that the bucket class has anything to do with protocol at all, if we were to draw the line on the bucket class saying that the bucket class is more to specify Kubernetes-specific parameters, like allowed namespaces, and just stuff to do with how Kubernetes works with this, rather than...
[crosstalk]
I: I would say: I don't provide a PV for it, a bucket for it; but if I get a request for Azure, I say okay and I provide it, right? And this comes back to the design of storage classes with the volume mode, where the storage class doesn't specify which volume modes it supports; it's kind of a, you know, you discover it just by either knowing the names of the ones that support it, or by trying it out and seeing what works and what doesn't, right?
D: At the time you create a bucket class, we could talk to the driver and ask the driver.
[crosstalk]
G: So we can do that if we move the protocol to the BucketRequest, and then you basically say, with your BucketRequest, which protocol you wanted; and given that the bucket class doesn't have any information, it's just whether the driver succeeds or not, right? Yeah, that is a workable mode.
G: That would be like volume mode, where basically everyone kind of assumes S3 probably works everywhere, but the other ones you just have to try out, and in some cases even S3 might not be supported.
F: It's just, there are certain things that we want to be... I think there are certain things that we can only tell for one type but not the other, but if we solve both, then there's no way for us to say that. So there are things like that.
F: You know, their driver... they have two drivers, I think. Yeah, I think, like, Azure, I believe they also have separate drivers.
A: Okay, so we're almost out of time; actually, basically, out of time. Let's continue this discussion on Monday, but I think we're going in the right direction.
[crosstalk]
A: I think he's saying we should rework the fields in BucketAccessClass, just to make sure we follow the same paradigm there also. Yeah, okay, all right. And then, once you're done with that, I think we have to revisit allowed namespaces, and then I think we should be good; we're in good shape after that.