Description
Kubernetes Storage Special Interest Group (SIG) Object Bucket API Standup Meeting - 24 May 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
B: This is the Kubernetes projects board, so this is where we do our project planning. This item is in review. We've always had our deadline set to the demo-one milestone, so you'll see that milestone in a few places, even though we've actually gone past it.
B: We do have test cases now — no, there's another tab somewhere here. Let me put that up.
B: This one isn't done — the test case for validating, yeah. This one is done, all right. Let's actually look at the PR — it's still green, so I believe there's a PR left for it. This one is in the pipeline for sometime in the future; it's a future milestone: add or remove finalizers on the BucketAccess. This is done. This one I'll move — I'm sure of it. Okay, now: process BucketAccessRequest delete.
B: And an open item, okay. So we talked about a bunch of things last week, but just based on this chart I'll quickly give an update of where we are in terms of development. In terms of the core controller logic, we have the main use cases of creation and deletion working just fine.
B: There are a few bugs here and there, but nothing that stands out. The reason I say that is that a few bugs were raised when people started writing drivers or started using COSI. We've addressed almost all of them, and I believe some of the fixes are still under review.
B: I don't believe there's any bug we still haven't addressed, but when you start using it and run into issues, we'll know there are other bugs that need to be addressed.
B: So that's the core logic. About two or three weeks ago we decided we'd have the concept of a sample driver — one that's completely in-memory and doesn't have any vendor-specific code in it. That sample driver code is still under review.
B: We left a bunch of comments on it. It was opened about ten or eleven days ago, and there are a bunch of comments on it. I'm not sure if Nicolas has addressed them yet, but as soon as all the comments are addressed we'll merge it right away. Going forward, this is the driver we're going to use for CI and as the example for whoever starts writing drivers. And that reminds me to talk about CI.
B: In terms of CI, we have someone who unfortunately cannot join us at the regular meetings, but he's working with us and helping us set up CI. He's the person who set up the automated nightly builds in the COSI repositories, and I've asked him to review this code as well.
B: That way he gets familiar with the code base, and because he's going to be running this in our CI — in the Kubernetes CI. If anyone else would like to take a look at this code and leave some comments, that would be really good too, so we can move it forward. Right now I believe all of the responsibility is kind of falling on Chris.
B: Yeah, I asked Janis too. Janis is another person who unfortunately cannot join regularly.
B: He talks to me at a time that's much more during work hours for him, and I tell him what needs to be worked on. I've also had him look at this code and give a review. Janis is also actively contributing code to the project. So that's where we are in terms of development — I just wanted to give a quick update.
B: Cool. I wish Nicolas were here, because we could possibly have addressed the comments right here and merged it right away — I don't know everything that's in there. Is Chris here?
B: Victoria — got it, got it. I didn't know, okay. Moving on, let's talk about the KEP. The plan we came up with last week for the KEP was: we're going to quickly rewrite it — simplify it, make it much smaller.
B: Ben was also saying a lot about this. The idea is: we send small updates to Tim as we keep making them, but in the meantime, with the first version of whatever we have, we simplify the KEP and then send it to Tim. Right now it's bottlenecked on me, because I said I'd do the first pass of simplification, so I'm going to finish it by today.
B: That's what my time will be spent on; other than that, there's not much else to say. What I would ask is: once we have the simplified version of the KEP, I'd like to send it to Tim for review immediately. He's probably going to take a few days to look at it.
B: While that's happening, we'll need everyone's help to go through the simplified version of the KEP. It's a full rewrite, but it should be easier and quicker to read, so it'll be good if everyone takes a look at it.
B: So that's all in terms of updates. There's a technical discussion I want to start, but are there any other questions so far — in terms of, say, when we'll go alpha or how that's going to work — for anyone who missed last week's meeting?
B: Okay, all right — let's get into the technical discussion we were having last week, where we left off. If you look at our current APIs — let me open that up.
B: Here — the API. If I go here, we have a few types. We have the six types: the Bucket, the BucketAccess, and the surrounding types for these two resources — the classes and the requests.
B: We were talking about the downward API — the structure, the data that will go into a pod that's requesting a bucket. That needs to be a versioned and managed API, because there'll be an external client that's going to consume it.
B: So far we said we'd just put the fields from BucketClass.Protocol as the fields that go into the pod that's requesting the bucket. But we soon found out that BucketClass.Protocol is kind of rudimentary, and it also kind of doesn't make sense when you think about BucketClass as a structure that's supposed to hold fields for a class of multiple buckets.
B: What you're expecting by putting this into the pod is that it will have fields specific to that bucket. So the two ideas clash and it stops making sense — it really stops making sense. So, as of last week, we were discussing what we should have in the downward API that goes down to the pod, and so far the discussion yielded this: we will have a separate structure.
B: The last thing we decided — we were talking about the S3 API, and within the S3 API we talked about the fields we would need. Let's continue that discussion.
B: Currently, in S3Protocol we have a region and a signature version. That by itself is not enough for a pod to talk to a bucket. For a pod to talk to a bucket, it needs a bunch of fields.
B: The first is the endpoint, which is either a host name, the full FQDN, or just an IP address. If it's just an IP address or a host name, then it's the endpoint of the actual backend object storage system. If it's a host name followed by a path, then it's the full path to the actual bucket, not just to the object storage system.
B: So we said we'll capture that in a URL field called endpoint. Then we said we'll have another field — let me write this down so it's easier to follow.
B: So this will be the contents of bucket.yaml, and I believe we'll also need credentials.
B: Sweet, okay — I'm just going to keep going forward. So, bucket.yaml: this is what we decided. We'll have these fields — endpoint, bucket name, signature version, region, credentials. And certs — well, can you have certificates as part of the credentials field? Does anyone know?
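The fields B lists might look something like this in the bucket.yaml under discussion. This is a sketch only — the field names, layout, and apiVersion here are our assumptions for illustration, not a finalized COSI schema:

```yaml
# Hypothetical bucket.yaml projected into the workload — illustrative only.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketInfo
spec:
  protocol: s3                  # s3 | azure | gcs
  endpoint: https://s3.us-east-1.example.com/my-bucket
  bucketName: my-bucket
  region: us-east-1
  signatureVersion: v4
  credentials:
    accessKeyID: AKIAEXAMPLE    # placeholder value
    secretAccessKey: examplekey # plain text by the time it reaches the pod
```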
D: Certs — so what type of certificates?
B: Let me think about this. It could be SSL certs, or it could be for the authentication itself. Maybe we should have something like an authInfo field or something.
A: Yeah, I think we want to — oh, you want something similar? Is it a new type that's similar?
C: Okay, and — sorry, I got a little distracted by work stuff — but is this intended to replace the CSI volume communication mechanism we have defined now to the pod?
B: Well — right now it doesn't replace the CSI mechanism. CSI is a way to provide this to the pod, so that's orthogonal. This is just the data that will be sent down to the pod, either via CSI or by the kubelet via secrets.
C: Just one way — this will be the new way? This is a new way, and it's instead of the CSI driver section of the volume spec in a pod, right?
B: No, that will remain as it is, Jeff. Okay — so maybe we shouldn't call it the downward API, then. What this is: this is the structure that will go into the pod — not into the pod spec, but into the pod workload.
B: To a specific pod — I see where you're coming from, sure. So you would still have to specify the CSI — like, the volume info, which will —
C: Okay, so is this more about defining what the content — the structure — of the volume info file looks like?

B: Yep — it has these fields in it; it's a structured file. It would look like this.

C: And it's okay that the credentials are in here, mixed with other stuff — that the auth info struct is okay to have in here?
B: Yeah, it should be encrypted while it's stored in etcd, best-practice-wise, but that's really up to whoever is running it. When it comes to the pod, though, it cannot be encrypted — it has to be plain text.
B: Yeah — basically, for sure, but not encrypted.
B: Descriptive. So we have a type BucketInfo — do we just define three types, like S3BucketInfo, and so on?
C: Yeah, we can do that — and we had something like that at one point, even in the KEP. For the bucket instance protocol field, we sort of had a variant for each of the three types — Azure, S3, and Google — but I thought there was feedback.
B: Hey — should we have provider-specific parameters here? No, right? No, we shouldn't have anything like that. Yeah, I was just wondering if we should.
C: We're doing both — we're mounting kind of an intersection of Bucket and BucketAccess, because —
D: Well, the fact that it doesn't have one, and it has a reference — it's a choice, a design choice. But what we're providing to the pod is the access, right? It's not anything else. And I think that answers what Vianney said — that he wants it to represent that there's, you know, secret information inside, or anything.
B: Okay, so I'm going to take that as a yes. So I'll need one of you to sign up to come up with the fields for Azure and GCS. Who can I rely on?
D: So what they do with it is that they basically stringify — render, right — all the information into one thing they call a connection string, yeah.
B: Okay, so do we need a specific field called EndpointSuffix?
E: Let me rephrase. Either you don't care at all — you don't want to know how it works — and you just provide the connection string, and the connection string contains everything the Azure libraries need to know to operate; or, if you want to do it yourself, you need the protocol — https or so — and the endpoint, and it always ends with —
E: — core.windows.net. With the storage account name plus the secret key, you can connect. So you can remove line 18; there is no access key.
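E's "one-string" alternative can be illustrated with a small sketch: an Azure-style connection string bundles the protocol, account name, key, and endpoint suffix into one `Key=Value;...` value. The parsing code and the account values below are ours for illustration; only the string format follows the documented Azure shape:

```go
package main

import (
	"fmt"
	"strings"
)

// parseConnectionString splits an Azure-style connection string of the
// form "Key=Value;Key=Value;..." into a map. This is the "stringify
// everything into one thing" behavior D describes.
func parseConnectionString(cs string) map[string]string {
	out := map[string]string{}
	for _, part := range strings.Split(cs, ";") {
		if part == "" {
			continue
		}
		// SplitN keeps '=' characters inside values (base64 keys end in '=').
		kv := strings.SplitN(part, "=", 2)
		if len(kv) == 2 {
			out[kv[0]] = kv[1]
		}
	}
	return out
}

func main() {
	cs := "DefaultEndpointsProtocol=https;AccountName=demoaccount;" +
		"AccountKey=c2VjcmV0a2V5LWJhc2U2NA==;EndpointSuffix=core.windows.net"
	m := parseConnectionString(cs)
	fmt.Println(m["AccountName"], m["EndpointSuffix"])
}
```

With this shape, an application (or an adapter in front of it) never needs separate endpoint/account/key fields — which is exactly the trade-off being debated.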
B: So — a scheme, like, for the endpoint?
B: I think for protocol we have an object storage definition — S3, Azure, or GCS. So if endpoint is a url.URL — if it's not http, can we assume that it's https?
B: Okay — in Amazon we also do that, where we have to specify secure equal to true or not. But do you think it's reasonable to expect them to just infer it from the endpoint field? Yeah.
E: Definitely — that's correct for me. So yeah: either you provide the endpoint, storage account name, and secret key, or a connection string.
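The inference B proposes — deriving TLS-or-not from the endpoint instead of a separate `secure` flag — can be sketched as below. The function name and the default-to-secure choice for scheme-less endpoints are our assumptions:

```go
package main

import (
	"fmt"
	"net/url"
)

// secureFromEndpoint infers whether the consumer should use TLS from the
// endpoint URL's scheme, instead of carrying a separate "secure" boolean.
// A bare host name (no scheme) defaults to secure.
func secureFromEndpoint(endpoint string) (bool, error) {
	u, err := url.Parse(endpoint)
	if err != nil {
		return false, err
	}
	switch u.Scheme {
	case "http":
		return false, nil
	case "https", "": // scheme-less endpoints default to TLS
		return true, nil
	default:
		return false, fmt.Errorf("unsupported scheme %q", u.Scheme)
	}
}

func main() {
	for _, e := range []string{
		"https://s3.us-east-1.example.com/my-bucket",
		"http://minio.storage.svc.cluster.local",
		"s3.example.com",
	} {
		secure, err := secureFromEndpoint(e)
		fmt.Println(e, secure, err)
	}
}
```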
D: How does this model separate between the account — like, the account credentials — and the bucket, right?
D: We had this discussion before — that the container is basically just top-level folders within it.
E: And typically you can forge it. For instance, if you go to the Azure portal and you want to give access to somebody else, you forward the key and say: hey, this grants the right to read, write, create files, and so on — and you have to specify an expiration time.
E: It says token — let me check. So: shared access signature.
E: It starts with a question mark, and you have variables. Basically the variables sum up the main properties, so that when you get the token you know, for instance, the expiration time — that's in the clear. But there is also, of course, the signature, which is encrypted.
E: And the URI. So again, two cases: either you know what you're doing — if you just have the SAS token, you can forge the URI yourself, because it's obvious, right? Yes. Or if you are —
B
Connection,
take
it
up
into
resource,
you,
are,
you
know,
resource
string
and
then
sas
token.
E: What you should do is use line 19.
E: You should have an optional SAS token for people who want to specify the token by themselves. It's optional, and it's either a secret key or a SAS token, right? Obviously, the SAS token is a query — a list of query parameters.
D: Which means it's an alternative. I also saw that there seem to be three methods, right? You can even use OAuth tokens from Active Directory.
B: Well, OAuth tokens we cannot use, right? Because they're supposed to be authenticated with Azure — or, you know, if Azure is using, say, GitHub (I don't think it is, but let's say it's using GitHub), it'll have to authenticate with GitHub, not through us. We can't hold that token.
D: Azure is the OAuth provider, basically, in that sense, yeah. So anyway — I didn't mean to open another discussion — it seems like they have three methods, and regardless of whether it's a connection string or separate fields, it seems like the REST API allows either signed requests using the access — the secret key, sorry — or some OAuth mechanism, or this shared key thing, right?
E: Yeah, okay, I see — it's probably possible, but generally, when you have an application accessing a bucket, it's those two methods.
B: So the connection string is the equivalent of an STS token. With an STS token, what you can do is — you can say it's for temporary credentials, and you construct a full URL to the exact resource.
B: Say you want to allow someone to upload to a certain bucket: you create an STS token for that bucket, and then you construct a full URL — a signed URL — with the URL to the endpoint of the actual directory to which you want the person to be able to upload, with the token along with it. Anyone who has that URL can upload.
D: No, but it's the same concept — it's just a matter of signing a specific resource versus the entire bucket: access to specific resources and operations. It means encoding some more restrictions, within the signed request, about what you can do, right?
B: Well, no — it's basically access to this, but temporary. An Amazon pre-signed URL is temporary no matter how you look at it; it's not a long-term thing. But it looks like in Azure it's a long-term thing: I can just have this string and use it for as long as I need, unless I revoke it. It's valid, but —
E: I think you can — if you type "shared access signature generator", there is a public one. Can you try it?
D: But still — why is this important? I mean, I guess the question we had with S3 was: we can start supporting S3 without STS, right? That was the assumption, yeah. And we said okay, we want the MVP of this feature to go out, and then we'll add more complexity.
B: We're thinking Azure might be viable without SAS — that's the other way around, in the sense that we don't need to support SAS if we can avoid it. Because if it's always time-limited, then we need wiring to deal with, you know, refresh and all that, and we don't have that. So I would avoid SAS if it's always limited.
B: Actually, I would just avoid it, because if there's a possibility of it being limited, we can run into issues where the driver creates an SAS token, but then things still don't work because, you know, it probably expired.
B: Okay — so, "when not to use SAS": sometimes the risks associated with a particular operation against your storage account outweigh the benefits of using SAS. For such operations, create a middle-tier service that writes to your storage account after performing business-rule validation. It's also sometimes simpler to manage access keys in other ways.
E: I would support SAS tokens, just because sometimes you have a public bucket, for instance. But if we do that for Azure, we'd also have to support query strings for S3 the same way — in S3 you have pre-signed URLs that grant access to public buckets for a very long time, right?
D: Pre-signed URLs are also restricted to an operation — you cannot upload using a pre-signed URL you got for a GET, right?
B: You might be right — I'm just not used to it being like you said. I'll have to check. Sorry, I assumed something like that was possible, but it might not be.
B: Yeah — but with SAS tokens, I think we should not have a SAS token field, because there is this possibility of it expiring, and then things just stop working all of a sudden, and we don't even have a mechanism to clean it up or, you know, forcefully remove it.
D: It's optional — optional like any other thing. So a connection — yes, a connection string is simply, you know, an encoding of this structure we're talking about right now. Got it, got it.
E: So if somebody really wants to use a SAS token, they could — because they could use the connection string.
D: If you still support it, I mean — it's still a question here: whether we support it, and if not, whether we need it.
E: Actually, some applications don't always want a connection string.
E: My point is, you have three types of applications. There are applications that support specifying things both ways — you can specify endpoint, storage account name, and secret key, or a connection string, as different environment variables — and they detect whether you specified it one way or the other. But there are —
D: Just like S3 — it also has applications that force you to pass things as environment variables, versus others that go and read the credentials file from the file system using the SDK. So it's the same; it's just a matter of, you know — we will have to put some init container in these cases to adapt this bucket access information to the application.
B: Okay, okay — so you're saying this is https? Is that what you're saying?
B: Well, the endpoint — it's provided to us by the driver, so we'll just set it to whatever it's supposed to be. I don't think we should get into the business of enforcing it. We will give the workload whatever the driver gives us: if the driver gives us https, we'll show https to the workload; if it gives http, we'll show http to the workload. Okay, so —
D: No, no — not really. I think the first level is the storage account name. Actually, this is actually the bucket, and this represents the entire storage container — or is it not the storage bucket? But it's also somehow — you know, there is no access key in the sense that you think about access keys in AWS, where you provision, or mint, more access keys with access-key/secret-key pairs; and the container name is basically a prefix within that bucket.
D: But COSI doesn't support all these models in the provisioning itself, so we're just saying the driver can notify the workload, in some way, of what it did. So the bucket class would specify some way of doing that in its opaque parameters — "I would like you to create container names", or "just create containers", whatever — and then the driver will do that, and then we'll pass this information back to the pod, right? So —
E: The thing is, sometimes applications don't care where they're stored, right? They just want a bucket somewhere. So sometimes it's good to provide a container name for those — they just use whatever name you provide — and sometimes they decide by themselves and create the container names they want.
B: I looked at this before, yeah.
E: The thing is, some big-data applications expect that you provide the container name, because they basically just want a storage-account/container-name pair they can use — they just drop their files into it or read their files from it.
B: Okay, that sounds good. Okay — so can one of you go back and check if this is enough?
B: Like — just to —
D: But the thing I'm not really sure about with what we're doing with Azure is: how do we mint credentials in that model? I'm a little bit confused — I don't think they have the same mechanism, so a driver, with this model, won't be able to create as many secret keys as it needs to, as far as I remember.
E: No — you can create a storage account with an API.
D: That's also a bucket. I mean, what if I just want to access an existing one — oh, and create, right — and I need new credentials for it?
D: I can create an IAM credential for it, right? So then we —
E: I think SAS tokens are the equivalent of IAM — it's very similar to —
E: — tokens are, but this is how they achieve access control.