Description
Kubernetes Storage Special-Interest-Group (SIG) Object Bucket API Standup Meeting - 19 April 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
B: Jeff said he won't be able to join us today; I think he's getting vaccinated, the second shot or something. So we can get started now, and I believe Shingen has started the recording. Okay, so today I'm hoping we finish up the discussion on credential rotation, so that we can start focusing on finishing the API review and thinking about other features that need to be enabled by COSI.

D: I'm hoping that we can conclude the credential rotation discussion by deciding not to decide, because the last meeting proved that it's a pretty deep subject, and I don't see the value in hammering out all the details right now.
B: Yeah, that was going to be my suggestion too: since cloud providers themselves don't actually have a default mechanism for credential rotation, they'll leave it up to the admins.

D: Okay, but I think anything that we do should be transparent and easy to opt out of. That was the case I was trying to make last week: we could write the spec in such a way that it doesn't matter. If someone wants to do something they can, and if they don't want to do anything, they don't have to, and COSI just sort of gets out of the way.

B: I don't think it's too bad to be a little opinionated here and say: right now we don't have credential rotation, but in the future, with service-account-based authentication, we will have it. What I'm saying is, only have non-rotatable credentials to begin with.

D: I think once you say that the access mechanism is the security token that your pod is running under, then what you really want is your pod's security token to be rotating. If Kubernetes can do that, which I think it can, and everything else works, you're done. Again, COSI doesn't have to get in the way, right?
D: Well, it would just be that you have a workload running, and you would start off with one token, but after some period of time Kubernetes gives it a new token. If the pod was using its Kubernetes token to obtain access to whatever bucket it was communicating with, you would just get new tokens every so often, and as long as you were pulling the latest token you would always have access, and something else would be in charge of it.
B: Even in the KEP, if you look, we had two credential authentication mechanisms that we had discussed. Andrew brought this up as a requirement for Google Cloud, but when we looked into it, it made sense for all the different clouds. Does saying that credential rotation will only work with service account tokens prevent, say, Scality from providing rotation? Because if it does, then we should address it.

B: We should figure it out, yeah.

B: What if we were to associate a service account with the backend, in the sense that we provide an API that would allow a driver to return... well, we already have that API.
B: Tell me, this is a service account token: it says it's associated with the secret, so would that mean...

D: I see, so when you're talking to GCS you use the same account token you use for doing anything else in Google, and so of course it just works, and a lot of people find that convenient if you're in Google and you're using Google storage. So there's no reason we should not allow that. But the vast majority of implementations, by number if not by use, will just be doing the traditional access key and secret key, S3-style thing.

B: So to clarify one thing, Nicholas: this is not something that needs to be supported. This is something that can be supported if you support it.
D: Well, my answer to that is the proposal I made last Thursday, where we leave this up to the driver, and the only change you make on the COSI side is to allow the driver to provide some expiry date. Then COSI is responsible for getting the new token, or getting the new information after that date, and sending it to the pod. That's all COSI does; it just leaves it up to the driver.
D: I think the lowest-cost, most flexible way of doing it at the COSI layer is to say it's an opt-in thing: you can just return infinity for your expiration date, in which case you basically have a static token, or you can return some date, and then COSI will call you back at that date and get the new token. And then you can do whatever you want down in your driver and your storage system beyond that.
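A minimal sketch of that opt-in shape, with hypothetical names since the real gRPC response fields were still under discussion: the driver returns a credential plus an optional expiry, and an unset expiry means "static, never call me back".

```go
package main

import "fmt"

// CredentialLease is a hypothetical shape for what a driver could return
// from a grant-access call: the opaque credential plus an optional expiry.
// ExpiryUnix == 0 stands in for "infinity": a static credential that COSI
// never rotates. These names are illustrative, not the actual COSI API.
type CredentialLease struct {
	Credentials string // opaque credential content, passed through to the pod
	ExpiryUnix  int64  // Unix seconds; 0 means no expiry (opt out of rotation)
}

// NextRefresh reports whether COSI should call the driver back for fresh
// credentials, and at what Unix time.
func NextRefresh(l CredentialLease) (refresh bool, atUnix int64) {
	if l.ExpiryUnix == 0 {
		return false, 0 // static token: COSI stays out of the way
	}
	return true, l.ExpiryUnix
}

func main() {
	static := CredentialLease{Credentials: "access-key"}
	rotating := CredentialLease{Credentials: "token", ExpiryUnix: 1618876800}
	fmt.Println(NextRefresh(static))   // static: no callback scheduled
	fmt.Println(NextRefresh(rotating)) // rotating: callback at expiry
}
```

A driver that does nothing special leaves the expiry unset and keeps today's static behavior; one that rotates returns a date and gets called back.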
F: I totally agree with that being the most likely design, if at some point we want this as a feature in COSI. Now, do we put this beyond v0 or whatnot? Because this will have an impact on the gRPC API, which of course is extensible, so it's not necessarily a blocker. Or do we put this in the provisioner API as soon as possible, and then implement the logic to refresh the token in the various sidecars and controllers that COSI has?
D: Yeah, I agree that if we don't do it now and we do it later, it will be a breaking change at some point in the future. As long as we're still in alpha, that's not a big deal, but beyond that we would want to avoid a breaking change, just for the sake of avoiding breaking changes.
B: Yeah, I think we're all on the same page; it's just that we don't have the solution yet. It's not like we have two competing solutions. That being said, I think you brought up a good question, Randy.

B: I think for alpha, one of the things that I want to push for is to get through the KEP review process, that is, the API review process; that is the hardest step.

B: Once we get through that, the scrutiny will not be as high, because the overall design has already been accepted. Now that we know there's a path to enable credential rotation, and I think we can actually leverage these time-bound tokens to do that, to wire it all the way into the workload, I would at least push it past alpha so we can get started on this.

B: True, so we already have something like that: let's open that up. It's open, everyone can take a look at it, so I'm gonna send a link to this over Slack; sorry, over the chat here.
B: You don't see it? Oh, are you a part of kubernetes-sigs?

B: Ben, yeah, you're not a member of this. I don't know, I can get you added. Let me see, how did we add... hey Chris, are you there? Yep, hey, do you have a link to that PR that was made to add you? You made that PR, right?

C: Let me double check. I think it was in the community repo, the kubernetes community.

B: Yeah, I mean, I think you need specific permissions for kubernetes-sigs anyway.

B: This link... oh, who's that? So Ben, you should be able to follow this link. Did you send the same thing? Perfect, okay. Oh, you've sent a different one.

B: Projects, okay. So now that it's on the board, we will know. I think we can add a milestone to this. Oh, you say convert to issue? Anyways, this is not important.
B: Let's chase the API. All right, if that's what we're saying, I think we should move forward to the next thing. The next thing that we have to do is look at common features, I would say the most widely used features across different object storage systems, and understand whether they separate them or not, and how we're going to handle them.

B: One request that already came in was for metrics: people want to know information about the bucket itself, ideally populated in the bucket object. This person asked for bucket usage and performance metrics, so for instance how much space is used, and performance numbers. I think these terms already make sense to everyone here. We pushed it away before, we said we'll do it after alpha, and I think we can start looking at this.
B: There is one more thing that I just thought of, something that we haven't really talked about much: developing a COSI SDK for workloads, so that they can just use the SDK to read the COSI files. We'll keep it updated, we'll keep it current, and people just have to import the latest version to integrate with COSI.

B: I think we brought it up once, right? Did we bring it up once in the meeting?

B: Okay, and did we decide whether we're going to go with a Go SDK or something else? I mean, what other options do we have? So you're going to provide a COSI client library, basically, which people could just import in Go, and that would read the buckets for you and give you the credentials, and once you get the credentials from this Go library you can plug them into your S3 client or whatever and use them.
C: So, for example, we set the mount point for credentials inside of the volumes section in your manifest, for your deployment or pod spec, right?

C: Yeah, if you go to the CSI adapter repo, you can see an example there; that might be easier than trying to explain.
C: Yeah, so if you see here under volumes, we declare the volume, and then we have a volume mount which mounts that volume at /data/cosi in this case. I wasn't able to find a good way to pass that information to whatever workload is running.
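A rough sketch of the manifest shape being described; the driver and attribute names here are assumptions for illustration, not the adapter's actual schema:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cosi-consumer
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest  # hypothetical workload image
      volumeMounts:
        - name: cosi-credentials
          mountPath: /data/cosi               # credential files appear here
  volumes:
    - name: cosi-credentials
      csi:
        driver: objectstorage.k8s.io          # assumed CSI adapter driver name
        volumeAttributes:
          bucketAccessRequestName: my-bar     # assumed attribute name
```

The open question raised above is exactly this: nothing in the manifest tells the workload's process where the mount path is, beyond convention.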
C: Yeah, but I think it would make sense to just do a Go SDK for the time being.

C: No, it would need to be different. You might want, say, archive data under one path and long-term data under another; there need to be different paths.

C: I don't think it would go through, because we'd be trying to make multiple mounts to the same path and then trying to overwrite the credentials and connection information. Right now we have it error if it already detects credential information in the directory.
B: Okay, yeah. And bucket.yaml, is that what's in the protocol structure? Is it still that thing?

C: There's a credentials.json and a protocol.json, I think. The protocol one just contains the value from the protocol field in the API.
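For illustration, a protocol.json along these lines would just mirror the bucket's protocol field; the field names are assumptions, since the transcript doesn't spell out the schema:

```json
{
  "protocol": "s3",
  "bucketName": "example-bucket",
  "endpoint": "https://s3.example.com",
  "region": "us-east-1"
}
```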
B: I see, okay, so from the bucket, got it. So bucket name and whatever else is there.

C: We don't have support for certificates yet. I think that was another point of discussion that we didn't really reach a resolution on; that's something we still need to figure out.

B: Okay, so the clients would need to know where the certificates are, number one. Or we could say the certificates are always at this path, and if the path doesn't exist, you don't use certs.
D: This is already a problem that people who make images have to deal with. I do think it's conceivable you would have object storage with some CA that's not in the standard list of root CAs, and you'd want to be able to supply that certificate and have a client consume it, but that seems like a nice-to-have. In general, you always have to worry about the list of root CAs in your pod definition anyways.

F: If your application also needs to talk to some other servers, then you may need to make sure that its CA is being trusted somehow. But if you want the application, which may have a CA registered for this other thing it needs to talk to, to be portable across multiple object storage implementations, then we should not push the onus of getting the CAs for those various storage systems into the image onto the user.

D: No, no, no. If you want portability today, the way you would do that would just be to make sure that you're signed by one of the root CAs, so that you can obtain trust that way. As long as your object storage server has a certificate signed by one of the root CAs, then you're portable.
D: Yeah, I guess my only worry is, if you try to specify it, we're gonna have to be very strict about it: do all implementations have to provide it? Because if there's one that doesn't have one, or doesn't know what to provide, you can't leave it blank, right? If we make it part of the spec, then it becomes required and you can't not have it, and so we'd have to think through what to do.

D: Or you need a very clear signal, I guess, in the top-level file, saying "here's the root CA" or "I don't have a root CA", so that apps would know what to expect when they're going through their inputs.
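That clear signal could look something like the sketch below; the field name and the empty-value convention are assumptions for illustration, not anything the spec defines:

```yaml
# hypothetical bucket.yaml fragment
protocol: s3
endpoint: https://objects.example.com
# Either carry the custom root CA inline...
certificateAuthority: |
  -----BEGIN CERTIFICATE-----
  (PEM data)
  -----END CERTIFICATE-----
# ...or state explicitly that there is none, so apps know what to expect:
# certificateAuthority: ""
```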
C: Under the response for ProvisionerGrantBucketAccess, right now we have a few fields: account ID, credentials file content, which is the actual...

B: I think the original intention of having this was to allow turnkey migration to COSI, where drivers would tell you "hey, I'm AWS, so you put my files in home/.aws/config", or whatever the file was for each of the drivers. That way a workload could just change some configuration in the Kubernetes pod spec and continue working without any changes to itself. But I think somewhere along the way... I think Ben was also saying this.
B: I think it's reasonable to expect applications to make this change of reading config a different way in order to start using COSI, because I don't think there's another way that works across all the different cloud providers or all the different kinds of configurations.

D: Yeah, I think what would be reasonable would be: if you have something that just expects standard AWS credentials and wants to run that way, what you could have is an init container that knows how to take what COSI supplies, convert it into that format, and put it in the appropriate place, so that the actual application can run unmodified.
B: Because if you were to write an SDK, one of the problems is we would end up writing it in each of the languages, or somehow having it work with multiple application languages. But if we were to use the init container that just translates, we only have to support the three protocols that we have right now: it takes a COSI spec, you know, bucket.yaml and everything, and converts it into S3-specific values or GCS-specific values or Azure-specific values.
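A toy sketch of the S3 side of that translation; the function name and input names are made up, and the real credential file schema is whatever the spec settles on. The idea is just to render the access key pair COSI hands over as a standard AWS credentials profile.

```go
package main

import "fmt"

// cosiToAWSProfile sketches what a translating init container could do for
// the S3 protocol: take the access key pair that COSI supplies (input names
// here are assumptions) and render it in the ~/.aws/credentials profile
// format that unmodified S3 applications already understand.
func cosiToAWSProfile(profile, accessKeyID, secretAccessKey string) string {
	return fmt.Sprintf(
		"[%s]\naws_access_key_id = %s\naws_secret_access_key = %s\n",
		profile, accessKeyID, secretAccessKey)
}

func main() {
	// An init container would write this to a file on a shared volume,
	// e.g. under the application's home directory, before the app starts.
	fmt.Print(cosiToAWSProfile("default", "AKIAEXAMPLE", "SECRETEXAMPLE"))
}
```

Equivalent small translators for the GCS and Azure protocols would round out the three cases mentioned above.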
H: But if you have this, why not run it in the node adapter? If you have these few schemes of what you support, you would have like a COSI-native one for...

D: Yeah, we want this to be outside the specification. The specification will tell you: you get this bucket.yaml file, and here's what's in it. All of this other stuff would be below that, where something consumes bucket.yaml and produces the translated form. It's basically convenience routines for people who don't want to do it themselves, but it wouldn't be part of the spec. The spec will just have the bucket.yaml and what you get out of it.
B: All right, let's do that, so the credentials file path does not make sense anymore.

C: Yep, I'm opening a PR shortly; I'll take that one in.

B: Hey, so this structure, this is just supposed to be an opaque string as far as we're concerned; we just pass it along?

C: Yep, so that'll be mounted into the credentials file.
H: By the way, about the BucketAccessRequest: if I'm writing the init container, am I able to read the same information from the BAR through the Kubernetes API, or is it only accessible through the downward API?

H: Say, I'll give you an example: I could write an init container that reads directly from it. I mean, maybe not even an init container, maybe some automation.

B: Yeah, that's a good question, and that does seem like a valid use case. To be honest, though, I can't imagine how exactly it would be used, but it somehow strikes me as an important use case.

H: Does it occur anywhere else, do you think, to keep credentials in other YAMLs which are not secrets?
D: I would say no. I would say if a workload really wants to do this work on its own and go read the BA and the BAR, just grant that application a role where it can read BAs, which sort of gives it superuser-level access.
D: What makes it more secure is that it's not namespaced, and default RBAC doesn't allow you to read it, so they're invisible to ordinary users. You have to be granted a role that lets you read them before you can even see them, and that's the security: somebody has to grant you a role.
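The role grant being described could be sketched like this; the API group and resource name are assumptions about the alpha API, not confirmed in the discussion:

```yaml
# ClusterRole for the cluster-scoped (non-namespaced) BucketAccess objects;
# without a binding to something like this, they stay invisible to users.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: bucketaccess-reader
rules:
  - apiGroups: ["objectstorage.k8s.io"]   # assumed API group
    resources: ["bucketaccesses"]         # assumed resource name
    verbs: ["get", "list", "watch"]
```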
H: Okay, so maybe the only thing I'm worried about is that keeping the same credentials in BAs might not be right; if we're doing it this way, maybe we're doing that wrong as well, regardless of this question.

B: Okay, so the BA has... no, it doesn't have credentials. It has a pointer to the credentials, and it has information about the bucket that the credentials belong to. That's about it, actually.

B: Yeah, I think you just gave the answer. Given that, I think it would be okay to put the bucket name back into the BucketRequest status and the secret name back into the BucketAccessRequest status. That would be enough, right?
H: Is it always needed, though? I'm not sure that every BAR would want to always bring the credentials back to the workload as secrets. Perhaps we can still do that as the normal path by default: just mount it, the node adapter will expose it to the pods, and that's it. But if a BAR specifically requests to create a secret with credentials, would it make sense to create it in the user namespace?

H: The one I want to create, right? I want the secret with the sensitive information: the credentials and whatever else we're kind of passing in the YAML of the bucket.

B: Yeah, instead of the provisioner namespace, which is the default. I see. I don't know, the abstraction just seems like it's not symmetric.

H: Okay, let's... yeah.
H: We could support, though, putting it in two secrets: one for the provisioner and one for the user, right?

H: Anyway, it's just another complexity. I'm not sure it's worth it; let's see if this desire to use the Kubernetes API comes up again. I don't think it's clear that we have to have it.

B: Right. Is there anything else that follows this pattern, where they want to be able to access the same object from two places? That split-brain problem is definitely going to happen if you do this, where you have two versions of the same secret.
B: Yeah, I think this is a lot of complexity. I think it's best to wait and see if anyone has this request, and then go from there.

B: Yeah, anyways, they technically have access, but it's not automated, like you said. Okay, so Chris just brought up this issue. Let's finish up some of the engineering questions that people have today, and next week, I mean on Thursday, I want to re-prioritize a little bit. I don't think the next thing we should do is get into performance metrics or anything.
B: I think the next thing we should talk about is actual implementation: vendors implementing it, where we can go with it or how far we've come with it, and whether they need any help or anything like that. But right now, let's address this question that Chris brought up. Chris, do you want to explain it?
C: Yeah, sure. There's an old issue that says protocol structures shouldn't have the bucket name. Right now, if we look in the gRPC and protobuf spec, ProvisionerCreateBucket has a name field, and each of the protocols basically has an equivalent name field, whether it's container name or bucket name or whatever, basically equivalent to just "name". So there's a duplication there, and it's unclear which one does what.

C: Why are there two? There's an issue here to revise this, whatever "revise" means, whether that's getting rid of the one in ProvisionerCreateBucket or removing it from the protocol.
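Roughly, the duplication under discussion looks like this; message and field names are paraphrased for illustration, not copied from the actual .proto:

```protobuf
message ProvisionerCreateBucketRequest {
  string name = 1;            // top-level bucket name
  Protocol protocol = 2;
}

message Protocol {
  oneof type {
    S3 s3 = 1;
    AzureBlob azure_blob = 2;
  }
}

message S3 {
  string bucket_name = 1;     // duplicates the top-level name
  string region = 2;
}

message AzureBlob {
  string container_name = 1;  // the "equivalent name field"
  string storage_account = 2;
}
```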
C: I agree. I mean, we can add the names the other way, right, because the name comes from the bucket ID.

B: The only problem is Azure Blob: the name is not just the name, it's container name plus storage account, and I couldn't tell if there is some concatenation or construction of this that makes it a single string.
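One possible construction, purely as an illustration: join the storage account and container into one identifier and split it back when talking to the Blob API. The separator, and the very idea of flattening, are assumptions here, not anything Azure or the spec defines.

```go
package main

import (
	"fmt"
	"strings"
)

// azureBucketID sketches one way to flatten Azure's two-level naming into a
// single bucket identifier, since a blob container is only unique within its
// storage account. The "account/container" separator is an assumption.
func azureBucketID(storageAccount, container string) string {
	return storageAccount + "/" + container
}

// splitAzureBucketID reverses the flattening.
func splitAzureBucketID(id string) (account, container string) {
	parts := strings.SplitN(id, "/", 2)
	if len(parts) != 2 {
		return id, ""
	}
	return parts[0], parts[1]
}

func main() {
	fmt.Println(azureBucketID("mystorageacct", "logs"))
}
```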
C: Yeah, I was trying to read about that as well. It seems like the storage account is basically another layer of abstraction on top of a bucket name. One storage account can have multiple buckets, but the way it's logically organized is that the storage account has containers, I think, not buckets; they call them containers. So it would make sense to have that inside of the protocol, right?

F: On that one, maybe VNA can give some insight, because he has worked on the Blob APIs.

F: Go ahead. Basically, how do account names, storage names, and container names relate to buckets?
E: Storage accounts, right, yeah. So in Azure it's not called a bucket, it's called a storage account, but it's more or less the same, right? What's the container, then?

H: No, no, no. From my experience, and I'm not sure if I'm missing something about what you just said, the storage account is like a general group that you connect to in Blob, and then you create containers, and every container is a bucket; a container has blobs in it.

H: A container is a bucket, and in that sense, when you go into a storage account in the Azure portal or something, you see a list of all your containers. It's just a flat list.
B: So the question is, like Guy was saying: can you configure a container the way you can configure a storage account? I'm trying to ask, can you apply bucket-level policies?

H: That's a key and procedural difference between these, which I think makes the container the thing we need to be provisioning.

E: In Amazon you have a bucket, and you have an access key and a secret key to access the bucket, and in Azure you have exactly the same thing for accessing your storage account: you have an access key. It's not exactly the same name, but it's similar.
B: On to our next question: is it reasonable to say a particular namespace will always provision from a handful of storage accounts? Let's just say one storage account per namespace that provisions buckets.

B: Let's say we pre-create a bunch of bucket classes, each with a different storage account, and the admin goes and tells the users, "hey, use bucket class one or bucket class two". What I'm trying to ask is: if you have a bucket class hard-coding a storage account, and then you create containers using that storage account, is that flexible enough for regular use cases?
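As a sketch of that admin flow; the API version, provisioner name, and parameter key are assumptions for illustration:

```yaml
# Two admin-created BucketClasses, each hard-coding a different
# Azure storage account; users pick one by class name.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClass
metadata:
  name: bucket-class-one
provisioner: azure.objectstorage.k8s.io   # assumed driver name
parameters:
  storageAccount: teamonestorageacct      # assumed parameter key
---
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClass
metadata:
  name: bucket-class-two
provisioner: azure.objectstorage.k8s.io
parameters:
  storageAccount: teamtwostorageacct
```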
E: No, no, because in Azure a resource group is a resource, and you can have multiple storage accounts bound to a resource group, but everything you do, for instance your guarantees and stuff like that, is bound to a storage account.

E: You cannot have guarantees per container, for instance. Typically a storage account can do, I don't know, 200 gigabytes alone, and that's a performance bound tied to one storage account, but for a container there are no such quantities.

E: Exactly, like a prefix in Amazon. So I think it would be interesting; sometimes an application wants to be prefixed, for some reason, with multiple applications sharing the account.
B: So what he's saying is: you create a bucket with a particular storage policy, like reduced redundancy or high performance or whatever, and the policy is applied on that bucket. This could apply to, for instance, a team that does data engineering and another team that does ETL.

B: These two teams will write under different folders or different prefixes under the same bucket, but the admin applies policies on that bucket as a whole. The equivalent of that in Azure would be: a storage account would be the equivalent of a bucket, and a container would be that prefix, the folder in which each of the different tasks is written. At a global level, any prefix this team provisions from that bucket ends up having the policy that's defined for the entire team. That's the perspective he's coming from.
H: Perhaps, but to me, and I worked with Azure for a while, in my experience you just have a handful of storage accounts; it's not really something you allocate too often. And you still have to provision containers as well, so that's another provisioner.

H: So containers are created ad hoc from...