Description
Kubernetes Storage Special-Interest-Group (SIG) Object Bucket API Standup Meeting - 11 May 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A
Ideally you'll just maintain a simple map or some data structure in memory that just says there's a bucket by this name. It is not meant for supporting any data APIs, I/O APIs, like, you know, putting data into it or reading from it. It's just meant to simulate the provisioning side of things. Nicholas has been working on it and he has an update to share with us. So let's start with that. Okay, we're recording.
B
Okay, wait, what?
C
D
B
Okay, okay, okay. So as Sid mentioned, indeed, after the conversation last week, I went ahead and implemented this sample driver.
B
Just let me know when it goes away. So I implemented this in a private repository; well, I mean, not the kubernetes-sigs, not quite a sigs repository yet, but I think the intent is to move it there.
B
What I did was indeed implement the whole gRPC API, and then also added a bunch of things which drivers that are not necessarily within the Kubernetes organization may want to have as well when it comes to CI, et cetera. Of course, all of that is opinionated; others may want to do CI differently, which is fine.
B
It's just a couple of examples. Code-wise it's really not that much; maybe I'll just go through it. I can first show the Dockerfile: it's a very simple Dockerfile which performs a build, in which you can run the unit tests, and which performs the actual build of the final container. So there are two containers: one in which the binary is built, and one which is what you actually ship into a Kubernetes cluster. Other than that, there's not much in the root, code-wise.
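The two-stage layout described here can be sketched roughly as follows; the Go version, paths, and image names are illustrative assumptions, not the actual repository's Dockerfile.

```dockerfile
# Stage 1: build (and optionally test) the provisioner binary.
FROM golang:1.16 AS builder
WORKDIR /src
COPY . .
# Unit tests can run in this stage: docker build --target builder ...
RUN go test ./... && CGO_ENABLED=0 go build -o /out/sample-provisioner ./cmd/sample-provisioner

# Stage 2: the minimal image that actually ships into a Kubernetes cluster.
FROM gcr.io/distroless/static
COPY --from=builder /out/sample-provisioner /sample-provisioner
ENTRYPOINT ["/sample-provisioner"]
```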
B
I split it up into one command, so the main entry point for the provisioner, and two packages, which are kind of reusable code. This is not public API right now, so whether we see these as something other provisioners can build upon is to be seen. When it comes to the command, this is basically what you want to implement as a provisioner, as a vendor who wants to have a provisioner for a storage system.
B
The main entry point is really fairly simple. The provisioner name obviously needs to change, and the Go import path etc. needs to change as well; I just named everything for this repo.
B
All you can provide right now is an endpoint, which is where the provisioner will listen, as in Unix; so the provisioner itself is a gRPC server. The endpoint is the Unix domain socket path you want to use, or a TCP socket, like an IP address and a port number that you want to use.
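A minimal sketch of such an entry point, assuming a hypothetical `--endpoint` flag that accepts either a `unix://` or a `tcp://` address; the names here are illustrative, not the sample driver's actual code.

```go
// Sketch of a provisioner entry point: parse an --endpoint flag and
// listen on either a Unix domain socket or a TCP address.
package main

import (
	"flag"
	"fmt"
	"net"
	"strings"
)

// parseEndpoint splits "unix:///var/run/cosi.sock" or "tcp://0.0.0.0:9000"
// into a network ("unix"/"tcp") and an address for net.Listen.
func parseEndpoint(ep string) (network, addr string, err error) {
	switch {
	case strings.HasPrefix(ep, "unix://"):
		return "unix", strings.TrimPrefix(ep, "unix://"), nil
	case strings.HasPrefix(ep, "tcp://"):
		return "tcp", strings.TrimPrefix(ep, "tcp://"), nil
	}
	return "", "", fmt.Errorf("unsupported endpoint %q", ep)
}

func main() {
	endpoint := flag.String("endpoint", "tcp://127.0.0.1:0",
		"address the provisioner's gRPC server listens on")
	flag.Parse()

	network, addr, err := parseEndpoint(*endpoint)
	if err != nil {
		panic(err)
	}
	lis, err := net.Listen(network, addr)
	if err != nil {
		panic(err)
	}
	defer lis.Close()
	fmt.Println("listening on", lis.Addr())
	// A real driver would now hand lis to a grpc.Server and Serve(lis).
}
```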
B
I was thinking of building a small CLI utility as part of this as well, which allows you to easily invoke, from the CLI, RPC calls on a COSI provisioner gRPC endpoint, purely for testing purposes.
A
So there is... yeah, so we can talk about that. There's something called grpcurl, which we can use for that kind of testing, which is quite interesting. In the meantime, I have a few questions about this, though. So how does it create buckets? Does it use, like, an in-memory data structure of some sort? Yes.
B
That is parsing of the command-line arguments; then the provisioner server is what you actually need to implement as a storage system author. We have two servers in the COSI spec: there is the identity server and the provisioner server. The identity server I simplified a bit, and I can go through that in just a minute, such that you don't really need to implement the identity server yourself.
B
I built a wrapper which basically just takes a string and gives you an identity server as a result, and then there's a bit of code to, well, basically set up the whole gRPC server, signal handling and whatnot, which is likely shareable between multiple provisioners. But again, it is up to you whether or not you want to do so. Then on to the actual provisioner server. So this is where the provisioner server gRPC interface is implemented, and indeed it's super simple, this one; it's purely in memory.
B
So if you restart the pod in which the provisioner is running, then everything is gone; it doesn't really try to remember anything, which would complicate things a bit. The data model, as I just put on Slack as well, is slightly weird, because I keep the access credentials that are created to access a bucket within that bucket. I don't keep the accounts global, so you, at the storage system level, may want to change that; but functionally this works, it just doesn't necessarily map easily to a real storage system.
B
Yeah, that makes sense. So I have buckets, and a bucket has an ID, which I chose to be a UUID for simplicity's sake, but of course that depends on the back-end storage system. A bucket has a name, and a bucket may have had a bunch of parameters passed in at creation time, and we need to keep those around to ensure one of the idempotency guarantees that the COSI spec requires. And then, within a bucket, I keep the accounts that are created for that bucket around as well.
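The data model described here, buckets holding their accounts, kept in two lookup maps behind a read-write mutex, could be sketched like this; the field names are illustrative guesses, not the sample driver's actual code.

```go
// In-memory data model: accounts live inside the bucket they were
// created for, and creation parameters are kept to check idempotency.
package main

import (
	"fmt"
	"sync"
)

// Account represents credentials created for a bucket.
type Account struct {
	ID         string
	Name       string
	Parameters map[string]string
}

// Bucket keeps its creation parameters and the accounts created for it.
type Bucket struct {
	ID         string // a UUID in the sample driver
	Name       string
	Parameters map[string]string
	Accounts   map[string]*Account
}

// Store is the provisioner's state: lookup by name and by ID, both
// guarded by one read-write mutex.
type Store struct {
	mu     sync.RWMutex
	byName map[string]*Bucket
	byID   map[string]*Bucket
}

// Add registers a bucket in both lookup maps.
func (s *Store) Add(b *Bucket) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.byName[b.Name] = b
	s.byID[b.ID] = b
}

func main() {
	s := &Store{byName: map[string]*Bucket{}, byID: map[string]*Bucket{}}
	b := &Bucket{
		ID:         "3f8e-0000", // stand-in for a generated UUID
		Name:       "photos",
		Parameters: map[string]string{"region": "us-east-1"},
		Accounts:   map[string]*Account{},
	}
	s.Add(b)
	fmt.Println(s.byName["photos"] == s.byID[b.ID]) // prints "true": both maps share the bucket
}
```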
B
Behind the read-write mutex there is both a lookup by name as well as a lookup by UUID, because you kind of need both to implement the gRPC calls. An account itself: it's got an ID, it's got a name, it's got an access policy, but this thing doesn't implement any kind of access policy, because it's not providing any actual storage; so I just keep that around. And then, similarly.
B
The parameters that were passed in when the account was created are kept as well, again to make sure that we can implement the calls that require idempotency correctly.
B
Then the provisioner server itself: it's a struct in which you can keep, for example, a client to your storage system's API, if that's applicable. In this case, I just need to keep a mapping of buckets, either by their name or by their ID, their UUID, to the bucket instance; again, the maps are behind the read-write mutex. And I make this struct inherit (I know that's not the right word in Go speak) the UnimplementedProvisionerServer, such that... and, of course, at this point in time.
B
Maybe I shouldn't do so, but if at some point we have, say, the 1.0 release of the COSI gRPC spec, then any provisioner could inherit from the unimplemented one, such that any new calls that are added post-1.0 to the gRPC spec would be optional calls; otherwise we're breaking backwards compatibility. And then, using this approach, any provisioner...
A
Yeah, so don't you think it's better if we actually failed if the calls were not implemented, rather than succeed or, you know, fail silently? It's not really... so adding new calls doesn't break backward compatibility; removing old calls breaks backward compatibility. So in this case, I mean, I like the idea of having this unimplemented provisioner server, but in terms of guidelines I wouldn't say that just because you have this unimplemented interface (I'm guessing it's an interface, I don't see it here), just because you have this unimplemented interface...
A
...you know, it does not mean your provisioner is actually API-compliant. It will compile with whatever new dependencies we bring in. However, I see why you've put it there, because, you know, it's a great way to have the compiler type-check whether the provisioner server implements all the different APIs. Yeah, that's why line 54... that is a type check, yeah. That's what I'm saying, yeah.
B
So this type itself is from the container object storage interface spec, and UnimplementedProvisionerServer is a struct provided by the protobuf, or the gRPC, compiler, and it's basically a struct which implements all the calls that ProvisionerServer has and makes them return Unimplemented.
A
Got it, yeah. That was going to be my real question, really: shouldn't we put this in the central gRPC server somewhere? If it's already there, this is great.
B
So then, on line 54, there's a fairly standard type check, just making sure that I actually implement this ProvisionerServer, which is always the case, because indeed I inherit from UnimplementedProvisionerServer. You can create a new one, which sets up those maps, those in-memory maps, and then there's the implementation of the various calls. So in CreateBucket I only support S3 buckets.
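The embedding-plus-type-check pattern discussed here can be sketched as follows; the interface and the unimplemented struct are simplified stand-ins for the proto-generated COSI code, not the real generated API.

```go
package main

import (
	"errors"
	"fmt"
)

// Stand-in for the proto-generated service interface.
type ProvisionerServer interface {
	CreateBucket(name string) (string, error)
	DeleteBucket(id string) error
}

// Stand-in for the generated UnimplementedProvisionerServer: every
// method returns an "unimplemented" error by default, so calls added
// to the spec later stay optional instead of breaking the build.
type UnimplementedProvisionerServer struct{}

func (UnimplementedProvisionerServer) CreateBucket(string) (string, error) {
	return "", errors.New("rpc error: code = Unimplemented")
}
func (UnimplementedProvisionerServer) DeleteBucket(string) error {
	return errors.New("rpc error: code = Unimplemented")
}

// The driver embeds the unimplemented server and overrides only the
// calls it supports.
type sampleServer struct {
	UnimplementedProvisionerServer
	buckets map[string]string
}

func (s *sampleServer) CreateBucket(name string) (string, error) {
	id := "uuid-for-" + name // a real driver would generate a UUID
	s.buckets[id] = name
	return id, nil
}

// Compile-time check (the "line 54" type check): sampleServer must
// satisfy the full ProvisionerServer interface.
var _ ProvisionerServer = (*sampleServer)(nil)

func main() {
	s := &sampleServer{buckets: map[string]string{}}
	id, _ := s.CreateBucket("photos")
	fmt.Println(id)                 // prints "uuid-for-photos"
	fmt.Println(s.DeleteBucket(id)) // falls through to the unimplemented default
}
```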
B
One thing I noticed while implementing this is that, unlike in the CSI spec, which errors are supposed to be returned is not specified in the COSI spec. In this case, if I only support S3 and the create bucket request requests, say, an Azure Blob bucket, then returning InvalidArgument, I think, makes the most sense given the set of standardized error codes, but we may want to make that more explicit in the spec.
A
Yeah, the point here makes a lot of sense, yeah. We need your help, Nicholas, to fill out these things; I would really appreciate it if you would, you know, look into it and maybe even make a docs PR, yeah, with the right error codes. That would be really good.
B
So then, in the implementation, I do a fast-path check on whether the bucket already exists.
B
If it doesn't exist, I need to create it and add it to the two lookup maps. I then validate whether the parameters that are passed into the create bucket request are the same as the parameters with which the bucket was created before. If that is not the case, then according to the spec you have to return an AlreadyExists error, which is what I do below. And if everything is fine, then either it already existed with those parameters or it has now been created with those parameters.
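That idempotency rule, same name with the same parameters succeeds while the same name with different parameters is rejected, could be sketched like this; the sentinel error stands in for a gRPC AlreadyExists status, and the names are illustrative, not the sample driver's actual code.

```go
package main

import (
	"errors"
	"fmt"
	"reflect"
)

// Stand-in for returning a gRPC status with codes.AlreadyExists.
var errAlreadyExists = errors.New("already exists with different parameters")

type bucket struct {
	id     string
	params map[string]string
}

type server struct {
	byName map[string]*bucket
	nextID int
}

// CreateBucket is idempotent: repeating the call with the same name and
// parameters returns the existing bucket's ID; a name collision with
// different parameters is rejected.
func (s *server) CreateBucket(name string, params map[string]string) (string, error) {
	if b, ok := s.byName[name]; ok {
		if !reflect.DeepEqual(b.params, params) {
			return "", errAlreadyExists
		}
		return b.id, nil // same parameters: succeed idempotently
	}
	s.nextID++
	b := &bucket{id: fmt.Sprintf("bkt-%d", s.nextID), params: params}
	s.byName[name] = b
	return b.id, nil
}

func main() {
	s := &server{byName: map[string]*bucket{}}
	id1, _ := s.CreateBucket("photos", map[string]string{"region": "us"})
	id2, _ := s.CreateBucket("photos", map[string]string{"region": "us"})
	fmt.Println(id1 == id2) // prints "true": the repeated call is a no-op
	_, err := s.CreateBucket("photos", map[string]string{"region": "eu"})
	fmt.Println(err) // the AlreadyExists stand-in error
}
```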
A
So, a few things here, it seems.
B
As I've said before, the strange thing here is that indeed I keep accounts inside the bucket; I don't keep them global at the storage system level. I could. I was not aware that within COSI one account, for a given account name, can be requested access to multiple buckets.
B
Again, it doesn't really change much, because even if that is the case, this code will still work; but it doesn't really map to how a real-world storage system would most likely be designed, where, indeed, you have two top-level entities, one being accounts, one being buckets, and then there are links between those two when a particular account has been given a particular policy's access to a particular bucket.
A
So, okay, so one is, you know, having it represent how the storage system looks in general would probably make things easier, but I see what you mean here. In this case, you know, I don't have a specific reason why this wouldn't work. That being said... so you were saying... let me think about this. So you're saying that here, yeah, yeah.
A
This is the thing I want to bring up. So you're saying that one account can actually be used by multiple buckets. I don't think that's possible, because the way we've modeled it is, we always create a new account; we send a new provisioner GrantBucketAccess...
G
A
...each time, each time we ask for a new bucket, and the account name that we pass through there is a unique UUID. I mean, I'm sure there is a way we can have, you know, the same account used for multiple different credentials, but that is very unlikely; I mean, it's an anti-pattern, basically, the way...
B
...the way the COSI gRPC spec is designed.
D
B
I built this piecemeal, step by step, and that got me to put these accounts within a bucket, and it's only later that I realized that it doesn't quite match a real system; it fits perfectly in COSI, as in, it works, and there is no real reason why this model would be incorrect.
A
No, I don't understand; like, what do you mean by that? For instance, are you saying that the way you've implemented it here doesn't match how storage systems work, or do you mean that the way we've kept accounts and buckets as separate entities is not the general way in which they're implemented?
B
G
B
But... and here I can only speak for the Scality implementation, because I don't know how other storage systems work internally, but we keep two, not two databases, but two tables, where in one we track accounts and in another one we track buckets. So it's not the case that, if you delete a bucket, all the accounts that had access to that bucket would be deleted with it, as it is in this current prototype, this current sample.
A
Well, actually, it doesn't, right? Because there are use cases where we delete the bucket first. Like, let's say, you know, you have the deletion policy set to force, and someone's supposed to go and clean up the accesses after. That use case would not be represented by this data structure.
B
But the strange thing, then, is, if I'm not mistaken... let me scroll down. So this is GrantBucketAccess, where I do a couple of pre-checks, or sorry, I check whether the account already exists, and if not, create it, and do a couple of checks: whether the parameters match, and the access policy matches, and so on. But I believe that within delete you pass in the bucket ID.
B
No, it's a question, because of the scenario you just explained. Yeah, we don't really have a segregation between buckets and accounts, because even RevokeBucketAccess has a bucket ID and an account ID in it.
A
No, no, I'm saying... let me put it differently. So assume that the bucket is already gone and RevokeBucketAccess gets called. So in that case you would get a revoke request with a bucket ID that no longer exists.
A
B
A
Yeah, I'm saying it doesn't model the COSI API right now, then, because you can have a situation where revoke is called after delete. I mean, like, in Kubernetes, you should always account for that; order of operations does not matter, should not matter, and it should be idempotent. That means it should return success, because it's already gone. Would it help if you modeled the account as a separate data structure?
A
B
A
B
A lot of work? No, it's not a lot of work; it's really easy. But then my question is: RevokeBucketAccess, then, means revoke access for the given account ID to the given bucket ID?
A
So, okay, so I think I see where you're coming from. So you're saying we're just taking one account, and for that account, you know, we could have multiple different accesses, but we're removing this specific access. That's what you're saying, right? So if that's the case, when does the account get deleted? Right, yeah, yeah.
A
So I think maybe the phrase "revoke bucket access" is what's confusing here, because when we use the word revoke, or use the phrase "revoke bucket access", what you're really saying is revoke access for that bucket. But what we should say, or, you know, between us in conversation (and I don't know if we should actually change the name of this call, because it's a pretty common pattern)...
A
...what it really means is: remove that account associated with accessing that bucket. And it comes from the fact that whenever we grant bucket access, we create a new account.
A
That's where the phrase "grant bucket access" comes from, and a revoke actually deletes that whole account, and by deleting the account you also remove the access to that bucket.
A
Oh yeah, that's the idea: one bucket, one account; it's a one-to-one mapping.
A
B
Well, again, I don't know, but now, in the case of RevokeBucketAccess: if we say that accounts are separate from buckets, and it just happens that for every bucket access we create a new account and we say "give access to this bucket", and then the account gets returned, then is there a reason we pass in the bucket ID in the RevokeBucketAccessRequest?
F
A
Right, so there was a good reason for it... let me remember, yeah. So the reason for it is... let's actually go back and look at the API. Maybe the reason for it is likely just that we haven't removed it from the API, to be honest. If you can open kubernetes-sigs/container-object-storage-interface-spec, we can go and take a look at it.
A
B
A
Yeah, yeah, yeah, this is good. So I think there's some lag. All right, so let's see: GrantBucketAccessRequest... GrantBucketAccessRequest and RevokeBucketAccessRequest; account ID, and, yeah, bucket ID is here. So let's talk about this. So if account ID is the globally unique identifier... and is that what we say in the GrantBucketAccessResponse, that this will be required to revoke access? Yeah.
A
So, however, okay, let's talk about this; this is something we can address. I want to address it with everyone in the group, but in the meantime, can we discuss... in a sense, can we actually implement the sample driver without this thing in the PR holding us back? Because if you go to the sample driver... now I understand, so you're saying that because the API is designed that way...
A
...you know, you're going and implementing your API the way you're implementing it, but that's actually wrong here, because what you're doing is, you're not satisfying the API still. So what is happening here is, because of the way this is modeled, we are not able to simulate the force-delete use case. So I understand there's something that could be changed in the spec; however, that shouldn't stop us from implementing this, right? Correct, yeah. So I believe...
B
If, in this implementation, I basically change line 197, such that if the bucket for the bucket ID that you pass in in a RevokeBucketAccessRequest no longer exists, then I say: return a RevokeBucketAccessResponse and nil.
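A sketch of that change: returning success when the bucket behind a revoke request is already gone, so that the order of delete and revoke does not matter. The types are simplified stand-ins, not the sample driver's actual code.

```go
package main

import "fmt"

// driver keeps, per bucket ID, the set of account IDs with access.
type driver struct {
	access map[string]map[string]bool
}

// RevokeBucketAccess is idempotent: it returns nil both when it removes
// the access and when the bucket (or the access) is already gone, so
// calling revoke after the bucket was force-deleted still succeeds.
func (d *driver) RevokeBucketAccess(bucketID, accountID string) error {
	accounts, ok := d.access[bucketID]
	if !ok {
		return nil // bucket already gone: treat the revoke as a success
	}
	delete(accounts, accountID)
	return nil
}

func main() {
	d := &driver{access: map[string]map[string]bool{
		"bkt-1": {"acct-1": true},
	}}
	fmt.Println(d.RevokeBucketAccess("bkt-1", "acct-1")) // prints "<nil>"
	delete(d.access, "bkt-1")                            // simulate a force delete of the bucket
	fmt.Println(d.RevokeBucketAccess("bkt-1", "acct-1")) // still "<nil>": idempotent
}
```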
A
So you're saying... yeah, so that would be... yeah, so you're saying you'd return a new error, yeah, the same as line 206.
B
Yes, indeed; basically say that the bucket doesn't exist, which means that that access doesn't exist, because the access is kept within a bucket. So I'm...
B
Well, maybe I should, but even in the current data model, the intent of the API could be... I mean, I think it should be explicitly stated in the spec, but it could be adhered to by just changing line 197 into... okay, I'm done.
A
So let me ask you this: if we were to model it naturally, the way it is, then, you know, issues like this that we just found would not happen; we wouldn't even have to... So what we're trying to do right now is coming up with a new model on top of how things work right now, and what that requires is for us to deal with edge cases like this.
A
A better abstraction would be to model the system exactly the way it is, you know, expected to behave. When you do it that way, then these edge cases don't even exist, and maintaining this code will be much simpler. It's just good abstractions.
B
F
I am telling you, yeah, so I didn't.
A
The KEP actually clearly says it; the KEP has this information about how... and also, not just the KEP, just through inference: there is no other way this could work other than having a new account created per request. And this whole reverse mapping of... because the bucket ID is there in the revoke access call, I have to model the API this way; that doesn't make much sense to me, actually. So what I would suggest is we create account as a separate concept and...
B
D
D
All right, so back in the day, companies had only one account for their entire organization, right? And they would create tons of users and stuff. Now there is a new API, which is called the organization API, and it allows organizations to create multiple accounts, but that's some kind of refinement; historically, people had only one account for the entire organization, right, for the entire internet.
C
D
...had only one account and thousands of users, right? So when you're talking about... we've talked about this many times; ideally you're supposed to bind a role to the catalog, but for the v1 we cannot. So what do you expect us to do: create a user or create an account?
A
So the account cannot be associated with the bucket; the account is more like a namespace. What you mean by account is more like a namespace, which is a group of resources that are managed a particular way. You know, just because you create an account for a new bucket does not mean, you know, there are credentials associated with that account that will just naturally give you access to that specific bucket.
A
A user can be made to have access to that bucket, but not the entire account by itself.
A
A
D
Okay, okay, it's very clear! So we are supposed to create a unique new user, a dynamic user, right? And this user will have what, a unique ARN, right? It can be uniquely identified by what? What do you expect?
A
D
When you grant the access... so when you create a bucket, this is at the account level, so nobody has access except, I mean, for the account.
C
D
D
A
D
G
A question on a similar note: the account, or I mean the user, can be identified by an access key. If you look at Amazon... I mean, it really depends on whether we are looking at a user that can, you know, rotate keys or whatever, right? We discussed it quite a lot, but assuming we don't, which is what we ended up with, I guess, then basically what we're provisioning here is credentials, is access, and that, you know, can also be identified directly with an access key.
A
Right, right. So we just call it account ID in case you want to change it to, you know, have a separate identifier, like a separate unique ID, but there's nothing that stops the driver from returning the access key itself as the unique ID.
A
Yeah, the confusion is: currently, the way this project models the account, it is a sub-resource of the bucket, so when the bucket is deleted, the account goes away naturally.
A
So if you scroll up, Nicholas, you'll see that there is a data structure called bucket, and the bucket has a bunch of accounts in it. Nicholas, can you scroll up a little bit? Yeah, so line 35 is where bucket is defined, and 41 and 42 are the accounts that are held within the bucket. Now, when we delete a bucket, what happens is the bucket goes away, but then...
G
A
D
So if we were to write the Amazon S3 driver again, you could totally delete the bucket without deleting the users, right?
B
A
We can, yeah. I'm happy to bring this up with the group, and, you know, I want everyone to be here when we talk about it; like, I don't think Ben is here right now, and we have Guy and others too. We could start the discussion today, but I think, for removing a spec field, I would like to have a little bit more of a discussion before we make that decision. So that's all I have to say.
B
So, in that case, is COSI right now tracking the bucket ID in a BAR, sorry, in a BA as well?
D
A
So yeah, so in the meantime, maybe, you know... I think you can go ahead and implement it the right way, because that decision should not affect the sample driver in any form. Agreed.
B
Yeah, I mean, it should be like 20 minutes of work at most, right? So, going on through the code: I added a couple of tests as well, using Ginkgo and Gomega.
B
Yeah, a testing framework for Go, and these, or at least some of these, we may be able to reuse across multiple provisioners, because it performs high-level calls; and it's not over gRPC, it directly calls into the API, the implementation; it's not going through a gRPC socket, etc.
B
I don't really know that much about it. So that's kind of what you need to implement for a provisioner, and then I have, under pkg, what I talked about before. The identity server is really simple: it's something that, given a string, gives you an identity server, such that it's really, really easy to implement the identity server, rather than having a little bit of code duplication; and because I use this one, it's got tests as well, of course, which are really simple.
B
When I call ProvisionerGetInfo, do I get the provisioner name that was passed into the new identity server? And then cosi-provisioner is a bit of utilities to run an actual provisioner gRPC service.
B
At least it's using klog for the actual logging, and then run is a bit of wrappers to set up signal handlers; basically, you pass in your provisioner name and an implementation of ProvisionerServer, and it runs the gRPC service for you, to some extent. That one has some tests as well. And then, finally, there is deploy, which is all the stuff that you need to run.
B
So all of this allows you to easily test this in your... in a cluster, and I actually do this in the CI, which is based on GitHub Actions, where, next to running some lints and running the unit tests and building a container...
B
...I also run some, and I call them end-to-end tests; it's somewhat of an end-to-end test, where I create a kind cluster in GitHub Actions. I then deploy all the COSI prerequisites, so the API, the CRDs, the controller, the CSI adapter; deploy the sample provisioner; wait for it to be ready; also deploy a pod in which I will then run a couple of tests; wait for that one to be ready; and then what I basically do is, because we don't implement any data plane...
B
...the thing to look for is what the CSI adapter brings into that folder; it's supposed to bring it to that pod. Do I actually get it? Today there are two files, there's credentials.json, and I believe that it's going to change, but I copy those out of the pod, and then, using some stupid bash...
B
...I check whether what is contained in those files (also calling into jq to parse the JSON) equals what I expect to find in those files, because this is the information that my sample provisioner has returned to the sidecar.
B
That is basically the... the first place I want to go here: the CI of the provisioner sample. I don't think it should do tests of COSI, as in, you know, test whether, when I do a bucket request and then I do a bucket access request, a bucket exists, and then, if I delete the bucket access request, things get cleaned up again. Maybe that would be a better end-to-end test.
B
I didn't spend the effort to build that here, because those tests don't only test the provisioner; of course, they would catch bugs in, say, the controller or the CSI adapter as well. So I'm not sure where those should live.
A
Let's see. I think, I think so... for the CI effort specifically, for this project, for the sample driver itself, it's going to be simpler; I mean, like, just what you already have. I think we're going to have end-to-end tests be a completely separate repository, which will test the overall framework, where, you know, we'll integrate all the components and then test all of COSI.
B
B
A
Yeah, so I have a few questions; so this makes me think. So, I mean, originally we wanted this to be for two reasons: one is CI and one is to be the sample. Do we... or no? No, that's not what we said. So we said we're going to have this be the driver for CI alone, and we're going to have a page with all the samples shown in it, or all the vendor drivers shown in it, links to those, that being the samples.
A
So we said something like: the samples would actually be Amazon's driver, or Scality's driver, or, you know, Google's driver, and this would just be the CI system. Is that what we decided? Help me remember, please.
E
I think we thought we would have, like, one driver that is in a kubernetes COSI repo, that is a sample driver, and other drivers will be vendor drivers. We can have a document, right? So we'll have a link to those drivers in that document. So this driver... I think this is the sample driver, so we actually should move that to a new repo. You're going to submit a request to create this, right? Yeah.
A
Yeah, so my question was exactly that. So it was: do we have a separate driver only for CI and then have the vendor drivers be the, you know, samples, or, you know, do we use this driver itself as both CI and sample in one?
D
A
So yeah, that was clear from your answer: we're going to have this be both. Yeah, so, so Nicholas, if you can make that one change, I'll create the repository right away, just after this; I'll make that.
B
E
Hey, I do have a question about this; I know we talked about this one last time. So this is called a provisioner, and, as I was reading the KEP, I thought this is actually a driver, right? So it's doing the provisioning, but it is actually a driver. I thought we also... we still have a sidecar that's working together with the driver, isn't it?
A
I see where you're coming from. So they're calling the sidecar the provisioner; the sidecar is the provisioner, and this is...
A
E
If you look at how CSI works, right, we call them drivers, and then we do have a provisioner sidecar. And also, you know, what if, in the future, we have other functionalities?
E
That is, you know, not just provisioning, but we have some other functionalities.
A
That makes sense. Actually, we can call the repository driver: cosi-driver-sample.
A
Got it. So, about the KEP: that's the other thing that I want to discuss. So let's finish this conversation first. As far as the sample driver is concerned, Nicholas, I think you've obviously done a great job; just that one minor change, and, you know, we'll have the repository and we'll get it merged. Just make sure that our license files and all the different files that you put here... I think you've already done it, but, you know, minor boilerplate stuff like that.
B
All the headers and the license file we can get from the template that will be used for that new repository.
A
Right, right, okay, cool, so that's one. So the second thing is about the KEP. So, about the KEP: I think Xing and Blaine also left a few comments on it; we've addressed the comments; Jeff just had, you know, small changes to make on those comments, and, you know, we'll have it pushed within the hour or so. So, Shane, we know that Elana Hashman, is that the name? She's...
A
Yeah, production readiness, yeah. But, you know, Friday is the deadline and I...
A
G
A
It's going to... you know, our cadence is going to be affected if we do that. Is there any chance we can get another reviewer? Tim is looking... it looks like he might not respond in time, and I think...
E
Can I just suggest that you send another email to remind him? Because I think last week, because of KubeCon, I saw that he sent some tweet saying he has to get up at, like, 3am, or...
C
E
He was not really reviewing because of that.
A
Hopefully, yeah, hopefully so. Yes, I will send another email, bring it to the top of his list, and I know Saad is also trying to help, yeah. But I think, you know, it might be a good idea to at least have someone in mind, so that we know who to ping.
E
Yeah, so at this point I don't really have anyone as the API reviewer. I think most of our SIG's reviews just go to Tim. I think sometimes I see someone else; I think Clayton reviewed some of them, but he hasn't reviewed any of mine, so I don't really know him, so I don't really have a way to say "ping him". Because I think at least Tim has already looked at this one last time, he's actually better, yeah.
E
E
C
E
A
I see, yeah, yeah, yeah, okay, got it. So Clayton is our backup, you're saying, and it's unlikely to be a good backup, you're saying, because very little time is left, but we do have one name to follow up with in the worst-case scenario, right?
E
B
Okay, just one question: if we change the terminology to driver, the provisioner server remains ProvisionerServer, right? As in the gRPC interface. Oh, so you're asking that.
E
That thing should be fine, I think. We have... if you look at the CSI API, it has, like, a controller service, we have a node service, we have an identity service in there as well. So I think that is fine for now, I think, yeah, because that is in the KEP, right? Isn't it? Those terminologies are in the KEP.
E
So the only thing that I'm getting confused by is when you're calling it provisioner, but then I think it's actually a driver.
E
I'm thinking about, you know, how CSI is calling things, because that's not the sidecar. If you call that the provisioner, then what is your sidecar?
A
Wasn't that your question, Nicholas?
B
Yeah, but then there are other things, I guess, we need to change as well.
A
I don't have this... hang on, wait, let me just... I'm saying.
A
Whether we need to change it... because we're out of time, I'll quickly say what I wanted to say. I don't think we need to change it, because the provisioner is calling create bucket; so it's like the provisioner is the one initiating this call. So I don't think, you know, that disambiguation is really going to help, but I think calling it... yeah.
E
B
E
A
And also, I don't think it's going to cause any real confusion. Yeah.