Description
Meeting of the Kubernetes Storage Special Interest Group (SIG): Object Bucket Provisioning KEP review - 01 February 2018
Object Bucket Provisioning KEP - https://github.com/kubernetes/enhancements/pull/1383
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A
Okay, hi everyone, thanks for joining this KEP review. This meeting's a little bit different, and a little more exciting maybe, than the previous ones, because we're going to be seeking approval to merge this KEP. That goal might not be realized this meeting, but that's what we're hoping for, or striving for. To accomplish that, we're going to go through the meat of the KEP, but hopefully at a high level, so we can actually get through it in an hour.
A
But I don't want to suppress technical questions, because obviously we are seeking approval, and the cost of that can be that we need to go deeper into something that a person has a concern about, or that isn't clear, or whatever it is. So we can ask technical questions, and if we don't get through it all this meeting, we'll just continue in the next one.
A
So my agenda for this meeting is to give you a quick intro, go through the use cases - which are important, because you have to agree to the use cases and then believe that the KEP, as we propose it, will meet those use cases, and that we're not missing an important use case or focusing on an unimportant one - and then we'll go through the APIs. We have the gRPC spec in this KEP, and Sid, who is the technical lead, has some diagrams we can share.
A
Recordings are available on the SIG Storage YouTube channel. We've had - what did I count? - 16 reviewers on the KEP so far, and 50-something commits staying there for history; they'll potentially get squashed during a merge, but they're there, every commit is visible, and so on. The point of this is: there's a lot of activity, it's had a lot of eyeballs on it, and many iterations, name changes and so forth that you'd expect in a larger project like this.
A
So I pushed up the latest changes to the PR late yesterday afternoon, early evening, so many of you might not have had a chance to look at it. We'll go through this now, and let me know if you're able to follow my screen, because I need feedback on whether I'm scrolling too fast.
A
I have a daughter at home on a Zoom meeting right now, so I don't know how great my home network connection will be for this. I'm getting down to the use cases now, so I'm just scrolling through - you guys can read that stuff, why we're doing it and so forth, and some vocabulary. There are terms used in the KEP and we define them up front.
A
So for the admin, we want them to be able to manage buckets with the familiar kubectl and other tools that you are used to in a Kubernetes environment. The second one is: we recognize the namespace as the familiar boundary where security rules are enforced and applied, and so, for sharing buckets, we want an admin to be able to allow or restrict namespaces - that's sort of the unit that we look at in Kubernetes.
A
We also - it's not stated here, but our design does allow it - want minimum RBAC capabilities for pods running in different namespaces, with the sense that the vendor pod is the least trusted pod in the cluster. You don't know what they did, and so you don't want that vendor's driver pod - that's the pod I'm talking about - having access to your whole cluster. So our design allows that pod to stay in its own namespace for that vendor.
A
The Google, AWS, or Azure driver - it doesn't matter - they don't need RBAC rules that grant them access into any other namespace other than their own. And then from the user use case point of view, the user we're thinking of is the developer: not the person running an app, but the person in charge of developing the app and using the object store, and that developer wants familiar manifests.
A
They want to deploy their bucket-oriented app in the same way as any other app, and they want to use familiar tools and familiar-looking YAMLs or deployments, etc. The first two bullets there cover that requirement or need, and the last one I'm not exactly sure about - Sid, you can chime in - I'm sort of thinking it's a foundational API in the sense that, hey, if we have these APIs, we can build other neat things on top of them. Which is true: you could do object backup, cloning, snapshotting, other tools. That's not just a user use case; it can be a vendor's as well, or even an admin's.
C
So I have one question: do we have the concept of buckets having different roles, i.e. a read-only role, or write-exclusive, where only one app is able to write to that bucket, being coordinated through this?
A
I'm going to give you the high-level answer, and if it's not good enough, Sid can chime in. We have a provisioning set of APIs, and in parallel we have an access-request, access-control set of APIs - we have two of them - but we are not trying to be an IAM system or something like that. So what we do define in one of the APIs is related to access modes of read-only, read-write, write-only, and we have a public...
D
Yeah, so we support whatever the underlying cloud provider or backend object storage requires. What Jeff is talking about, we call it the anonymous access mode, where we give catch-all authorizations per bucket - like you said, read-only, read-write or write-only - or private, where you have to explicitly add a credential before it can start being used.
C
Right, so that makes sense - less about IAM, more about scheduling. If I have multiple applications that can't run simultaneously, because they both require, say, for example, exclusive write access to a bucket, is that something that Kubernetes is going to be able to correctly schedule as a result of the information provided through this interface?
D
So in such a case, we ourselves - Kubernetes itself, or COSI - will not understand that both of them are write-only roles. We let the backend take care of that, so both will be scheduled and one of them wouldn't be able to write to that bucket.
A
Yeah, Dave, I think the short answer to your question is no, and the backend wouldn't have anything to do with pod scheduling. If that actually is important, you know, there are scheduling plugins that can be added and so forth, or it could be another KEP, another enhancement down the road. That's especially true for local persistent volumes, where they need to know that information; we haven't actually given any thought to the scheduling side of things in this KEP, really.
C
And I'm assuming that there's been some thought about where you'd spin up a local pod, specifically to provide local object storage?
C
What do you mean by local object storage?
C
Where, for example, I spin up a pod that then uses a volume to expose an object interface. People do this quite often in development patterns, using, for example, MinIO or other lightweight object stores. They just need local object storage - scratch space or a persistent store.
D
Yeah, we support that.
E
Yeah, actually, I think if I understood the question right, you're talking about the implementation of the object store, and I think this KEP is almost entirely about the Kubernetes side accessing an object store. So if you have an object store that is in fact implemented in Kubernetes, it's all hidden behind the driver. How you do that is not, I think, part of this.
C
So not about the implementation, but about whether the KEP includes enough functionality for an implementation of an object store to then expose itself through this KEP. Yeah, and I think, as you go through here, hopefully that's... yeah, cool. Well, I don't want to take too much more time. Thank you.
A
Right, and if a question hasn't been answered well enough, reach out to us: add a comment in the PR for this KEP, for instance, and we will address it there. Okay, so I'm scrolling down.
A
Please let me know if my home network is messing up and you can't see this stuff. So I'm going through the APIs and, like I mentioned, there are sort of some parallel APIs. At a very high level, a user requests a bucket - we're calling it a BucketRequest. Andrew, we've iterated on this name a lot, and I won't go through the history of it.
A
But this is its current version, which we think is the most meaningful and least ambiguous, because when you say the word "bucket", what the heck are you talking about - the S3 bucket, or a Kubernetes custom resource named Bucket? It gets confusing. So this is a BucketRequest, and it is in the user's namespace, and it is there asking for access to an existing bucket, or asking for access to a new bucket that will be created for them.
A
It's important to know that all of our APIs have finalizers on them. Even on a user-created resource, COSI will add a finalizer: we want to orchestrate the entire teardown of these objects and coordinate it with the object store underneath. I'm not going to go through individual fields, but if something raises your eyebrows, ask. The user can basically state what prefix they want added to an otherwise random bucket name.
A
There's a concept of a bucket class, similar to a storage class, and statuses that you're familiar with. The other thing to call out here is - and Sid, I changed this after our conversation, so this is going to be unexpected to you, sorry - right now we don't have a way where you can fully define the bucket name. Sid, we did last night, but we don't now. But what you can do is: how do you access brownfield from a BucketRequest?
A
You know, how do you identify that bucket? It's done through field number seven there, a bucket instance name. I added the word "instance" to it because "bucket name" - what is that? Is that your S3 bucket name, or is that a Kubernetes resource? So I called it bucketInstanceName, and I think you can see the definition of it, but basically, in the brownfield BucketRequest, you name the Kubernetes custom resource, which is called a Bucket. It's cluster-wide, so you don't need a namespace; you name it there.
A
How did that Bucket instance get there? That's an admin step. It could be automated, but it's not automated by COSI - there could be a separate toolchain that does it, or an admin creates it by hand, like they did in the old days with PVs. You reference that Bucket instance in your BucketRequest, and that's how you get brownfield. You don't need a bucket class in the brownfield case.
A
The Bucket instance, which we'll go through next, has all the information needed by the provisioner and the driver protocol.
A
That's just saying, you know, we don't have portability across different object stores; it's not like POSIX or iSCSI. An application that works against Azure's protocol doesn't necessarily work in Google Cloud and so forth, and vice versa. So we require that the requester specify what protocol their app is talking, and for greenfield (new) buckets a bucket class is required, and that bucket class defines the protocols it supports, and we match them.
A
So if you want S3 and the bucket class says Azure Blob, it's a mismatch; your BucketRequest is not satisfied. It doesn't get deleted or anything - it's a typical declarative API: not satisfied, we'll back off but keep trying, we don't ever give up. So this is what the user sees when they want a bucket or access to an existing bucket.
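As a rough illustration of the user-facing, provisioning-side resource described above, here is a minimal Go sketch of what a BucketRequest type could look like. The field names (Prefix, Protocol, BucketClassName, BucketInstanceName) are assumptions drawn from this discussion, not the authoritative KEP definitions.

```go
// Hypothetical, simplified sketch of the namespaced BucketRequest resource
// discussed above; field names are illustrative, not the KEP's final API.
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// BucketRequest is created by the developer in their own namespace.
type BucketRequest struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              BucketRequestSpec `json:"spec"`
}

type BucketRequestSpec struct {
	// Prefix is prepended to an otherwise generated bucket name (greenfield).
	Prefix string `json:"prefix,omitempty"`
	// Protocol the application speaks, e.g. "s3", "gcs", "azureBlob";
	// for greenfield requests it must be matched by the BucketClass.
	Protocol string `json:"protocol"`
	// BucketClassName selects a BucketClass (greenfield only).
	BucketClassName string `json:"bucketClassName,omitempty"`
	// BucketInstanceName names an existing, cluster-scoped Bucket
	// custom resource (brownfield only).
	BucketInstanceName string `json:"bucketInstanceName,omitempty"`
}
```

In this sketch, a greenfield request would set bucketClassName and rely on protocol matching as described, while a brownfield request would instead reference an admin-created Bucket via bucketInstanceName.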
A
So what is this Bucket instance? For brownfield it's created manually, potentially, and for greenfield it's created by COSI. It's cluster-wide, and this is what that API looks like. There are a couple of things that are important here. One is that these fields are mostly populated by the bucket class and the response from the driver.
A
That's where we get this information. Now, if it's brownfield, the admin had to come up with this; they have to put most of this in themselves. They don't have to put in labels or the finalizer, but the rest of this information, yes.
A
So a couple of points here: we have a release policy, also defined in a bucket class, which you'll see next. It's the standard stuff, you know: retain the bucket or delete the bucket. Delete is important - when would Kubernetes ever delete a bucket in brownfield? The answer is never. It doesn't matter if you've set a release policy of delete: we don't delete brownfield bucket content, ever. We'll clean up the Kubernetes CRs around it, but we aren't going to touch the content.
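For context, here is a hedged Go sketch of what the cluster-scoped Bucket ("bucket instance") resource just described might look like, with spec fields filled in either by the admin (brownfield) or from the BucketClass and the driver's response (greenfield). Field names such as Provisioner, ReleasePolicy and AllowedNamespaces are assumptions for illustration only, not the KEP's spec.

```go
// Hypothetical sketch of the cluster-scoped Bucket ("bucket instance")
// resource; field names are illustrative, not the KEP's final API.
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

type ReleasePolicy string

const (
	// Retain never touches bucket content (the conservative choice
	// discussed above).
	ReleasePolicyRetain ReleasePolicy = "Retain"
	// Delete removes backing bucket content, but - per the discussion -
	// only for greenfield buckets once all accessors are gone.
	ReleasePolicyDelete ReleasePolicy = "Delete"
)

// Bucket is cluster-scoped: created by an admin for brownfield, or by
// COSI for greenfield.
type Bucket struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              BucketSpec `json:"spec"`
}

type BucketSpec struct {
	// Provisioner identifies the driver responsible for this bucket.
	Provisioner string `json:"provisioner,omitempty"`
	// Protocol of the backing bucket (s3, gcs, azureBlob, ...).
	Protocol string `json:"protocol"`
	// ReleasePolicy is copied from the BucketClass or set by the admin.
	ReleasePolicy ReleasePolicy `json:"releasePolicy"`
	// AllowedNamespaces lists namespaces whose BucketRequests may bind here.
	AllowedNamespaces []string `json:"allowedNamespaces,omitempty"`
}
```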
B
Wait - why not? In the case where I have a manually constructed bucket and I put a release policy of delete, why would we not delete it?
A
Well, I guess out of fear. You know, there's a very strong sentiment, with code that can have bugs in it, to never delete the user's data. Now, there is one time - so Saad, it's a fair question - the one time we would delete it is a greenfield bucket with a release policy of delete, which is not the default, and no more accessors to that bucket, because, unlike PV/PVC, we have a one-to-many relationship.
A
When all the BRs are done, are closed, deleted, that bucket will be deleted. That goes for green or brown, but for green we said: if the release policy is delete, we'll also delete the content. At that point we'll call the driver with a delete gRPC call and let that driver actually clean up everything.
C
Will there be a mechanism by which code can be added that executes before the delete?
D
I think that's a good point, but as of right now we don't have it. However...
A
Looking at the Bucket instance here - oh, Sid, did we say there's an annotation? Yeah, yeah. I don't...
B
Untyped, yeah. Well, I'd go the other direction - I'd say why even try to differentiate?
B
Right, so I think the way I look at it is: by default, the release policy should be retain, right - if somebody doesn't put it in, or for the use case that you mentioned, by default it'll be retain and nothing happens to it if you delete. But if I am, for example, you know, copying over a bucket, and that bucket was dynamically provisioned...
A
Yeah, I mean, delete's a tough one, Saad. You know, there's also this green/brown use case that Andrew first brought up, where you provisioned a new bucket - a BucketRequest provisions a new bucket, so it's greenfield - but you want to share it, so now the other accessors, the other BucketRequests, become brownfield requests to this new bucket, and they're sharing it.
A
In that case, it seems like that bucket's whole life has been under the control of Kubernetes, and if the release policy was delete in that situation and all the accessors are gone, it seems like we could delete it. But to me, Saad, if you have some existing bucket out on S3 somewhere and you've created an abstract Bucket instance to represent that bucket, and you put the release policy as delete in your Bucket instance - that bucket could have been around for years. I mean, do you really want Kubernetes to delete it?
B
I may, right. By default I would say let's not do that, right - that's why the release policy default should not be that. But if I want to allow that behavior - because, honestly, that's what the API here says, right: you have a release policy - if I set it to delete and then it doesn't actually delete in some cases, that's really odd, if the API allows you to set it statically.
E
Can I set a delete - whatever the retain/delete policy is called, I forget the name - can I have that automatically deleted on a statically provisioned PV?
J
Sorry - I don't think that was answered by anyone - so yeah, I'm wondering what's happening in the following case. In one namespace you create a greenfield bucket, right, and somehow you get the credentials, get the access connection, and you go and create it as brownfield in another namespace. And then, on the greenfield side, you delete it, right, and if it goes and deletes the contents, then what happens in the other namespace where they point to it?
C
And then that's probably a "don't do that", but you could get into a...
D
You could, yeah. You're saying, even if it's a brownfield, if the release policy is set to delete, we should be able to delete it. Yeah.
A
I mean, I think Saad's saying to honor the release policy. That's what he's saying, because there can be times when that is actually what the admin wants done - so why aren't you doing it? And I think it's just a very conservative...
C
Yeah - if the delete policy has to be explicitly set for brownfield buckets, then setting that release policy to delete is an explicit indication that you want that behavior.
A
Yep, I can't argue that. It's just, you know - and Kubernetes basically doesn't try to keep someone from shooting themselves in the foot. So, you know, okay. Is it okay to move on from that? I think there are good points raised there. I like the pre/post hook ideas, and I'm okay with us honoring the release policy.
D
Yeah, we don't need any major design changes in order to do that. This is a decision, you know - it's more of a use-case decision; whatever we agree on can be satisfied.
K
...added, you know, by the administrator, then deleting them won't cause anything to happen, because Kubernetes didn't create them - Kubernetes might have no idea. There might not be anything on the other end of the delete RPC to call, for a PV at least, correct? I don't know if we're going to have a corresponding case here, where I created a Bucket that points to something and there's literally no driver that knows how to create or delete it - it only knows how to do attach.
A
Well, Andrew has proposed a use case, which seems to have been lost in this KEP, Andrew - just so you know - of a static, driverless setup. In other words, to sort of jump-start this whole idea of using Kubernetes for buckets, you don't have a driver at all.
A
Not the provisioning - you could have that. And Andrew, over various iterations, when I was going through this pretty carefully the last few days: although "driverless" is defined in our vocabulary, and there was one reference to static provisioning, there are no examples and no workflows that actually support it.
C
When I read through the KEP, I got the impression that driverless was just implicit, in the sense that, if I am an application that's aware of this, I can use the Kubernetes APIs to look at these objects that are accessible to me, that define the buckets, and then I can just use that to go, for example, and directly talk to AWS or Azure Blob, etc. Yeah.
A
And David, that's absolutely right, and one part of this whole lifecycle was that we stripped the KEP down to bare bones, saying all we really need is a foundational API and you can build anything on it, and the automation that's described here isn't even needed - I mean, it's nice, but it's not a requirement. If you have a good set of APIs, you can do what you want with them. But we have come back to re-adding automation to this KEP.
C
I don't disagree with that. I just wanted to make the point that, as an outsider coming in and reading the specification, it seemed at least clear to me how an application could use this without requiring a driver in their pod. Okay, thanks, I'm just pointing that out.
B
Sorry, quick question: is somebody taking notes? There's a lot of good feedback here; I want to make sure it's captured somewhere.
A
Yeah - Aaron. So we have anonymous access modes here, highlighted, as described. What I want to show you is - the next resource is the bucket class, but basically in brownfield you don't have a bucket class, so the admin would say which namespaces can use this bucket. It's really that simple. For the green/brown use case, an admin would create a bucket class, and in the bucket class they can say which namespaces can use this bucket.
A
Here's our proposal of phases. And for you guys that have done this stuff before: we are trying to find the balance between not making it look like we're a state machine, or have all these states - because we know that you don't go from A to B to C in Kubernetes, and we don't want logic predicated on what phase we're in - but we believe, from a debugging, logging, trying-to-figure-out-where-it-got-stuck point of view, that these phases are reasonable for the lifecycle of a bucket. So this is our proposal on phases.
F
I think if you look at the API conventions, it says phases - this pattern is deprecated; you should not be using this anymore.
A
Oh - add that in the KEP. I know, I should have seen this coming. Okay, good to know: we shouldn't use something that's actively being deprecated right now.
A
Well, true. And - oh, we still don't have the application manifest in here, darn it. That was something on our to-do list, because the application spec, the application manifest, will just define a CSI volume.
M
Hey, I was wondering - I didn't notice if that includes a status for the bucket and also for the bucket access?
M
Is that - like, you mean the phase, right? So the phase, or conditions - but in any case, are these represented in the KEP, so that there are states, different statuses for bucket access and different statuses for buckets? There's a different status for each one, right? Oh yeah.
A
So this is the Bucket instance, so it's on the provisioning-side API, not the access side, so it's not directly representing access states - although you can't bind to a bucket until the access instance, the BucketAccess, has been instantiated, which we haven't covered yet.
D
So, guys, we do have separate phases for buckets and for the access APIs, and they're represented in the KEP here - it's just further down below. Yeah, we'll be talking about it.
A
And we're already more than halfway through the meeting, shoot. Okay, I've got to just throw this out because it's a change, and it may not be acceptable. Labels used to just be a label - just a decoration kind of thing that gives some information to an admin, hey, like, this was created by COSI - but now labels actually have a purpose.
A
The label is our linked list, if you will, or our connection to the binding. So we have a label here - this is the proposal - where the label does the mapping: it maps the BucketRequest or the BucketAccess. Technically, the key of the label is the bucket access instance name, and the value of the label is the BucketRequest name - I guess it would be namespace plus name - and the set of those shows you all your bindings.
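To make that mapping concrete, here is a small, purely illustrative Go snippet of how a controller might build such a binding label and a selector to list the bindings. The key/value format (BucketAccess instance name as the key, "namespace.name" of the BucketRequest as the value) is an assumption based on this discussion, not the KEP's specified encoding.

```go
// Illustrative only: recording and looking up the binding described above
// with a label whose key is the BucketAccess instance name and whose value
// identifies the BucketRequest. The exact format is an assumption.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
)

// bindingLabel returns the hypothetical key/value pair for one binding.
func bindingLabel(bucketAccessName, brNamespace, brName string) (string, string) {
	// Label values may not contain "/", so a "." separator is assumed here.
	return bucketAccessName, fmt.Sprintf("%s.%s", brNamespace, brName)
}

func main() {
	key, value := bindingLabel("bucketaccess-123", "dev-team", "my-bucket-request")

	// A selector like this could then be used with a List call to find
	// every Bucket carrying that binding.
	sel := labels.SelectorFromSet(labels.Set{key: value})
	fmt.Println("binding selector:", sel.String())
}
```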
C
Is there precedence for using labels this way, or is that used as a free-form field elsewhere in Kubernetes?
A
Yeah, that is correct. So, okay - what would you do, if you can't use owner references, which is what we wanted to use? What would you do to represent this mapping? Why...
A
Yeah, okay. A bucket class has the role of a storage class - I'm just scrolling down here. It's admin-created and cluster-scoped; there's nothing particularly interesting here, it's stuff you've seen before. An object store can support more than one protocol - Azure does that, GCS does that - so we have a list of supported protocols, and the additional namespaces beyond the namespace of the BucketRequest.
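Here is a hedged Go sketch of what the BucketClass just described could look like, with a list of supported protocols and the additional namespaces allowed to share buckets created from it. As before, the field names are assumptions for illustration, not the KEP's spec.

```go
// Hypothetical sketch of the admin-created, cluster-scoped BucketClass;
// field names are illustrative, not the KEP's final API.
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

type BucketClass struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// Provisioner names the driver that services this class.
	Provisioner string `json:"provisioner"`
	// SupportedProtocols lists every protocol the object store offers
	// (Azure and GCS, for example, each expose more than one).
	SupportedProtocols []string `json:"supportedProtocols"`
	// AllowedNamespaces are namespaces, beyond the requester's own,
	// permitted to use buckets of this class.
	AllowedNamespaces []string `json:"allowedNamespaces,omitempty"`
	// ReleasePolicy ("Retain" or "Delete") is the default applied to
	// greenfield Buckets created from this class.
	ReleasePolicy string `json:"releasePolicy,omitempty"`
	// IsDefaultBucketClass marks this class as the cluster default.
	IsDefaultBucketClass bool `json:"isDefaultBucketClass,omitempty"`
	// Parameters are passed opaquely to the driver.
	Parameters map[string]string `json:"parameters,omitempty"`
}
```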
A
Yeah, you know, I didn't see that as a current comment, but I remember discussing it, and the answer is no. Since we don't have that many protocols, there's no reason you need a list of supported protocols that I can think of, and in fact there's a reason not to do it, in that it makes default bucket classes, which we support, a little...
C
...it avoids resolution issues with having multiple protocols listed, and prevents people from having to mangle their classes. Let's say you have a provider that has two protocols: they'd have to have a class for protocol A and a class for protocol B if you couldn't have a list.
E
I just want to point out that somebody said something which is a little concerning, which is that you have to have a class. Oh...
F
The other question about the protocols: do we have to have them as first-class fields? Could they not be part of the parameters or something? That's what I was just saying - can the protocols be part of the parameters? Do they have to be first-class? Maybe there were some questions about this in the past that I didn't catch. Maybe I'll look.
B
I can expand a little bit on that too. Basically, if you think about it, if you hide it behind parameters, then effectively we need to have a standard way across all protocols to figure out what bits of information we need to expose up into the container for the container to access that bucket, and there's no generic common subset of fields that must be exposed to a container.
A
Okay, so that is the provisioning side, the provisioning API. I don't think we're going to get to go through the workflows, but please look at them near the bottom of the KEP, just above the gRPC spec. That shows you the access APIs and provisioning APIs in parallel and what a typical workflow would look like; we just aren't going to get to cover that in this review meeting. I was wondering, Sid, if you wouldn't mind going through the access APIs.
D
Oh yeah, for sure. Can I share my screen? That might be easier for me.
A
And just as a timing alarm, we have about 12 minutes left.
D
Yeah, I just joined from my computer; I'm going to hang up from this one.
D
Okay, so Jeff just went over how the buckets are provisioned, and once they're provisioned, we model the access to them in such a way that each workload gets its own set of credentials to access the bucket and consume the objects in it. Now, having a separate set of credentials rather than a shared set of credentials for each bucket...
D
Let me explain that a little more clearly. When we create a bucket, we don't create one credential object for that bucket and ask all the workloads to share it. Rather, we generate a new set of credentials for every workload that requests to consume that bucket, and the way we model that is with the bucket access objects.
D
So there are three objects here, just like in the case of buckets: you've got a BucketAccessRequest, a BucketAccessClass, and a BucketAccess object.
D
It has fields that request the credentials to be put into a certain secret name or a service account name - this secret will contain the credentials - and it points to the bucket that it's interested in. The list of credentials that it wants is then specified in the BucketAccessClass; I'm going to quickly jump over there.
D
The BucketAccessClass defines what kind of actions we allow. We model these actions here to be like the IAM policies. For instance, star allows you to read, write and delete - update or whatever; there are no updates here, but read, write and delete. You could do a list-only access, or GetObject, which only allows you to get the objects but not list them, or PutObject, which is write-only, and we can do that by default.
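Finally, a hedged Go sketch of the access-side resources Sid walked through: a namespaced BucketAccessRequest that names the target bucket and where to deliver credentials, and a BucketAccessClass that lists the IAM-style actions to grant. All names here are assumptions based on this discussion, not the authoritative spec.

```go
// Hypothetical sketch of the access-side API discussed above; names are
// illustrative, not the KEP's final definitions.
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// BucketAccessRequest is namespaced; each workload gets its own, so each
// workload ends up with its own generated credentials.
type BucketAccessRequest struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              BucketAccessRequestSpec `json:"spec"`
}

type BucketAccessRequestSpec struct {
	// BucketRequestName identifies the bucket this workload wants to consume.
	BucketRequestName string `json:"bucketRequestName"`
	// BucketAccessClassName selects the set of allowed actions below.
	BucketAccessClassName string `json:"bucketAccessClassName"`
	// CredentialsSecretName is the Secret the generated credentials are
	// written to; ServiceAccountName is the cloud-identity alternative.
	CredentialsSecretName string `json:"credentialsSecretName,omitempty"`
	ServiceAccountName    string `json:"serviceAccountName,omitempty"`
}

// BucketAccessClass is cluster-scoped and admin-created.
type BucketAccessClass struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// PolicyActions are IAM-style actions granted to the credentials,
	// e.g. "*" (read/write/delete), "ListBucket", "GetObject", "PutObject".
	PolicyActions []string `json:"policyActions"`
}
```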
D
Yeah, that's pretty much it at a high level. This is what we do; all of these fields here support that workflow. I can get into any of the fields that we have, but given the amount of time we have left, I'm keeping it kind of short.
D
Okay, Jeff, did you want to continue, or...
D
Do we need this? That is to say...
D
So when we looked at the policies, the access policies for different object stores, they vary quite a bit between the S3 style and the Azure style of doing things. So if we define a policy action for Azure or for S3, you don't get the same level of granular control with Azure, at least not as far as we've explored. That's the reason we had that.
C
So the link between the BucketAccessRequest and the bucket itself - is that through the name, or how are they linked up?
D
So it's not typed. How would... is that what you mean?
C
So again, where would it be validated currently - while creating the bucket access object? So...
D
There we do - it's just to make sure. It's more like input validation at this point, having the supported protocols. It's not...
B
I guess the problem is Kubernetes doesn't allow multi-object validation to occur, but thinking through that, I don't know if we want to get into the business of encoding all this validation logic into the Kubernetes API.
B
You know, we're probably going to drift from what the storage system actually wants, so maybe it's better to let the storage system do it, even though you don't get the immediate gratification of a failed object create.
E
Here it's "supported protocols" - there's a plural - and that isn't good enough, because the actual protocol of the bucket is what matters, right? Yep. And so you have to read that through anyway, right, in order to be able to validate that this path works. So then there's the question of where that gets validated. I guess I didn't quite follow the discussion about not being able to do multi-object validation - why are there multiple objects?
B
In Kubernetes API validation you can't; with an admission controller you may be able to, but that's additional work.
A
Yeah - and a fair point on that, Andrew. Obviously it's a better user experience, if there's going to be an error, to catch it sooner rather than later, and webhooks could facilitate that. We don't have any webhooks as part of this KEP. Andrew, would it be okay for that to be a later-than-MVP feature?
E
I completely think it's okay. I guess my concern is that it seems like the argument for putting supported protocols here is kind of belt-and-suspenders, except that I don't think you're really getting the belt - you're not actually blocking some of the mismatches that could happen.
A
That's true. We have two minutes.
A
And we still have all 15 participants - no one's dropped yet, with a minute left - so thank you. But let's take a... Saad, can we take a reading, a pulse? There have been some good points brought up. They've been captured in the video; Aaron is going to look at the video and create notes, and Sid has some notes from the beginning, so we will aggregate those together. Where do we stand right now, in your opinion, on this KEP?
B
I think this looks good. Let's maybe address offline the feedback that we think we have consensus on, and then let's do another round next week, and maybe we can deep-dive into the COSI spec itself.
A
Okay, because we have that COSI Slack channel now.
B
Okay, that could be used as a place to discuss offline if you want, yeah.
A
Okay, great, let's do it on Slack initially. The PR is cluttered right now, with so many comments - I intentionally didn't resolve the comments because I wanted to preserve them, but maybe I need to clean that up too.
A
All right, everyone - one minute late. Thank you. Please look at the KEP and add your comments there. I do try to respond to every question or piece of feedback on the KEP, and we will shoot for finishing this up a week from now, with the intention of getting approval and merging. A quick question: what's the name of the Slack channel?