Description
Kubernetes Storage Special-Interest-Group (SIG) Object Bucket Review Meeting - 08 October 2020
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A
All right, so I want to start off by quickly recapping where we left off in the last meeting on Monday. We talked about our progress in terms of development. Last week... sorry, on Monday, we had all the tasks in green here completed, so we had a basic setup of the three components, each doing their own tasks.
A
Now the next step is actually setting up a deployment spec and writing the Kubernetes YAML. In parallel, we are also developing the sample provisioner, which will be used for testing and to show the demo, and for any other purpose where you need a sample provisioner; say, even for new provisioners that others are implementing, this will serve as the example.
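For reference, the kind of deployment spec being discussed might look roughly like this; a minimal sketch, where the image names, labels, and namespace are illustrative assumptions rather than the project's actual manifests:

```yaml
# Minimal sketch of the deployment being discussed: the sample
# provisioner running next to the COSI provisioner sidecar in one pod.
# Image names, labels, and namespace are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-provisioner
  namespace: objectstorage-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-provisioner
  template:
    metadata:
      labels:
        app: sample-provisioner
    spec:
      serviceAccountName: objectstorage-provisioner-sa
      containers:
      - name: provisioner-sidecar   # watches the bucket objects, calls the driver over gRPC
        image: example.com/cosi-provisioner-sidecar:latest
      - name: sample-provisioner    # implements ProvisionerGetInfo, ProvisionerCreateBucket, ...
        image: example.com/cosi-sample-provisioner:latest
```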
A
As of today, like I said, the work has begun on the sample provisioner implementation. We have just started out with it. We have established the CLI for it and implemented the first gRPC call for provisioners, which is ProvisionerGetInfo, and the next step will be implementing ProvisionerCreateBucket.
A
We haven't gotten there yet, and once that is done, we can take all the deployment specs and actually deploy and test them out as a single unit, that is, end to end.
A
Before we move forward with further development, it is important that we set up a test infra, given that we have enough functionality now to test out one end-to-end scenario, which is: create a bucket and "mount" (in double quotes) the bucket into a pod. So I want to first find out what kind of setup we will need, or what kind of work we need to do, in order to complete this e2e test infra setup.
A
So, for instance, we're going to need some sort of testing framework to do this. In Kubernetes itself I believe Ginkgo is being used. Since we haven't started out with it at all, I'm open to suggestions.
A
Okay, so yeah, we can do some research and figure out if Ginkgo is still the best way to go forward with writing the framework. Ideally, we're going to need a separate repository which will utilize one of the test infra frameworks, and ideally it'll run on every pull request. So once we establish the testing framework, we need a mechanism to auto-deploy.
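If Prow, the CI system the Kubernetes project uses, ends up being the mechanism, the per-PR hook could be a presubmit job along these lines; a sketch only, with the repo path, job name, image, and make target all assumed for illustration:

```yaml
# Hypothetical Prow presubmit sketch: run the e2e suite on every PR.
# Repo path, job name, image, and make target are assumptions.
presubmits:
  kubernetes-sigs/container-object-storage-interface-spec:
  - name: pull-cosi-e2e
    always_run: true   # gate every pull request on this job
    decorate: true
    spec:
      containers:
      - image: golang:1.15
        command:
        - make
        args:
        - test-e2e
```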
A
In the sense that we need a mechanism to test out a change where all three components are deployed and tested as one. Now, at first start, I think every test... let's say we do it on every PR too, yeah. This is a question I have: should we run these tests on every PR to every one of the components?
B
Yeah, it should, right? Once you completely set it up, it should run for every PR, so if it fails, then you should not get merged, right? That's the right goal. And then you can take a look at the existing testing framework for the e2e tests; we have that already for the PVs and PVCs. Yeah, just start with that.
B
The only thing I'm just thinking is, right now the CSI tests are actually in-tree, but right now I think you can't be adding these tests in-tree yet, because right now this is still provisional, right? I'm wondering if we need to have another repo for the tests, because this would be a test for all the repos we have right now. There are like three repos for COSI, right?
B
I wonder if we need another repo just for all the e2e tests, because right now we can't be adding them in-tree yet.
A
That's right. I don't know if you can put it in the spec repo now; the spec repo is supposed to...
A
So we have, we've created five repos... or four so far. Four.
A
So the thing is, the purpose of the spec repo is simply to just have the specification in it. I'm not sure if e2e tests belong there, yeah.
B
Normally it does not. If you look at the Container Storage Interface, for the end-to-end tests you need to test the Kubernetes controller part, right, so that usually does not belong there.
A
And actually, that might be better now that you mention it. I think if we have, I'm not sure actually, if we have a COSI provisioner for, say, S3, that's where the e2e tests should be, around...
A
Right, that's true, yeah. We don't have to confuse it right now. My thought process was, in order to do the e2e tests we are going to need a driver regardless, yeah, so the tests, the e2e tests themselves, can reside in the driver repo.
A
That's just a thought. We don't have to go through it right now, but we can just keep it in mind.
B
Right, so ideally you should run those tests across all of them, but I think those will probably be different tests, yeah. I think you probably need to run different tests, but if you want to run end-to-end tests, then that should be the highest-level component. Is that the provisioner?
A
That would be... I mean, you can call it the controller, in a sense.
B
The COSI controller, this one, the Container Object Storage Interface controller, that's the one, okay. Then maybe you start there, then, yeah, you can put your tests there. But if you look at how the CSI-side e2e tests are set up, we actually have those run in each sidecar repo. So are the e2e tests submitted in-tree, or is there an e2e test repo under kubernetes, and there is a... yeah?
B
There are e2e tests, so yeah, there are tests that are running in-tree, you know, whenever people submit, yeah.
A
Is there something for CSI here?
B
Yeah, so when this becomes alpha, that's how, because that's actually a CRD, right. So when this becomes alpha, then you can add those tests in-tree, maybe to this location or so; it should be under e2e anyway, yeah. But for now we just need to have those in a different place, yeah.
A
Yeah, that makes sense. I don't know if you should add it to kubernetes directly. Okay, no.
A
So this is for in-tree providers, CSI providers, no?
B
But under the folder, if you actually go to the storage tests... maybe it's better just to go there. You can see the tests, the test suites, yeah. So you see, there are tests for snapshots, right? So there are many different tests here.
B
And then the drivers: right now there is the hostpath driver, and there's a GCE PD CSI driver that is also run.
A
Yeah, I think that's what I was talking about when I said sample provisioner, like the equivalent of a CSI mock driver. Anything that we do in-tree, I think we can add those in-tree. I mean, anything that's a part of the COSI project itself and not a part of some vendor driver, I think we can add tests here for that. Mm-hmm, yeah. All right, moving forward.
A
So the third step is actually to go ahead and write the tests. So now that we know where to put them, we'll have to plan out the development of actually writing the tests. I think to begin with we'll just start with the simplest use case, which will be to create a bucket... actually, which will be to register a new provisioner, and we'll start iterating based on that in terms of test infra.
B
Yeah, so yeah, because I'm not sure if this one should be there yet, because this is provisional. So right now, let me check on that, because I think before this is alpha it probably should not be there, because once it's there, then actually it should be added under that kubernetes/kubernetes e2e folder, right? So let me check on that; I'm not quite sure.
A
Okay, okay, that makes sense. I think, if we have that, we can accelerate development quite well even before the KEP gets merged.
A
So, if we have the e2e tests running...
A
...you know, we can have more developers writing code, and we can be more confident in what's being contributed, and we can move faster. That's the main goal. I'll follow up with you separately about how to set up this testing; in case, you know, we can run it before the KEP is merged, that would be good, or before alpha, I mean.
A
I see. Actually, that might be good enough, to be honest, to begin with, at least just as a part of the CI process: just take the latest Kubernetes binary or something, and the head, the latest commit from each of these projects, and put them together and run the e2e tests. I think that's a good start, if you ask me. Is that possible, you're saying?
B
I need to check on exactly how to set that up, because I don't think it's possible to get everything dynamically; because, for example, for the in-tree tests we actually use images that are set. It's not like you always get that from master; it has to be some image.
B
Yeah, so I need to check my script.
A
So that's the next step, now that we have some clarity on the e2e tests. One thing I want to talk about is, this week many of the comments that we've been getting on the KEP have been around access, and I wanted to go over how we designed the bucket access workflow once with everyone here, to clarify what we originally designed and, you know, to keep everyone on the same page about this.
A
Now, going back to the BucketAccessClass: the BucketAccessClass has a policy actions ConfigMap, which is, it points to a ConfigMap which contains the access policy in a format that the provisioner understands. So this will be in AWS S3 IAM format if it is an S3 provisioner; for GCE it will have the equivalent of, you know, GCE's IAM; and for Azure, its version of the same thing.
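As a rough illustration of that shape, a policy ConfigMap and a BucketAccessClass pointing at it might look like this; the API group matches the COSI draft, but every field name here should be read as an assumption, since the KEP was still under review:

```yaml
# Illustrative sketch only: an access policy held in a ConfigMap, in the
# provisioner's native format (AWS IAM here), referenced by a
# BucketAccessClass. Field names follow the draft KEP and may differ
# from what was finally merged.
apiVersion: v1
kind: ConfigMap
metadata:
  name: s3-read-write-policy
  namespace: objectstorage-system
data:
  policy: |
    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "*"
      }]
    }
---
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketAccessClass
metadata:
  name: s3-read-write
provisioner: s3.objectstorage.k8s.io
policyActionsConfigMap:
  name: s3-read-write-policy
  namespace: objectstorage-system
```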
A
The COSI central controller will take the BucketAccessRequest and the BucketAccessClass and create the BucketAccess object. This BucketAccess object has fields from both, and it has an access secret name. It first copies the data from the policy actions ConfigMap; so here it's actually a copy of the entire data, rather than just a pointer, because it is possible that the policy actions ConfigMap changes: someone deletes it and creates a new one by that name.
A
Yes, the BucketAccessClass and the BucketAccess are immutable. Good point. So BucketAccess is immutable in the sense that we do allow the sidecar controller to update the minted secret name, because after the secret is minted, it has to be filled in inside of this. So the way we are doing it is, we have an admission controller.
A
So the name of the secret is going to be an auto-generated name, something like cosi-dash... or, well, let's see: bucket access, ba-<uid>, which is similar to how PVCs, sorry, PVs, which are auto-generated, are named. So that will be filled in here, and whenever a pod requests a bucket, it would request it by specifying the BucketAccessRequest. We'll be able to follow through from there to the BucketAccess, and we will be able to see that the minted secret name is set to whatever, and we'll be able to download that data and then put it into the pod from that minted secret.
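Putting those pieces together, the resulting BucketAccess and a pod consuming the minted secret might look roughly like this; names, fields, and the UID are placeholders:

```yaml
# Sketch of the cluster-scoped BucketAccess the central controller
# creates, and a pod consuming the minted secret. Names and field
# names are illustrative placeholders.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketAccess
metadata:
  name: ba-7c9e6679-7425-40de-944b-e07fc1f90ae7   # ba-<uid>, like auto-generated PV names
spec:
  policyActions: |    # full copy of the ConfigMap data, not a pointer
    { "Version": "2012-10-17", "Statement": [ ... ] }
status:
  mintedSecretName: ba-7c9e6679-7425-40de-944b-e07fc1f90ae7
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: example.com/app:latest
    volumeMounts:
    - name: bucket-creds
      mountPath: /var/run/bucket
  volumes:
  - name: bucket-creds
    secret:
      secretName: ba-7c9e6679-7425-40de-944b-e07fc1f90ae7   # the minted secret
```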
A
Now, there is another way to provision access, which is through service accounts. So in certain cloud providers it is possible to associate a workload with an identity in the cloud. For instance, on Amazon you can associate a service account, a Kubernetes service account, with an IAM user.
A
Now any pod that's using this service account will get authenticated as that IAM user when calling the AWS APIs. And what we do in that case, what COSI does in that case, is, if we want service-account-based authentication, COSI takes the service account name as a part of the BucketAccessRequest, and in the workflow where it calls the provisioner to grant access, to create credentials...
A
...it actually passes through the service account and adds access for that specific bucket. Based on this policy actions ConfigMap, it grants access for that service account, so any user or any workload that's using this service account will have access granted for this bucket, and the access will be based on that policy.
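On the request side, the only user-visible difference might then be a single field; a sketch with assumed field names:

```yaml
# Sketch of service-account-based access: the BucketAccessRequest
# names a ServiceAccount instead of asking for minted credentials.
# Field names are illustrative assumptions from the draft.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketAccessRequest
metadata:
  name: my-bar
  namespace: my-app
spec:
  bucketRequestName: my-bucket-request
  bucketAccessClassName: s3-read-write
  serviceAccountName: my-app-sa   # optional; triggers SA-based grants instead of minted keys
```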
A
So yeah, I don't know if "role" is the right word, but yes, overall, yes.
D
Right, because, well, at least in the S3 implementation, what it does is it projects a JSON web token into the pod, and the SDK then consumes that token and then generates an STS request using AssumeRoleWithWebIdentity, and, let's see, and then it's tested whether or not that particular service account can effectively assume a particular role. So then there's a role in the bucket policy: you can define roles as having the ability to access buckets or bucket prefixes.
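For context, that projection is a standard Kubernetes mechanism (it is roughly how EKS IAM Roles for Service Accounts is wired up); a sketch, with the role ARN as a placeholder:

```yaml
# Sketch of the JWT projection described above: Kubernetes projects a
# service account token with an STS audience into the pod, and the
# AWS SDK picks it up via the standard env vars to call
# AssumeRoleWithWebIdentity. The role ARN is a placeholder.
apiVersion: v1
kind: Pod
metadata:
  name: s3-client
spec:
  serviceAccountName: my-app-sa
  containers:
  - name: app
    image: example.com/app:latest
    env:
    - name: AWS_ROLE_ARN
      value: arn:aws:iam::111122223333:role/my-bucket-role
    - name: AWS_WEB_IDENTITY_TOKEN_FILE
      value: /var/run/secrets/tokens/token
    volumeMounts:
    - name: sts-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: sts-token
    projected:
      sources:
      - serviceAccountToken:
          audience: sts.amazonaws.com
          expirationSeconds: 3600
          path: token
```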
D
So I guess that ConfigMap data would just have some linkage that the controller, the S3 or S3-compatible controller, would then take those bits and set up that bucket policy on behalf of the creator. Yes, that is exactly the way, yeah. Okay.
F
So I'm sorry, could you please... I heard you mention you're going to mint something in the BucketAccessRequest, or we are going to require something in the...
F
I think I'm talking about the latter one, the service accounts.
A
The service account name and namespace need to be provided.
A
Yeah, so yeah, those are all good questions. So to first start with: this model of associating a service account with a particular identity, an external identity, is common across all the cloud providers. GCE provides something like that, and Azure provides something like that. I believe DigitalOcean also does; yeah, DigitalOcean also has an object storage service, which is based on Ceph anyway. So yeah, that is a common setup.
A
Now, talking about portability: when porting between two providers, two infrastructure providers, that support this model, the service-account-based authentication model, the user would not have to change the BucketAccessRequest. The BucketAccessClass, however, will have to reflect the change: the ConfigMap will have to have the new format for whatever the new provider is. And yeah, so the BucketAccessRequest itself can remain as-is; the admin, who would have to create the BucketAccessClass, would have to make changes according to whatever the provider requires.
D
So, for example, in Google, in Google Cloud Storage, you can write bucket policies, like you can do web-identity-based access control, but it's usually at a coarse, per-bucket level, right? So, you know, whatever the Google implementation of this is, the policy ConfigMap would not have something that, for example, I guess, the Amazon one could support; in the policy actions ConfigMap they could have, like, specific prefixes or something, right? Is that how that would work?
D
If it was a controller-specific property, it would be a special parameter in the ConfigMap; like, if you wanted to do a limit by prefix, that would be a parameter or something.
A
Yeah, so yeah, that would be controller-specific, yes. So, say, in S3, you know, the protocol signature, or prefix, looks like s3://whatever, and we have its equivalent, whatever that is, in GCS.
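A controller-specific knob like that could surface as just another key in the policy ConfigMap; a hypothetical example, with nothing here guaranteed to be portable across provisioners:

```yaml
# Hypothetical: a prefix restriction expressed as a controller-specific
# parameter alongside the policy. Whether the S3 or GCS controller
# honors "prefix" is up to that controller; this is not portable.
apiVersion: v1
kind: ConfigMap
metadata:
  name: s3-prefixed-policy
  namespace: objectstorage-system
data:
  prefix: s3://my-bucket/team-a/   # controller-specific parameter
  policy: |
    { "Version": "2012-10-17", "Statement": [ ... ] }
```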
A
I think it's fair, it's good, to discuss it here as well. How it is in the KEP: it is under... I'll quickly show.
D
I mean, I guess the fact that the name and the namespace are not part of the policy ConfigMap, and are instead kind of an abstraction that's supposed to cut across all the different provisioners...
D
You know, this kind of gets to the point where the whole idea of mapping a service account and namespace to the ability to access is not really a controller-specific thing. I mean, I guess not every controller would necessarily be able to provide those capabilities, but that's true in CSI too, right?
A
Yeah, yeah. I think, in terms of having this ability, it really depends on the infrastructure providing a service-account-based, you know, mechanism like STS to associate a service account with a workload.
C
Can I ask how large we expect these ConfigMaps to be in practice, and if the spec puts any size limit on them?
A
So these ConfigMaps are simply just IAM policy rules, like for S3.
A
That, with the hierarchy or so. So this will just have the policy; now it will be up to the controller to, you know, choose to set the policy for the bucket specifically.
A
Right, right, so this will be either escaped JSON or base64-encoded JSON. Okay, so.
C
It'll be encoded and just shoved in as a blob, essentially. Yeah, okay. I only bring this up because in CSI they do have explicit limits on string lengths and map sizes, to avoid people trying to shove, like, a megabyte of text into a string somewhere and then expecting the other side to just be able to store something of that size.
C
So this is getting to the size where you might want to put an upper bound on it, so that we don't have to deal with, like, policies that are megabytes in size. What's the issue with megabyte policies?
C
Well, they end up getting stored in Kubernetes objects, in the Kubernetes API server, in the etcd database, and that's the risk.
A
Regardless, don't we... like, if someone wants to create megabyte-sized ConfigMaps, the same problem would exist, wouldn't it?
C
Potentially, yeah. I'm just saying CSI took the step of putting a limit on things to avoid objects getting huge. I think the concern there was that the drivers could return data back that could end up getting enormous, and they wanted to prevent that. Here it would have to be the administrator deciding to put an enormous amount of data in, but it's just something to think about.
A
Yeah, I think putting it on the administrator and expecting them to know the limit is fair, and I don't see these policies getting insanely large, to be honest. Like, a megabyte-sized JSON file will have probably ten thousand lines or more, probably even more, fifty thousand lines.
D
Yeah, there probably is, and the same is probably true for Google, but there are probably limits at the IAM, and whatever the Google equivalent is, API level around the size of those. So I mean, if we found what those were, we could still set an upper bound based on that, because it's not useful to be able to have unbounded ConfigMaps or whatever.
A
I think the only challenge would be, like, how well etcd can handle the scale, wouldn't it? Possibly, yeah. I think, yeah, let's think about it, and I'll do some research on whether there are any limits for all the cloud providers, and we'll go from there.
A
Yeah, that's correct. It is a lot more complicated than simply just rewriting. There are also differences between the different cloud providers, and trying to abstract it is probably going to limit what people can do rather than help us more.
A
Okay, okay, so that's basically it; that's all of it at a high level. Those are the two steps. Now, are there any other questions you have?
A
Okay, so, Jeff, did you have something to ask about the service-account-based provisioning?
E
The question I have, and I think I understand it, was for a bucket access...
E
The BucketAccessRequest, which is a user-created instance, has the service account as an optional field, and my concern was that there might be a security gap where the user does not fill in the service account name, but the admin has a ConfigMap representing some access policy and the driver has the ability to mint credentials, and I wanted to make sure we don't have a case where we can grant access to a bucket when we shouldn't.
A
And don't we take care of that with the allowed namespaces?
E
Although we've gone back and forth about whether a bucket request has to reference a bucket class in the case of brownfield; but if it does, then the bucket class has allowed namespaces, and that's a way for an admin to control it. If a brownfield request does not need to reference a bucket class, then we still don't have a way.
A
No, that's not true. So we do, actually, because the bucket request always needs to reference a bucket, and the bucket class is copied over to the bucket, so the allowed namespaces list... I mean, so if you...
E
It's not. The brownfield Bucket instance is created by the admin, and, I know we've talked about this, I'm not sure it's been reflected in the KEP drafts, but we've talked about a bucket class not being required for brownfield, and then what would be the point of the admin creating a bucket class when they're filling in all those fields in the Bucket instance anyway?
A
No, again: a bucket class is not required in the bucket request for static brownfield. However, the allowed namespaces need to be filled in by the admin, in the Bucket object itself. In a static brownfield provisioning case, the admin creates the Bucket object, and that Bucket object should have allowed namespaces filled in, so that COSI can allow it to be copied across all the allowed namespaces, yeah.
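So for static brownfield, the admin-created Bucket might look roughly like this; a sketch with assumed field names:

```yaml
# Sketch of an admin-created brownfield Bucket: allowedNamespaces is
# filled in directly on the Bucket, with no BucketClass involved.
# Field names and the bucket ID are illustrative assumptions.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: Bucket
metadata:
  name: existing-logs-bucket
spec:
  provisioner: s3.objectstorage.k8s.io
  existingBucketID: arn:aws:s3:::company-logs   # pre-existing bucket
  allowedNamespaces:
  - team-a
  - team-b
```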
E
And I agree with that; it was just confusing to me and a little bit indirect, but I think that works.
A
All right, so thank you, everyone. We can end the meeting now. On Monday I'll get back with that size-limit question, and please review the KEP; the review process is really helping. We have, you know, clarified a few things, we've learned a few things, so please keep that up, and yeah, I'll look forward to any reviews you have, and I'll see you all again on...