From YouTube: KEP Review: Object Bucket API (18June2020)
A
And I imagine this would be in the bucket class, but there are certain fields that go into a create call that we would give to you. As a driver, either through an SDK or through HTTP, just a RESTful interface, you would pass at least these fields. For S3, for instance; GCS has their own set that doesn't differ too wildly, and Azure is similar. In the case of S3, there's a distinction between ACLs and what is more of an identity-based permissions list.
A
They're not doing IAM there, right? Yeah. So this is a confusing distinction with AWS that I've had to reread the docs on a few times.
A
They have three separate subsystems for controlling access. The simplest one is an ACL; by their definition, that is simply anonymous access. So the bucket is either private (not publicly available), public-read, public-read/write, or authenticated-read. These are very general; they don't define identities. They just say: if you have network access to this bucket, this is the permission that you get. It requires no credential and no identity to access.
A
The second system, the one they are using now, is on the IAM side, where an IAM role is specifically given access to a specific storage instance under a set of permissions; that's the current, in-use model. The third system, which is deprecated, is housed within the storage side. So the IAM side is the identity management: it holds the roles and identifies the resources each role can access. The deprecated version is written onto the bucket metadata and defines the identities that can access that resource.
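The three S3 access-control subsystems described above can be sketched roughly as follows. The canned-ACL names are real S3 values; the functions and the two policy dictionaries are an illustrative model of the distinction, not AWS SDK code, and the bucket/role names are hypothetical:

```python
from enum import Enum

class CannedACL(Enum):
    # S3's canned ACLs: coarse, identity-free access levels
    # stamped onto the bucket itself.
    PRIVATE = "private"
    PUBLIC_READ = "public-read"
    PUBLIC_READ_WRITE = "public-read-write"
    AUTHENTICATED_READ = "authenticated-read"

def anonymous_can_read(acl: CannedACL) -> bool:
    # Anyone with network access can read a public bucket;
    # no credential or identity is involved.
    return acl in (CannedACL.PUBLIC_READ, CannedACL.PUBLIC_READ_WRITE)

def anonymous_can_write(acl: CannedACL) -> bool:
    return acl is CannedACL.PUBLIC_READ_WRITE

# Second (current) subsystem: an IAM policy attached to an identity,
# naming the resources that identity may touch.
iam_policy = {
    "role": "app-writer",
    "resources": ["arn:aws:s3:::my-bucket"],  # hypothetical bucket
    "actions": ["s3:GetObject", "s3:PutObject"],
}

# Third (deprecated) subsystem: a policy written onto the bucket's
# own metadata, naming the identities allowed in.
bucket_policy = {
    "bucket": "my-bucket",
    "allowed_identities": ["arn:aws:iam::123456789012:role/app-writer"],
}
```

The key asymmetry the speakers are drawing out: the IAM policy hangs off the identity and lists resources, while the (deprecated) bucket policy hangs off the resource and lists identities.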
A
Right, Azure has something similar. The only difference I've seen thus far is that AWS specifically is discouraging people from using that system of writing onto the resource metadata which identities can access it. So one question that's up in the air, more broadly: should policy creation on the storage side, when I'm creating my storage instance, be part of bucket creation? At the very least, should we allow users to set a canned ACL on that bucket, being this one here?
B
I think it should be an opaque parameter as part of the bucket class, just like we specify different attributes of the cloud within the storage class, like zone or IOPS: things that can change depending on what you're using, I guess. When you say first-class citizen, that implies required, right? When...
A
Yeah, so there are maybe a couple of different philosophies around this. The one that was driving me initially was: expose to the Kubernetes user, not necessarily through the user-facing API but at least through a bucket class, as many configurable things as that cloud platform offers. If I can specify these things at the point of creation for a bucket, then it seems like that's something we should also expose at the Kubernetes layer instead of stripping out some of the APIs. Now, on the other hand, another philosophy is: well, bare minimum...
A
We just create the bucket for them. The user has access to the bucket, and since we've made them the owner, because it's a new bucket, they have the rights to, you know, modify the ACLs after the fact, or object policies, etc. So why would we do that? Why would we make that, as Andrew said, a first-class field in the API?
C
I'm suggesting that, okay: if you decide you want these ACLs as a fundamental property of the bucket, and not as a property of the interaction between a particular subject and the bucket, then it would make sense for that to be something involved in the creation of the bucket. But all I'm suggesting is, if you're creating buckets for which...
C
...for which there is any kind of ACL other than private, then it strikes me that that is really something you should be doing out of band, and then just accessing that bucket. So, you know, then you provide separate IAM, and maybe... you know, if it's public-read, then maybe we need a path where you can...
C
...basically just rely on the existing ACLs that are on the bucket, and don't add any extra IAM at all. Right, so I could understand having to be able to support that path. It just seems weird to support the setting of global permissions on the bucket other than through kind of an opaque parameter, to the extent you want to do that at all. I just think it's weird to dynamically provision public global permissions.
H
It comes from the difference in the location of the provider, right? If you take a cloud service which is external to the cluster, then clearly creating, like, a public bucket for an application is not really contained at all. But if you take a provider inside the cluster, then suddenly "public" just means that it's not providing any IAM limitations inside the cluster, and then the networking in the cluster is the one really creating...
C
...you know, subject-initiated permissioning. But I also want to support, as a first-class capability, the ability to not have to specify those kinds of things: to just make them aware of the bucket, and then they can use it. And in order for that latter one to be a fully supported path, that means I've got to be able to support it even at bucket provisioning.
H
So by not standardizing, we are taking an opinion as well, right? But if we standardize something which is required, we are essentially... maybe we are forcing somebody to implement something irrelevant, and so it might be optional. It can be standard in the sense that you can specify that this thing might exist, but it doesn't have to be implemented.
C
Well, I think the two-layer model suggests that, and this is why it bleeds over a little bit: because normally we wouldn't think of putting any kind of permissioning at bucket-creation time, right? We were doing all of that in the access definitions. And so this is positing a model where the bucket access definition doesn't...
C
...have permissioning. It says: just rely on the underlying permissioning, and move that permissioning to the bucket. I don't think, personally, that ought to be the only thing we support, but if it is an allowed model that we support, I guess I can see some of the rationale for doing that. But I think it's a little bit tricky to express the fact that there are both of these models. And how do you... yeah?
C
Right, at the time. So the question would be: which thing do you think ends up providing you the info on the bucket? We were talking at some point about possibly having the bucket access be the thing that does that, and then it references the bucket. So that would mean you would need a bucket access even if you had no permissioning.
A
Let me pull out a point here; yeah, and that's another good point. It's something we haven't quite, at least as I've understood it, fleshed out yet, and that is: what are the responsibilities of the bucket request and the bucket access in terms of representing that data to the user side? Because, in my mind at least, I thought that the bucket request would be responsible for transmitting connection information, and the access just credential information, rather than wrapping them both into the bucket access.
A
Yeah, so the way I had been envisioning this up until now was that the bucket request, for both brownfield and greenfield, was responsible for transmitting connection information (URL, bucket name, etc.) back to the pod, and that the bucket access request was only responsible for transmitting credentials, if there were credentials. Otherwise, it would result, through the COSI controller, in an annotation on the service account to associate it, through something like a workload identifier or identity or whatever, to the cloud provider.
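The division of responsibility just described (connection details on the BucketRequest, credentials or a workload-identity annotation on the BucketAccessRequest) could be modeled like this. All field names and the annotation key are illustrative assumptions, not the actual COSI API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BucketRequestStatus:
    # Both greenfield and brownfield: the BucketRequest carries
    # connection details back to the pod.
    endpoint_url: str = ""
    bucket_name: str = ""

@dataclass
class BucketAccessRequestStatus:
    # Carries credentials only if credentials exist; otherwise the
    # COSI controller annotates the workload's ServiceAccount with a
    # cloud identity instead.
    credentials_secret: Optional[str] = None
    service_account_annotation: Optional[str] = None

def grant_access(use_workload_identity: bool) -> BucketAccessRequestStatus:
    if use_workload_identity:
        # hypothetical annotation key for a cloud workload identity
        return BucketAccessRequestStatus(
            service_account_annotation="cloud.example/workload-identity")
    return BucketAccessRequestStatus(credentials_secret="bucket-creds")
```

The point of the split: a pod always needs the BucketRequest's connection info, while the access path varies by provider integration.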
C
You've got one affordance, from the perspective of the workload, that keeps the workload from starting until the bucket is available and everything else. And so that one thing can say: okay, I'm using this bucket. That "I'm using this bucket" can, obviously, either as brownfield or greenfield, be an existing bucket or cause provisioning of a bucket, but it also then has its own lifecycle, so that it has its own sort of "I need to take care of my provisioning operations," etc.
A
Okay. It also simplifies workflows between greenfield and brownfield. And going back to the ACLs case: maybe I don't need access, because it's just public access within the cluster, but the workflow is still the same. I still have to create a bucket request; I still have to create a bucket access request with the appropriate reference; and that should still result in the same form of access, with connection information, in this...
A
Well, I would hope that we can design this, and I think we can, such that you can create one before the other; there shouldn't be a sequential mandate on that. And given that it'll be handled by separate controllers, that should be pretty doable. The idea being: okay, so for greenfield, I would create a bucket request and a bucket access request. Now, I know the name of my bucket request, because I probably defined the name statically...
A
If I have to define the name statically, I know it. I would create my bucket request; the central COSI controller would detect this in my namespace and create a Bucket cluster-scoped object. A provisioner would detect the Bucket cluster-scoped object, which is a composition of bucket class fields and the bucket request, and the driver side of that provisioner would then create the bucket in the cloud provider.
A
It would write back data to the Bucket, connection information primarily. And then... we don't have a binding, because we're still following a kind of many-to-one bucket request to Bucket, but we would have to signal to the bucket request either that provisioning is complete or, if it were brownfield, that the connection is available. Once that's done...
A
...a bucket access request, with a reference to this bucket request, would specify either the service account in my namespace, if I'm trying to use the Kubernetes-service-account-to-cloud-provider integration that all or most of the cloud providers offer, or it would specify a secret, I would think by name, that does not yet exist; so it'd be like a desired name for the secret. And on the IAM side of this, the central controller would detect this and would create a bucket access object; that would result in either a new IAM identity or role, or it would perform the...
A
Yeah, so either it associates the service account, or it sort of creates an identity with credentials. One of the less elegant parts of this is that, because the provisioner doesn't have purview across the cluster, it has to store that credential somewhere so it can be passed back to the user. So it has to write an access secret within its own namespace, which the central controller (not depicted) will then clone to the user's namespace.
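The greenfield sequence walked through above (BucketRequest, then a cluster-scoped Bucket, provisioning, write-back of connection info, then the access/secret step) can be sketched as a plain-Python pipeline. Every name and value here is illustrative of the flow being described, not actual controller code:

```python
def greenfield_flow(namespace: str, bucket_request: str, bucket_class: dict):
    events = []

    # 1. Central COSI controller sees the namespaced BucketRequest and
    #    creates a cluster-scoped Bucket composed of class + request fields.
    bucket = {"name": f"{namespace}-{bucket_request}", **bucket_class}
    events.append("bucket-object-created")

    # 2. The provisioner's driver side creates the bucket in the cloud
    #    provider and writes connection info back onto the Bucket.
    bucket["connection"] = {"endpoint": "https://objects.example",
                            "bucketName": bucket["name"]}
    events.append("provisioned")

    # 3. Signal the BucketRequest that provisioning is complete
    #    (or, for brownfield, that the connection is available).
    events.append("bucket-request-ready")

    # 4. A BucketAccessRequest referencing the BucketRequest yields a
    #    BucketAccess; the provisioner mints credentials as a secret in
    #    its own namespace, and the central controller clones that
    #    secret into the user's namespace.
    secret = {"namespace": namespace, "name": "bucket-creds"}
    events.append("secret-cloned")
    return bucket, secret, events
```

In the real design, steps 1 and 4 are driven by watches on separate objects, which is why no sequential mandate between the two requests is needed.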
C
Also, and correct me if I'm wrong, but this also would allow for a different cardinality, right? You wouldn't have to have a 1:1 between bucket access request and bucket request. You could have one bucket request in the namespace and have several different workloads with their own access requests for that bucket, because those different workloads might run under different service accounts.
C
That's something that's going to bite us... or maybe not, but it just feels like that binding stuff is kind of there for a reason, and I just get worried about, you know, like you mentioned: if you're provisioning a bucket in response to a bucket request, that's pretty straightforward if it's greenfield; but if it's multiple brownfields... I guess you're operating on the bucket request, so your status and everything is just on the bucket request. Maybe it's okay.
C
Well,
that's
just
it
I'm,
not
sure
you
need
reference
counting,
because
if
we
really
split
it
up
between
green
field
and
brown
field
and
that
green
field
is
kind
of
one-to-one
and
or
or
or
that,
you
know,
I
guess
that
yeah
there
you
go.
The
green
field
case
is
what
scares
me
a
little
bit,
because
if
we
let
the
green
field
turn
into
brown
field,
if
you
end
up
or
you
have
multiple
references
to
green
field,
how
do
you
know
how
to
clean
up
right?
B
Like, if you create it here, you manage the lifecycle here, and you don't want it to be orphaned out in the wild, outside of the controls. So having the count there allows you to be proper about its cleanup. Is that what you're concerned about? Like: I created it here, I dereference it from every application, but that bucket still exists somewhere out there in the wild?
H
I think, if I can translate, and maybe you said it, maybe not: it seemed like you said you're more worried about greenfield turning into brownfield, and I think you meant, maybe, because of the ownership that this greenfield allocation has over the bucket, the bucket is also assumed to be deleted when that application is deleted or changes state.
H
So, just to complete that: on typical brownfield, you don't take ownership; you never delete; you always retain, or whatever, right? It means you just import a bucket to use and don't really control it, and you are worried because we're adding control, and we'll have applications controlling the bucket.
H
But it's not clear that we define that: if I sort of am the owner and I'm deleting it, the other applications will have stale references. Versus, okay, another option, which you might like: if I delete it, I transfer it, so it's just a floating ref count until the data can be expired. So there are models for that as well.
F
The question is: let's say we have some sort of retention policy set while creating the bucket, saying "delete this once the job ends." So the same controller that's responsible for creating the bucket would just go clean it up, is that the idea? So is that going to be a sort of automatic deletion after the workload ends? Right, that's...
C
You can. In fact, the way that's solved is exactly the way I suggested this might have to be solved, which is that it's a doubly bound thing, 1-to-1 between PVC and PV, right? If you have the 1-to-1, then that model, I think, completely works. The problem is when you don't, and I'm suggesting: maybe we don't do a strict many-to-one where we lose the "many." We literally just keep the list of all the references at the Bucket level, and then we have, effectively, a double binding.
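Keeping the full list of references at the Bucket level, rather than a strict many-to-one that loses the "many," might look like this toy model. The deletion rule shown (delete a greenfield bucket only when its reference list empties; never delete brownfield) is one of the options being debated here, not a settled design:

```python
class Bucket:
    def __init__(self, greenfield: bool):
        self.greenfield = greenfield
        self.refs = set()  # names of BucketRequests bound to this Bucket

    def bind(self, bucket_request: str):
        self.refs.add(bucket_request)

    def unbind(self, bucket_request: str) -> bool:
        # Returns True when the underlying bucket should be deleted:
        # only greenfield buckets are owned by the cluster, and only
        # once no request references them.
        self.refs.discard(bucket_request)
        return self.greenfield and not self.refs
```

A brownfield Bucket is imported, never owned, so unbind never triggers deletion; the race conditions mentioned next come from keeping this list consistent with the namespaced requests.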
H
But you have fewer race conditions when you have the information in one place, right? In this case, it's easier... anyone worried about the deletion point, right? Deciding to delete something... I think it's probably solvable in one or two ways, though, just to make sure that something will not get accidentally deleted.
F
That's gonna make life so much easier. I am still wondering about the whole delete-automatically kind of feature, where you create a bucket, its retention policy is set to delete, and it just gets cleaned up on its own. Well, how about we make it so we never delete by default, and deletion could be a whole other operation, like a bucket deletion request? We'd still need the binding.
H
I think it's a very simple usability thing, you know. It might actually be difficult for drivers to implement it sometimes, of course, and maybe that is a good case for separating it out somehow, because cleaning all the data synchronously might be complex in some ways. Of course, if you do implement it, it's a very good feature for automation.
H
I think it mostly makes sense when you automate: if you run a CI task or whatever, anything automated, triggered from somewhere, which allocates a bucket, does some work, maybe even spins up more applications to share that bucket in some way or another, and then shuts it down, and you want everything to be cleaned up. You don't want to leave anything behind. Anyway, you can...
F
Thinking about it, I can't think of any Kubernetes resource that has this "deletion request" kind of concept, and I think the reason is that if you have this concept, you're essentially making it imperative, because you can only delete after it's created. Yeah, probably a better idea to drop all this, yeah.
F
The original reason I thought of this was: if deletion wasn't part of creation, if it wasn't a retention-policy thing, and if deletion was always manually invoked somewhere, basically by creating a bucket deletion request, then maybe we didn't need the bindings, is what I thought. I don't know how I got there, but yes, it doesn't make sense. So...
F
As Andrew is saying, I think that's the best approach, at least so far: we have a binding for every bucket request to a workload. So each time a new workload needs a bucket, they make a separate bucket request, and we know exactly what the bindings are, even if they bind to the same bucket. So you can have multiple bucket requests binding to the same bucket in a given namespace, and cleanup would know how to clean up, with all of this listed out.
C
Yeah, just to be clear: I wasn't trying to necessarily strongly propose that model. I was just saying that I have been uncomfortable with the fan-out of multiple requests pointing to the same bucket, because I just haven't seen it all fleshed out what the lifecycle management looks like. All I was pointing out is that I think it'd be worth going through all the lifecycle-management cases and figuring out how provisioning and de-provisioning work, and, you know, when you're gonna be doing...
A
Definitely. I know that we had considered early on keeping a list of references on the Bucket, as Guy pointed out, where we had some concerns about the reliability of that list and how easy it would be to keep it actually up to date. But I don't think that's a reason not to explore it as a solution.
A
Keeping the actual list of references up to date with the actual bucket requests in the cluster that do reference that bucket... Or, to put it the way I think I put it: you'd now be using two separate models, right, where you have a list of references in the Bucket, but then you also have to query all the bucket requests and see who references the bucket. Is that what you were saying, Guy, or am I off track here?
H
It's the same information from a different angle, but I do agree with Andrew: I think we should go over it. Maybe next we should, you know, go over how the bucket request cardinality and sharing work, and the use cases, so that we see which tweaks we need to the API; for example, deleting the data versus not deleting the data, etc. Right.
F
On the different-cloud part: one of the requirements that comes up is a project ID or owner ID that's required. I'm not sure we accounted for this in the existing API spec. If, say, I'm using Google Cloud, I'm running a Kubernetes cluster in there, and I want to use storage buckets from the same cluster, the Kubernetes cluster could have multiple Google Cloud projects using buckets. So, with that, would that be a field that's passed in by the workload in the bucket request, or would that be a field...?
D
The way that we handle it on the CSI side is we require the driver, when it gives us a unique ID, to make it effectively complete. In this case, it's a fully qualified ID that's returned by, for example, Google Cloud: when it gives you a volume ID, instead of it just being the name of a persistent disk, it'll be a fully qualified name, including the project and zone, so that when Kubernetes returns that to the CSI driver at a later point, it can operate on it.
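The CSI convention referenced here is that the volume handle is fully qualified, e.g. a GCE persistent disk as `projects/<project>/zones/<zone>/disks/<name>` rather than a bare disk name. A bucket analogue could recover the project from the handle instead of carrying a separate field; the parser below is a sketch, and any bucket-handle format would be hypothetical:

```python
def parse_handle(handle: str) -> dict:
    # Expects a fully qualified ID such as
    # "projects/my-proj/zones/us-central1-a/disks/my-disk".
    parts = handle.split("/")
    if len(parts) % 2 != 0:
        raise ValueError(f"malformed handle: {handle!r}")
    # Pair up (collection, name) segments: projects -> my-proj, etc.
    return dict(zip(parts[0::2], parts[1::2]))
```

With this convention the driver never needs out-of-band context: the project and zone travel with the ID itself.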
C
That would mean they would need a mechanism: either they would have to have permission to another project, in which case it's just IAM again, right, and so you just do things with full references that include the project ID and everything. But generally, clusters are project-scoped resources themselves, as are disks. So none of these things span projects, and the only way to span is to explicitly do operations in a... sorry, across projects.
C
You would have to... So, the problem here is, I think everybody understands the provisioner is going to have a set of permissions, okay? The question here is: how do you want to limit the use of that provisioner by other users? That provisioner could potentially have access to resources outside of the project, but you want to keep this user from having access to another project.
C
I think the only way you can do that is with bucket parameters in the bucket class, and not put that kind of thing in the bucket request. But we're sort of not allowing for explicitly asking for names and stuff like that anyway, so I think that kind of thing should already have been handled through the bucket class, right? References to other projects or anything else. Oh...
A
To rephrase: the Bucket cluster-scoped object would have to have been written and have to exist, and then that's how I would get access to a brownfield bucket. Mm-hmm. And we've talked about allowing deny lists, specifying namespaces, in that cluster-scoped object as a way of kind of controlling access in that way.
C
Then, in the cluster-scoped objects: basically, that's how you control what individual namespaces can get to. You craft a Bucket, or you craft a bucket class, and you say: you have access to this bucket class, or you have access to this Bucket. Which actually is an interesting point: having access to the Bucket is easy.
A
Yeah, I had been thinking about doing that on the Bucket object, the cluster-scoped one, but we could do that for the bucket class as well, because there are two points of entry here. At that point, right: bucket class is strictly greenfield, and the Bucket cluster-scoped object wouldn't exist yet. The inverse is true for brownfield, where the bucket class isn't considered.
A
Okay. In the last minute here: I actually had a question for you regarding the new repos. This process turned out to be a little more complicated than I anticipated. It looks like, now, if we want either a new or a migrated repo, we have to have a subproject in the SIG. According to the sig-storage charter, that requires a KEP and then consensus among SIG leadership. So I looked for examples of this; I didn't see any KEPs, but obviously sig-storage has subprojects.
D
I mean, if it were really cross-SIG, that's another story, I guess. I'm just saying I don't know exactly what you are trying to propose, but there are examples for subprojects already. If you want a working group, then I think that is a little bit more involved. But you can ask me offline if you want to. I actually need to drop; I have another meeting. Okay.