Description
Kubernetes Storage Special Interest Group (SIG), Object Bucket API Review Meeting, 22 July 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A: Okay, thank you. All right, so in terms of the KEP itself, this is how I've written out the API reference now. It looks very much like how the APIs look if you look at the kubernetes/api repository; I've used the same tags and, you know, followed the same style here as much as possible, and done the same for the other resources too. Other than that, I think we talked about BucketInfo, where earlier we had the ObjectMeta and TypeMeta (so kind, metadata, name and all that), and also some fields that weren't absolutely needed there, like the provisioner name. I've gotten rid of those and, you know, kept only what's absolutely needed. What else... other than that, I've actually, yeah, finished out the driver gRPC API. I think it's ready for review at this point.
A: One thing I can see being added here is maybe a few more diagrams to make things easier to understand, but that's really just a nice-to-have, and more info about the architecture, like explaining why we have it structured like this: only one controller manager kind of makes sense just as it is, but why only one controller manager with multiple drivers in sidecars and a cluster-wide node adapter, and also things like how this leads to portability or scalability. Like, fully fleshed out and everything. So, okay, moving on to the next steps, can you...
B: Can we paste the link to the doc in the SIG Storage and COSI slack channels and, yeah, get some input from the group?
E: Nope. Can you hear me? I'm trying to remember too. I thought we were still going on about the access, like the BucketAccess, and the exact fields that would be available in it.
A: What we were trying to work out seemed like something really important. Some of the things I followed up on were obviously the KEP, and what we were trying to architect; it either had something to do with the back of the KEP, or it had something to do with one of these structures. We were trying to figure out what needs to go where. I think we were trying to talk about, not access granting... I think it was something to do with this.
E: Because the flow of information is: the SP (storage provider) produces an access when you call GrantAccess; we have to store that in our BucketAccess object; and then, when a pod gets bound to a BucketAccess, we have to read the information back out of the BucketAccess object and generate this JSON file. So the flow of the information goes from the SP to the BucketAccess object to the bucket.json file, and it all needs to look more or less the same in those three places, I think.
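To make that flow concrete, here is a minimal Go sketch of the pipeline. The field names are assumptions for illustration, not the final COSI schema; the point is that the SP response, the BucketAccess object, and bucket.json carry the same shape.

```go
package main

import (
	"encoding/json"
	"os"
)

// S3Info is an assumed protocol-specific section of bucket.json.
type S3Info struct {
	Endpoint   string `json:"endpoint"`
	BucketName string `json:"bucketName"`
	Region     string `json:"region"`
}

// BucketInfo mirrors what the BucketAccess object would carry, so that the
// same shape appears in all three places the speaker lists.
type BucketInfo struct {
	Protocol    string  `json:"protocol"` // e.g. "s3"
	S3          *S3Info `json:"s3,omitempty"`
	AccessKeyID string  `json:"accessKeyID,omitempty"` // produced by the SP on GrantAccess
	SecretKey   string  `json:"secretKey,omitempty"`
}

func main() {
	// Read back out of the BucketAccess object (stubbed here) and write
	// the bucket.json the workload pod will consume.
	info := BucketInfo{
		Protocol:    "s3",
		S3:          &S3Info{Endpoint: "s3.example.com", BucketName: "cosi-1234", Region: "us-east-1"},
		AccessKeyID: "EXAMPLEKEY",
		SecretKey:   "examplesecret",
	}
	data, err := json.MarshalIndent(info, "", "  ")
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("bucket.json", data, 0o600); err != nil {
		panic(err)
	}
}
```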
A: All right, so let me recall where we were last week. Okay, it just came back to me. So this is going to be the new provisioner CreateBucket call, where you pass the name and parameters. We don't have any protocol structures with protocol-specific parameters going in, but the response field will have protocol-specific structures that will be filled in by the driver with the appropriate information.
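A rough sketch of the call shape being described, written as Go structs standing in for the gRPC messages; all names here are assumptions, not the final COSI proto.

```go
package cosisketch

// CreateBucketRequest carries only protocol-agnostic inputs: a name (the
// idempotency key) and opaque, driver-specific parameters.
type CreateBucketRequest struct {
	Name       string
	Parameters map[string]string
}

// CreateBucketResponse, in the variant described here, is where the driver
// fills in protocol-specific structures with the appropriate information.
type CreateBucketResponse struct {
	BucketID string // driver-chosen identifier, opaque to COSI
	// At most one of the protocol sections below is set by the driver.
	S3  *S3Details
	GCS *GCSDetails
}

type S3Details struct {
	BucketName string
	Endpoint   string
	Region     string
}

type GCSDetails struct {
	BucketName string
	Project    string
}
```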
A: And then you're saying this would work as well as the other one. Is there any reason to choose one versus the other? Because, if what you're saying would work, where we get the entire BucketInfo structure back as a response to the provisioner's GrantBucketAccess...
E: Yeah, that was my idea. You're right, some of the fields could in principle be supplied back earlier, but they're not going to be usable by anyone until you grant access, so why not just return them all in BucketAccess. There has to be some corner case where, like, maybe you don't know the name until you grant the access, or maybe the name could vary per access under certain circumstances.
E: ...is supposed to do. Say, for instance, things like bucket name are going to be protocol-specific. So if we can keep the protocol-specific stuff out of CreateBucket and have it only in BucketAccess, then I think we have a simpler API; the bucket name is going to be protocol-specific.
E: I think the difference is: because it's a protocol-specific field, if you put it in GrantAccess, it's very easy to just add it to the rest of the protocol-specific fields. If you put it in CreateBucket, then CreateBucket also needs to have a protocol-specific set of responses and some sort of discriminated-union struct on the reply from the gRPC call that creates the bucket. I was hoping that we could just avoid that and say, you know, the CreateBucket RPC is protocol-agnostic.
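Sketched the same way, the alternative being argued for here keeps CreateBucket protocol-agnostic and confines the per-protocol discriminated union (a `oneof` in protobuf terms) to the GrantBucketAccess reply. Names remain assumptions.

```go
package cosisketch

// With this shape the create reply needs no per-protocol branches at all.
type AgnosticCreateBucketResponse struct {
	BucketID string // opaque identifier; the only thing COSI needs back
}

// GrantBucketAccessResponse carries everything protocol-specific, since none
// of it is usable by a workload until access has been granted anyway.
type GrantBucketAccessResponse struct {
	Credentials string       // secret material; see the Secret discussion below
	Protocol    ProtocolInfo // discriminated union: exactly one field is set
}

type ProtocolInfo struct {
	S3    *S3Access
	GCS   *GCSAccess
	Azure *AzureAccess
}

type S3Access struct{ Endpoint, BucketName, Region string }
type GCSAccess struct{ BucketName, Project string }
type AzureAccess struct{ StorageAccount, ContainerName string }
```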
A: Okay, so in this scenario, we would take the entire bucket, the entire response from GrantBucketAccess, and store it into a Secret, and anyone who requests it just gets that whole Secret mounted into the file? Is that...
E: I wasn't thinking of making it all a Secret, but you're right that some of the information is secret, and if we just put it all in the BucketAccess we have a security problem, so we'd have to put it in a Secret. Well, yeah, if you could somehow put part of it in the Secret and part of it in the BucketAccess object, that might be a little bit nicer. But I sort of see where you're going... mm-hmm... well, wait a minute, no, okay, okay!
E: ...going to have exactly, like, these five fields, whatever they are, and so we can say: okay, of these five fields, these three are going to go in the BucketAccess, in some S3-specific portion of the kubernetes object, and then these two are going to get shoved in a Secret, the name of which will also be stored in the BucketAccess. So we can...
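A sketch of that split, with assumed field names: the non-secret, protocol-specific fields live on the BucketAccess object itself, while the secret fields go into a Secret that the BucketAccess references by name.

```go
package cosisketch

// S3AccessStatus is the S3-specific, non-secret portion stored on the
// BucketAccess custom resource ("these three fields").
type S3AccessStatus struct {
	Endpoint   string
	BucketName string
	Region     string
}

// BucketAccessStatus references the Secret rather than embedding credentials,
// avoiding the security problem of putting everything on the CR.
type BucketAccessStatus struct {
	S3                    *S3AccessStatus
	CredentialsSecretName string // Secret holding the remaining two fields
}

// S3Credentials is what would land in the referenced Secret's data.
type S3Credentials struct {
	AccessKeyID     string
	SecretAccessKey string
}
```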
A: Yeah, we can do that. So, put the bucket... so it's just, the bucket info should also reside in the Bucket, right? I mean, just in terms of what it signifies, like, what it's about.
A: ...the information for finally creating that bucket.json for the workload.
E: Well, you're returning the access info, which is all of the information you need to access the bucket. This...
E: So, I mean, yeah: it's not impossible to do it the other way; it just feels cleaner to do it all in BucketAccess from where I sit. But I want to hear what other people think. Does anyone care? I'm just thinking in terms of fewer lines of code: if everything's in the BucketAccess, then CreateBucket becomes really, really simple; you don't have any sort of discriminated union, you don't have complicated structures inside your Bucket object.
B: If the previous structure is the same, where there's a single kubernetes Bucket resource that is the abstraction of a back-end bucket, and there are multiple BucketAccess resources, because there can be different accessors to that same bucket, then it seems there's a natural division there, where properties specific to a bucket would be in the Bucket resource and properties specific to accessing that bucket would be in the BucketAccess resource. Are you proposing something different than that, Ben?
E: ...whatever it wants to do, and then it returns an ID back; it has to generate an actual bucket name, and then, when access is granted, it has to return that bucket name. But COSI doesn't care what the S3-level bucket name is, because it's just going to shove that into bucket.json and let the pod use it. It's not ever going to use that string for anything, so it's effectively opaque.
F: So, in the AWS example, right, a bucket ID might be an ARN, right? Like...
F: No, I don't know, but you're saying that, essentially... if I look at this more from the identifier side, having a unique identifier for my buckets, right: I might use an ARN, which is a longer string that represents my bucket resource in the back end, but for the purposes of protocol access I just need to provide this bucket name, and this is just for bucket access reasons. Right, right, okay.
E: I see. And the key is that, for idempotency, if you get retries on the create call with the same name, you have to be able to map that to the same ID reliably. That's the only trick, right: you can't literally generate a random ID every time you create a bucket, because if you get a retry, you need to return the same ID, so you don't generate two buckets in response to one request being repeated.
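A minimal sketch of that idempotency contract, with assumed names. The in-memory map is purely illustrative; a real driver would need the name-to-ID mapping to survive restarts, which is exactly the concern raised later in this discussion.

```go
package cosisketch

import "sync"

// Backend stands in for whatever object store the driver talks to.
type Backend interface {
	CreateBucket(name string) (id string, err error)
}

type Driver struct {
	mu      sync.Mutex
	byName  map[string]string // request name -> bucket ID already created
	backend Backend
}

// CreateBucket is safe to retry: a repeated name returns the original ID
// instead of creating a second bucket.
func (d *Driver) CreateBucket(name string) (string, error) {
	d.mu.Lock()
	defer d.mu.Unlock()
	if id, ok := d.byName[name]; ok {
		return id, nil // retry of a request we already satisfied
	}
	id, err := d.backend.CreateBucket(name)
	if err != nil {
		return "", err
	}
	if d.byName == nil {
		d.byName = make(map[string]string)
	}
	d.byName[name] = id
	return id, nil
}
```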
F: From, sort of, the perspective of the driver, the CR name is the key, right? It's the key to identify retries, basically, right?
E: Right, right, yeah. So you can use that as your bucket ID that you return, or you can create a new one. But if you do create a new one, it has to reliably map back to, or from, the...
A: ...input, then. So why are we modeling this? I'm confused. Are we still on topic? Are you still talking about access info coming entirely from the BucketAccess response, or the GrantBucketAccess response?
A: I would rather have it that way than to make it, you know, just ignore it.
F: ...that you have multiple names, folders, for the same thing, and you always need to go through some mappings, and confusion about which of all...
F: So who's creating... like, when I do the flow of bucket requests, right, when I'm provisioning, when I'm asking for a provision, who generates the bucket name, assuming...
F: The driver can decide how to name those, and it's, like, a cluster-global thing.
F: A cluster-global thing? No, it's a CR within a cluster. So it's like: if I have two drivers in my cluster providing COSI, I can't have both of them selecting the same names and colliding, right? So this is why the COSI controller is taking a UUID up front and saying: this is the bucket identifier for the...
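A sketch of that controller-side naming, under stated assumptions: the COSI controller picks a UUID-based identifier up front so that two drivers, or two clusters pointed at the same backend, never collide on the request key it hands to the driver. The "cosi-" prefix is illustrative, not a mandated format.

```go
package cosisketch

import "github.com/google/uuid"

// newBucketRequestKey returns the identifier the controller would hand the
// driver as the idempotency key for provisioning, e.g. "cosi-1b4e28ba-...".
func newBucketRequestKey() string {
	return "cosi-" + uuid.NewString()
}
```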
F: Right, right, and then passing this on as an identifier for the driver, so that it knows the request key, right: I'm provisioning a bucket with this ID, this UUID, and I can retry that and the driver knows it's the same one. That works. But then the driver returns back a bucket ID, which might be very different, right, or might be...
F: On naming: is there anything in the BucketRequest, like a prefix for the back-end bucket, that can be... there was once something like that, but I don't remember if it's still there. Like, some control for the user workload over the backend bucket name, any type of control. Because one of the management issues we had with OBCs was that, you know, we started having so many back-end buckets with generated names, and nobody knew which cluster they belonged to, which namespace they belonged to, right?
F: I mean, after clusters were already expired, like removed and dead for a while, right, it was quite difficult, without having some user-generated prefix or anything like that to identify them, to manage this backend.
F: Yeah, we did it with a prefix for the name, right, because we... yeah, I'm just explaining what we did, but there was a problem there: a lot of teams were using the same, in our case, AWS account, and it ended up, you know, being swamped with stale buckets which nobody knew what they belonged to, and who, and which cluster, and it was...
A: The metadata, so the... yeah, sorry.
E: Like, no, on the same storage controller, but created by, perhaps, a different kubernetes cluster that couldn't have known about the names that you chose. I see. So the problem is: within a kubernetes cluster you can ensure the names are unique, because the kubernetes API server will literally prevent you from creating one that collides with something else. But if you go to another kubernetes cluster and have another COSI plugin that points to the exact same storage controller, nothing prevents you from recycling the names. And so the way CSI handled...
E: ...this is saying, like: names are only promised, are guaranteed, to be unique within that instance of CSI, but not globally, because no one's in a position to guarantee that; you can only guarantee that it's unique within that instance. And so, if it turns out that you have a collision somewhere, you can... your...
F: Right, and then we're adding to this another layer, which is the protocol-layer bucket name, which, we're saying, might also need to be different in some cases than...
F: Yeah, so you want another level of freedom there, because we're saying that the bucket IDs might be, I don't know, whatever, too long for the protocol. I don't know, whatever.
E: ...that people are going to say: well, I don't like that, do it differently. And if we leave the flexibility in there to do it differently, then people aren't going to yell at us. The only requirement is that the bucket name is used for idempotency. Yes, that is how CSI did it, and I would advocate we do the same here.
F: ...that might not be usable in the protocol, and the ID should be somewhat more than just the CR name, because there might be multiple clusters. So these three make sense to me. The one thing that I'm missing is any type of, any level of, control from somebody, either the user or the administrator, over the generated bucket names in the back end.
F: I feel like it might be missing, or, you know, a little bit too difficult to manage without it.
F: Yeah, so I meant to say that managing... so I think we are missing some control over the backend name, right. The driver can provide it, but then, if COSI has no standard for it, it might be a little difficult. I don't know, we might say that it's a vendor-specific mechanism and, you know, we put it in a BucketClass or something like that.
A: This is saying... yeah, so, yeah, I see where you're coming from. For manageability it makes sense to have something like that. And, yeah, I can see there being a problem if we said, you know, the driver will retain that mapping, because...
A: ...if the mapping is lost, then, you know, that whole information is lost. So, like Ben was saying, we need a mechanism to store that metadata in the back end, right? Otherwise you can't do this.
A: That just makes the name longer, I mean... no, no, it makes it easier for an admin later on to see which buckets came from which cluster. Say we have 10 kubernetes clusters, each of them creating buckets, right, but...
E: But the driver, what I'm saying is, the driver could already do that if it wanted to: whatever prefix we put on, the driver is free to strip off, because we already said you're free to ignore the name or mangle it. And if you don't put a prefix on, the driver can add its own prefix, and the driver knows the name of the cluster, and it knows that it's COSI, and so it could put "cosi"...
E: It could format the names the way you want, and you're entirely free to do that within the driver, and I think that's the better way, because there could be implementations where there are very severe length limits that we don't want to exceed. So we just want to give the minimal string from the COSI layer for that.
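A sketch of that driver-side freedom, under assumptions: the driver may ignore or mangle the name COSI hands it, for example adding its own cluster-identifying prefix while staying inside a backend length limit.

```go
package cosisketch

import "strings"

// maxBackendNameLen is an assumed backend limit (S3 allows 63 characters).
const maxBackendNameLen = 63

// backendBucketName derives the real backend name from the opaque COSI name.
// Both the prefixing and the truncation are driver policy, not COSI policy.
func backendBucketName(clusterName, cosiName string) string {
	name := strings.ToLower(clusterName + "-cosi-" + cosiName)
	if len(name) > maxBackendNameLen {
		name = name[:maxBackendNameLen]
	}
	return name
}
```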
A: A risk that exists regardless. But, yeah, I see what you're saying, I see what you're saying.
A: We're trying to mandate what's the right way to use buckets; I mean, we're trying to move the industry forward. I don't think it's such a terrible thing to mandate, or try to give suggestions on, how a bucket should be named. More than likely, vendors are just going to use the names that we provide.
A: All the bucket names should be, you know, DNS-addressable. So as long as it, you know, doesn't break the regex for, you know, a URL... which, not even, yeah, not a complete URL, just a domain.
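For what "DNS-addressable" would mean in practice, a small sketch assuming the usual RFC 1123 label rules that kubernetes applies to most object names: lowercase alphanumerics and '-', starting and ending with an alphanumeric, at most 63 characters.

```go
package cosisketch

import "regexp"

var dns1123Label = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)

// isDNSAddressable reports whether a generated bucket name could be used as a
// single domain label, per the constraint discussed above.
func isDNSAddressable(name string) bool {
	return len(name) <= 63 && dns1123Label.MatchString(name)
}
```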
E: Then it should just work. But the key is that all of the ones created by COSI are going to have these random names, and, you know, this is all automated, buckets created by automation, right, so humans shouldn't be looking at the names and gleaning any information from them. The only thing humans...
E: You know, if you're actually administering the back end and not the kubernetes cluster... and the driver is free to attach that kind of information as needed, but the individual names are just random. So, you know, alice will create some buckets and bob will create some buckets, and the only...
E: ...did with Trident, you know: we have all kinds of metadata that you can attach to the Trident driver object, and that gets stamped on the metadata of the volumes Trident creates, and it's very flexible what you can put there. And so we don't have any dependency on CSI giving us useful information, or kubernetes giving us useful information; we just, you know, ask the user: what information do you want stamped on...
E: ...the volumes created by this driver on this back end? And we'll do it. So you tell us that, and then we're responsible, right. So it's something...
A: Would it have been, yeah... would it help if we, as COSI, gave clear guidelines that it helps to have metadata about the cluster, the driver ID or something, you know, just information that can uniquely identify where this bucket came from?
E: Yeah, but that's something that, like, CSI doesn't provide to you, and you would have to go get it yourself if you really... right, and that was by design in CSI, and you can debate whether it was a good idea.
E: The PVC is bound to the PV, so, given the PV, you can find out what PVC it's bound to, and, you know, kubernetes knows the namespace; it's just that the driver doesn't, because it's not relevant. So as long as you don't lose your kubernetes database, you have all the auditing information you need.
F: Yeah, it's true for a lot of backend resources, like, I don't know, load balancers, services... I'm just looking at AWS as an example, but there are more of those leaked resources on the back end. The problem is always that people ask you, as the provider of the driver or something, to provide some mechanism, and anybody can do it, right, I'm not saying you can't, but I'm pretty sure every driver will be required to do something like that.
F: ...the user level, so the driver can do that on behalf of the user if it can track back to which namespace created the BR and all that.
F: But I'm a little bit nervous about having no control from the user level over this, about having no tags at all, right, as a user, for what I'm creating, and I know I'm creating some heavier resources in the back end. What...
E: ...do that with CSI is because CSI wasn't envisioned to be kubernetes-specific, and so, like, what would namespace mean on something other than kubernetes, which doesn't have that, right? I was gonna say: if we're willing to tie ourselves to kubernetes, then that argument goes away, and you can say: okay, we're going...
A: I think, I think, guys, that would solve the problem, right? What if we started passing in a meta field into all of the requests, actually, all of the create requests: CreateBucket requests and GrantBucketAccess requests. Like a meta field, I don't mind calling it meta, which has, you know, just information about where this request came from.
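A sketch of the proposed meta field, with assumed contents: COSI-inferred, informational-only data stamped onto every create-path request. Per the discussion that follows, drivers must be free to ignore all of it.

```go
package cosisketch

// RequestMeta is informational only; it is not an input to provisioning.
type RequestMeta struct {
	ClusterName string            // which kubernetes cluster issued the request
	Namespace   string            // namespace of the originating BucketRequest
	Name        string            // name of the originating BucketRequest
	Labels      map[string]string // labels copied from the BucketRequest
}

// MetaCreateBucketRequest shows the field added on the provisioning path.
type MetaCreateBucketRequest struct {
	Name       string
	Parameters map[string]string
	Meta       RequestMeta // new field under discussion
}

// MetaGrantBucketAccessRequest shows the same schema on the access path.
type MetaGrantBucketAccessRequest struct {
	BucketID string
	Meta     RequestMeta
}
```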
F: Do you mean, is this information something that I, as the creator of the BR, provide, custom metadata that I can provide to the...
A
Just
be
cozy
infer
data
like
the
name.
F: We will collect, like... so it will be another tiny schema of fields that we add on to the request, so that drivers can use it to identify a workload within the cluster, or something like that. Is that...
D: Yeah, it'll use the create request, or, you know, bucket name and [inaudible].
F: Yeah, I'm gonna think about it a little bit more. I think it does make sense. Because, if we really open it up... so, I mean, what we did eventually with OBC is: we did open it up for the users to provide whatever they want to the driver, but I know that for COSI it's not something we want to do, because it will break portability in some way, or it can break portability; it can be an opening for that.
A: Yeah, I think this metadata going in through the driver would help a lot. But what kind of metadata would be in there? Let's talk about that, we have a few minutes; maybe just a quick glimpse into what we should have. Can we discuss...
E: Right, right, but I mean, yeah, people want to know for historic reasons: was this a gold bucket? Was this a silver bucket? You know, like, maybe in my specific installation silver means different things on different days of the week, and so it's not so useful, but there are installations where it absolutely means something to somebody. I think that the most important thing is, for any of this kind of metadata, we have to explicitly say the driver is free to ignore all of it. This is, you know, purely for your information.
F: What if the entire BR goes in there? I mean, what makes anything in the BR irrelevant for the purposes of logging and, you know, tracking?
F: Is there anything where you might say: well, I would never want that to be part of my logging or anything like that, for the driver?
A: Deletion policy... this is the BR; the fields here are, let's see, name, protocol... oh, the retention policy is not in the BR, sorry, it's just protocol and bucket class. I'm sure we can put this whole thing in, and the metadata.
A: Right, right, but more than bucket class name, yeah: why not name, namespace, and labels, rather than name, namespace, and bucket class name? Those three make way more sense to me than having just the bucket class name. It makes sense to have the fields in the BucketClass, but I don't know if it makes sense to have the name of the BucketClass.
F: ObjectMeta, sure. I'm sorry, guys, I really have to drop, but I'm gonna check back next time.
E: A thought just popped into my head which could throw a wrench into this: what if someone just creates a Bucket without a BucketRequest? That's still going to cause the provisioner to do its thing, right?
A: Okay, everything is going forward, all right. So, yeah, I won't be making... I won't be talking about these things in the KEP right now. Yeah, please take a look at the KEP; the link is in the SIG Storage COSI slack channel. Give your reviews and leave your feedback on the PR itself. That'll help, you know, improve it before the API reviewers, you know, even have to go through it. Yeah, that's all from me.
A: Let's catch up again next Thursday and we'll continue this discussion on metadata in the gRPC request. Cool.