From YouTube: KEP Review: Object Bucket API (May 6 2020)
B
Okay, cool. So where we left off last week was a kind of hanging question of how to dynamically represent existing object store buckets, such that administrators don't have to create a BucketContent object for every new connection that's made to them. The initial solution proposed that we kind of overload bucket class, and that doesn't really feel good to anybody. So Guy, Jeff, and I have been throwing around some ideas since then, and we have a different proposition. Before I share my screen here, I'm just going to describe it a little bit.
B
So the idea with buckets is that they differ fundamentally from volumes in that they are globally accessible over the network, so long as you're within that network, and they're naturally read-write-many, as opposed to volumes, which are not. Now, we're not familiar with why the one-to-one binding comes up with PVCs and PVs, why that's a mandated thing, but we don't think it's necessarily a good fit for buckets.
B
With a BucketContent object being accessible from multiple Bucket APIs, the fit for greenfield and brownfield seems to be pretty easy, and in terms of shareability it would allow access from multiple namespaces, which I know in the volume, block, and filer world is typically a no-no. With buckets, we wanted to get the SIG's temperature on how they feel about this. Buckets normally are — well, at least there are use cases where they are publicly reachable from the internet, from anywhere within a network — and to represent that sort of public access...
B
...it kind of makes sense that the BucketContent represents that endpoint, and people just say they want to connect to that endpoint. First off, can everyone see my screen clearly? Yeah, okay. So the way we re-envision this is that, given a greenfield case — okay, so I want to dynamically create a bucket — the user first creates their Bucket object to reference a bucket class. So that remains the same. The backend bucket is provisioned, and a BucketContent object is created for it; it references the bucket and is then bound to the originating Bucket API object. So the sense of binding still exists in greenfield, but serves a slightly different purpose. Once created, BucketContent objects can then be accessed by other namespaces in sort of a static-binding way, where a Bucket is created.
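To make the greenfield flow just described concrete, here is a minimal sketch of what the two objects and their references might look like. Everything here is an assumption for illustration — the API group, the kinds, and field names like `bucketClassName`, `bucketRef`, and `bucketContentName` are invented, since the actual schema was still under discussion in this meeting.

```python
# Hypothetical object shapes for the greenfield flow described above.
# All field names and the API group are illustrative assumptions.

# Namespaced object the user creates, referencing a bucket class.
bucket = {
    "apiVersion": "objectstorage.k8s.io/v1alpha1",  # assumed group/version
    "kind": "Bucket",
    "metadata": {"name": "my-bucket", "namespace": "app-1"},
    "spec": {"bucketClassName": "standard"},
}

# Cluster-scoped object created by automation once the backend bucket is
# provisioned. It back-references the originating (owning) Bucket.
bucket_content = {
    "apiVersion": "objectstorage.k8s.io/v1alpha1",
    "kind": "BucketContent",
    "metadata": {"name": "bc-my-bucket-1234"},
    "spec": {
        "bucketRef": {"name": "my-bucket", "namespace": "app-1"},
        "backendBucketID": "example-backend/my-bucket-1234",
    },
}

# Binding: automation fills in the forward reference on the owning Bucket.
bucket["spec"]["bucketContentName"] = bucket_content["metadata"]["name"]

# A follow-on (accessor) Bucket in another namespace binds statically to
# the same BucketContent by naming it directly.
accessor_bucket = {
    "apiVersion": "objectstorage.k8s.io/v1alpha1",
    "kind": "Bucket",
    "metadata": {"name": "my-bucket", "namespace": "app-2"},
    "spec": {"bucketContentName": bucket_content["metadata"]["name"]},
}
```

The point of the sketch is the reference topology — one owning Bucket with a back reference from BucketContent, plus any number of accessor Buckets pointing at the same BucketContent — not the exact field names.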
B
So this is kind of an example of what the API would look like. You have your originating slash owning Bucket; it has a reference to the BucketContent, and the BucketContent has a back reference to the owning Bucket. That's all set through automation. Follow-on buckets — accessor buckets — then reference this BucketContent directly to access the backing store. As for cleanup, it requires a little figuring out. Generally, we think the way object stores work is: if I own the bucket, and I have people accessing the bucket...
B
...should the owning Bucket be deleted, and the retain policy is retain, the BucketContent would be deleted, but those who have access can still access that bucket, which is still reflective of how this would represent the real world. If we're not deleting the backend bucket, then by deleting the BucketContent we're effectively disallowing more accessors from connecting to that bucket, but we're not ripping it out from under workloads that may need it.
D
I have a question. It's my understanding that some providers don't support deleting a bucket that is not empty. So I guess, if this is a very large bucket, after a workflow is completed, the responsibility will now fall to the driver to perform the whole delete operation on the provider, in the case that the release policy is delete — correct?
B
Yeah. So the Kubernetes controller-side automation is not going to be checking whether or not there are objects in there. If the driver does not support deleting while there's still data in there, then it's up to the workload to clear that out. Otherwise, you know, if there are objects in there, it should return an error up to the Bucket layer, so that we record the event. Users can see they can't delete because of X, Y, and Z, and they can take the necessary actions. Otherwise, yeah, it's up to the driver.
D
If they delete the bucket, right — and I guess we error out, we couldn't delete it, but we delete the Bucket resource anyway, and the BucketContent — and then another Bucket resource of the same type is created, and the BucketContent realizes that, well, the bucket already exists, so it cannot be created, right? So will that fall into the brownfield flow, where you had the assumption that you were going to create a bucket, but the bucket is pre-existing because the delete wasn't successful the previous time? Well...
D
My worry will be that, if someone's trying to do cleanup in a namespace, they cannot delete the Bucket because a BucketContent has a reference to it. Maybe if we release — I mean, if we remove that reference from BucketContent back to Bucket — then, in case the Bucket deletion goes through, we just leave the BucketContent alone, and then, I mean, the Bucket resources can be deleted successfully, no?
B
Yeah, true. So there are a couple of different ways to approach this. The first way, that's diagrammed here, is kind of the one that was off the top of our heads. Another way to do it would be, rather than having the owning Bucket manage BucketContent cleanup, to check for owner references — for references in the connections, right? So in our user-facing API objects, if a delete occurs, we would check to see if anyone else references that BucketContent.
B
If no one else does, then it's safe to delete per the retain policy. If we can't delete because there are objects in there, again, that's just an error: we can't delete, and we're not going to clean it up, depending on the driver. So that was another way we thought about maybe managing cleanup operations. Questions?
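The reference-checking approach just described can be sketched as a reconcile-time check: a BucketContent is only eligible for finalization once no Bucket anywhere still references it. This is a toy model, assuming each Bucket carries an illustrative `bucketContentName` field — not actual controller code.

```python
def can_finalize(bucket_content_name, all_buckets):
    """Return True when no remaining Bucket references the BucketContent,
    i.e. it is safe to apply the retain/delete policy. Toy model of the
    reconcile-time reference check discussed above."""
    return not any(
        b.get("spec", {}).get("bucketContentName") == bucket_content_name
        for b in all_buckets
    )

buckets = [
    {"metadata": {"namespace": "app-1"}, "spec": {"bucketContentName": "bc-1"}},
    {"metadata": {"namespace": "app-2"}, "spec": {"bucketContentName": "bc-1"}},
]

# Two namespaces still reference bc-1, so cleanup must not proceed.
print(can_finalize("bc-1", buckets))       # False
# Once the last referencing Bucket is gone, cleanup may proceed.
print(can_finalize("bc-1", buckets[2:]))   # True
```

This is the same idea F later calls the "shared pointer" model: deletion of the shared resource is gated on the count of live references, not on a single owner.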
B
That is dependent on the retain policy that's defined in the bucket class, which will then be populated into the BucketContent object. The way we've defined them thus far, we have delete and retain. If delete is specified, then the Kubernetes automation will tell the driver to delete the bucket. Now, if the driver can't do that for some reason, the driver returns those reasons back to Kubernetes and reports them. Retain...
F
That's the policy. The policy says: what should we apply to the actual world when this is deleted? So a BucketContent can have a retain policy saying "I would like to retain the bucket when I'm deleted," or it can have a policy saying "I would like to delete it if empty," or it could have a policy saying "I would like to delete it, including content," right? Sure.
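The three policies F enumerates — retain, delete if empty, delete including content — can be modeled as a small decision function. The policy names and the not-empty error string below are assumptions for illustration, tying together this enumeration with the earlier point that a failed delete should surface as an error event rather than be silently handled.

```python
def apply_deletion_policy(policy, bucket_is_empty):
    """Decide what the driver should do when a BucketContent is deleted.
    Policies model the three options discussed: Retain, DeleteIfEmpty,
    DeleteIncludingContent. Names are illustrative, not from the KEP."""
    if policy == "Retain":
        return "keep-backend-bucket"
    if policy == "DeleteIfEmpty":
        if not bucket_is_empty:
            # Surfaced as an event so users can see why deletion failed
            # and take the necessary actions (e.g. clear the data out).
            return "error-bucket-not-empty"
        return "delete-backend-bucket"
    if policy == "DeleteIncludingContent":
        return "delete-backend-bucket"
    raise ValueError(f"unknown policy: {policy}")
```

Note that only the middle policy can fail: retain never touches the backend, and delete-including-content delegates the recursive object deletion to the driver.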
D
That's where my only concern will be, actually: coupling the BucketContent's reference to the Bucket. I mean, I can understand the error happening on the driver and keeping the BucketContent, because now we need to keep some sort of record — like, we created this bucket and it cannot go away, so we're going to keep it — but the Bucket resource can now be deleted itself. And then, if it gets recreated again, I guess it will get re-coupled with the BucketContent, and then everything will go smoothly, right? From here...
F
Say we create a Bucket, and then the controller creates a BucketContent, because it's a greenfield case — we're asking for new provisioning — and then the driver would create a bucket. Done. We have a Bucket bound to a BucketContent. Another namespace then asks to bind to the same BucketContent, and we bind it. Now, the original Bucket gets deleted, right? And the Bucket that provisioned it had a setting saying that it wants to delete the BucketContent when it gets deleted, like an owner of the bucket.
F
In that case, the controller would go and delete the BucketContent and would apply the retain policy on the bucket. But in any case, that will make the other Bucket — the second Bucket, from the second namespace — unbound, right? Because it cannot be bound to that BucketContent; it doesn't exist. So essentially, this is just a named reference which we are respecting. Also, the owner is somebody that manages the lifecycle of the BucketContent automatically, for the very owner-centric greenfield cases, right? This is where this use case is more relevant.
F
Well, just a few... so I think the reason for it is that — though we don't have to, to be honest — I think it's just a view of an application that might be a complete owner of the data, rather than the administrator being the owner of the data. I'm not sure whether it's more popular or less popular to think about it this way, but I think it makes some sense to give applications their own.
D
I guess, when that happens, the second namespace trying to access the same bucket wouldn't be as common, I would like to imagine — I guess unless you're running the same application in two different namespaces and they tend to create the same bucket at the same time. Then maybe we will run into this situation where two Buckets reference the same BucketContent, right? But...
D
I actually like it, because in the design from last week you had one BucketContent for one Bucket, and you said that that was for MVP — oh, I don't know who wrote it — and I like that now we are reusing a single BucketContent for two Buckets. But then again, I think it won't do any harm to decouple the ownership of the BucketContent from the Bucket and just try to do our best effort to delete.
F
So that's just another option we discussed: that we would actually drop the reference. So we don't keep a reference — can't we just query to understand how many references are left, at each point that we reconcile a BucketContent? You know, we can reconcile a BucketContent, discover that there are no other Buckets referencing it, just by querying by label or something of that sort, and then decide to delete it.
F
That's like the shared-pointer model for what we want to do, I think. The other way to look at this is that there are different ways to manage buckets. And by that I mean there's either a bucket managed by the application completely, where the lifecycle of the bucket is bound to the application, or there's another model, which means that this is a data set, or there's...
C
I would imagine a controller that monitors for deletion of Bucket objects and BucketContents, and so, as soon as all the Bucket objects pointing to a BucketContent disappear, that's the signal that whatever the deletion policy is for that BucketContent should be enforced — meaning, if it's auto-delete, delete it, right?
C
What I would like to see, in addition to this, is that in the BucketContent you could have a whitelist of namespaces that are allowed to access this BucketContent. And so, that way, you as the administrator can say only the namespaces I name are allowed to access this — if you really care, I think.
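The whitelist idea could look like an `allowedNamespaces` field on the BucketContent, checked when a Bucket in some namespace asks to bind. The field name and the "empty list means allow all" default below are assumptions for illustration, not something decided in the meeting.

```python
def may_bind(bucket_content, namespace):
    """Check a hypothetical allowedNamespaces whitelist on a BucketContent.
    An absent or empty list is treated here as 'allow all' -- an assumed
    default."""
    allowed = bucket_content.get("spec", {}).get("allowedNamespaces", [])
    return not allowed or namespace in allowed

bc = {"spec": {"allowedNamespaces": ["app-1", "app-2"]}}
print(may_bind(bc, "app-1"))             # True
print(may_bind(bc, "intruder"))          # False
print(may_bind({"spec": {}}, "anything"))  # True (no whitelist set)
```

B's follow-up below generalizes this from a namespace list to a bucket-policy-style list of (namespace, verb, object) permissions.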
B
Right. So with this we kind of need a bucket policy, in a similar fashion — but yeah, a list of namespaces, a list of permissions. And because object stores allow for granular targeting of data, with these you can specify, you know, object one and object two, and the policy for this would say, you know, this namespace, foobar, and fizzbuzz can read and write objects one and two; foobar is allowed to list all objects, for instance. John?
C
So, taking a step back, I do like the direction of this. This is kind of something I thought about earlier, but one of the drawbacks of this approach — which I want to run by everyone, to make sure everybody is okay with it — is: we said that we want to make the brownfield case, or the driverless case, easier, because it tends to be more common that a bucket already exists. And so, in order to make the brownfield case work, when you create a Bucket object, you need to have an identifier for a BucketContent to use.
B
We're abstracting that with bucket classes. I'm not an extensive user of object stores — I think you guys have probably got more expertise on this — but in my own experience, you're able to list all the buckets within the organization that you belong to. So if you're a user, and you know the GCS org, you can list and see all the buckets that are in there. You may only have access to a few of them.
D
I guess here you don't need the bucket classes again, or a separate BucketContent — unless you want more tactile control, right? If you want one namespace to be able to list, but you don't want another namespace to be able to list, and you want to control that, maybe through either a policy or a different set of permissions — then maybe, even though the Buckets have the same name in their respective namespaces, they'll have to bind to a different BucketContent, right? Because the administrator chose to control, say, no listing for this one.
F
Right, so one option is to represent names just as they are — so have no BucketContent at all, for example, and refer directly from the Bucket to the external bucket and the service identity. But the layer of mapping names between BucketContent and the external buckets allows the administrator a way to publish something without, you know, necessarily having to conform to how the cloud service is exposing it, right? So you can say, here's my logs bucket for this cluster.
F
So, in terms of separating things, the internal cluster mapping is actually a good thing, because it's a burden on somebody which we would like to automate and help with, but it is helpful if you want to assign things. So it's like a namespace for buckets, right — this cluster has an index of the buckets. We even suggested, without really thinking it through, that — like there's a service naming scheme based on svc.cluster.local — there would be, like, a bucket-name dot namespace dot...
F
...you know, the string "bucket," to represent them like services, so that bucket contents can even be published to DNS — their endpoint can be published to DNS, for example — so that, you know, it's like having a namespace of buckets that this cluster represents. And I'm sure that it's not really well designed for multiple tenants in this case, but we need to think through how we want to support tenants in clustered resources.
C
Yeah, going back to my comment about being able to list all the BucketContents, and whether that's an issue: I think an argument can be made that, if you are concerned about that, maybe the BucketContents that you expose within a single cluster need to be scoped such that everybody who's on that cluster should be allowed to see them. And if you have two tenants that you don't want to be able to see each other's bucket content, they should not be sharing a cluster.
D
...like a generic name, and you really don't know what's inside the PV unless someone gives the PV a descriptive name or something like that. So I was asking about, you know, the mapping between cluster-side bucket names and the actual bucket name, because it all comes down again to even the listing operation, right? So, where you say some people may be allowed to list and some people may not be — at least on the S3 standard, that's already taken care of by policy, right?
D
So that's why I was thinking that, for that case, you may have two BucketContents that expose different amounts of access to the bucket's contents, right? And if that's the case, that will actually mean that you have two BucketContents connected to the same bucket through different policies, but they will have to have different names, right? So then the BucketContent name becomes maybe not so much a commodity — not irrelevant now, right? Because, you know, the plan was that the name was going to be the bucket class plus something random, right? Yeah.
C
The problem with that, I think, is the brownfield case, right? Where you end up with — if you have a single bucket that you want to reuse multiple times, in multiple applications, you're going to need to go and create a BucketContent for each one of those, which becomes painful, because you have to rely on the cluster admin to create non-namespaced objects.
D
Because, I mean, we could make the assumption — and it will be a very strong assumption — trying to say that, okay, we're not going to concern ourselves with bucket listing; we expect listing of BucketContents to be, like, the default behavior for any BucketContent created by the driver. And then the BucketContent name could be something that, even if it's listed, doesn't expose the underlying bucket. Then the cloud-service bucket is exposed not to you, but through the BucketContent, right?
F
Let me create BucketContents immediately for all the buckets that my driver can access, right? And then I can reflect everything from that cloud service account into my cluster. Well, that's the extreme publishing way of looking at the buckets for a cluster, but I think — and then I can list everything; my cluster can do any listing that I want, and it's fully capable. And I can even imagine a way where I can manage permissions as to which namespaces can list which buckets, etcetera. I...
F
...think that, if we are going to tackle the listing problem here with this API, we might need an actual API to answer that, because by managing just entities, it will be very difficult to use RBAC or things like that to control visibility. I don't think there's a model to control visibility this way.
C
Even if you do that, I think there is — you know, you could have, using the same protocol, the same backend, a different logs bucket for each of your applications in different namespaces. And if that's the case, then you have collision again. So you really need a way to be able to, I don't know, encode the unique bucket ID or something into the bucket name. So...
F
So that's a use case where provisioning is mostly important, right? The namespaces would actually get most of the provisioning flow, and the administrator would get most of the — you know, not backup, but I mean archiving capabilities, and being able to, you know, save logs for some amount of time, or ingest them into another system for logging. Yeah.
C
I think we don't need to solve this problem for the admin-created BucketContent, right? It's up to the admin to create a bucket that doesn't collide; it's the admin's problem. The case that we have to worry about is the automatic provisioning case, where you create a Bucket object and it provisions a new BucketContent. What should the name of that new BucketContent be, ideally? I mean, we have two kind of competing concerns here.
C
One concern is we don't want to have collisions at the cluster scope, because that's obviously problematic, and the second concern is we want it to be discoverable, such that, if you have another namespace that wants to use this, they can easily discover it by listing all the BucketContent objects. So, how do we make the generated name both collision-proof as well as discoverable is, I think, the goal here. I...
F
...think it's a matter for the bucket class to define. Why? Somehow it's different use cases of the same mechanism. We can set one class to, you know, create completely generated suffixes for these, which means they are obfuscated and, you know, only the administrator can make sense of them in some aftermath. And on the other hand, if the bucket class says, I want to strictly map, let's say, the external bucket name to my BucketContent, then, you know, it would fail if the name is taken, or something like that, right?
B
That would be a space where we could use templating, similar to the way CSI uses secret-name templating. You can use templates, as defined in the parameter list in the bucket class, for how Bucket objects that come out of this class are named. There could be a default where you concatenate the bucket class name with the bucket name.
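The naming options raised here — an obfuscated generated suffix, strict mapping to the external bucket name that fails on collision, and the suggested default of concatenating class and bucket name — might be sketched as per-class strategies. The strategy names below are made up for illustration; they just label the three behaviors discussed.

```python
import uuid

def generate_content_name(strategy, class_name, bucket_name,
                          external_name=None, taken=()):
    """Generate a BucketContent name per a hypothetical bucket-class
    strategy. Strategies model the options discussed in the meeting:
    an obfuscated generated suffix, strict mapping to the external
    bucket name (failing on collision), and a default class-plus-bucket
    concatenation."""
    if strategy == "GeneratedSuffix":
        # Collision-proof but opaque: only the administrator can make
        # sense of these after the fact.
        return f"{class_name}-{uuid.uuid4().hex[:8]}"
    if strategy == "StrictExternalName":
        # Discoverable, but provisioning must fail if the name is taken.
        if external_name in taken:
            raise ValueError(f"name already taken: {external_name}")
        return external_name
    if strategy == "ClassPlusBucket":
        # The suggested default: concatenate class name and bucket name.
        return f"{class_name}-{bucket_name}"
    raise ValueError(f"unknown strategy: {strategy}")
```

The strategies trade off the two competing concerns C named: `GeneratedSuffix` maximizes collision-proofing, `StrictExternalName` maximizes discoverability, and the default sits in between.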
D
...solution, because that's also pushing it to the user: when do they pick the template, right? On CSI, they say, I want a drive with these capabilities and this storage class, and if it's available, the CSI driver will satisfy that requirement. And in this case, the driver for COSI could be the same, right? So, can it be satisfied? Yes — automatically. Else, we need administrator intervention to come and make that request satisfiable.
D
Yeah, so — but if we were going to entertain the idea of templates, that will now go into the discussion of how a pod will be requesting a bucket, right? Will it be requesting a bucket along with, say, a storage class, or will it be requesting a bucket by name and a class, and see if a BucketContent can figure that out? If that's the case, then the namespaced Bucket resource that we are defining here may not be needed anymore.
D
Yeah, but if we look at CSI, I like its example of templates, because if there's a StatefulSet with a template that cannot be satisfied, the resulting state is that it won't be scheduled, right? It will be stuck until someone takes a look at it — either you yourself update the definition to reflect a way that the storage request can be satisfied, or the administrator goes and satisfies it, right? They're requesting a special storage class, or more space, or something like that. So it could work the same way.
D
You know, say the template comes from the application, saying: I'm going to need a bucket with this class, and this is its name — is it available? If it's not available, then the administrator needs to come and say, okay, I need to create a BucketContent to satisfy this requirement. And then, now that I've done it once, anything requesting that same combo — bucket by name and class — will be addressed by the driver.
B
Yeah, I'm sorry, I'm trying to wrap my head around precisely what you're describing here. What threw me off is that you were saying that a pod requests a bucket, or requests that a bucket be created, which, to me, doesn't quite seem like — I guess the use case is no different from a user just needing a bucket by a specific name.
D
Yeah, I'm just trying to make the comparison with those CSI templates. Imagine a template where, if you're always requesting volumes with the standard storage class, they always get satisfied, because maybe that's the default of the cluster. Or, if someone says, I need a storage class called "SSD storage" and I need some other values for my volumes, then the pod becomes unschedulable, right? It cannot be scheduled, so an administrator needs to come and say, well, to satisfy this requirement, either...
D
...I change the amount of space, or I create a storage class — or maybe I change the pod to match the storage classes that exist on this cluster. So that could be the same for COSI, right? The COSI strategy, where the pod says, I need access to one bucket, by name, with this storage class, where the storage class will say whether it's pre-provisioned, or the protocol name. And now, if that name is already present in the cluster and the driver can satisfy it, then a BucketContent could be created to satisfy that. But, and it will be...
G
But in that example, we generally do just a claim name, right? So you're suggesting something similar to specifying the actual claim name under the persistent volume claim stanza of the pod template, to directly ask for that — but instead specifying a class of buckets that could be satisfied, that may already exist, to fulfill that request? Or do you mean a specific bucket name? I...
D
...mean similar, for buckets, but I guess that could also work with maybe just one. I pasted the volume claim template in the Zoom chat, so if you guys want to look at it, you can see here where a pod is requesting something that they would like to have: a template for a volume, with metadata...
G
What you're getting at is that the workflow should be the same: we should be able to either specify a storage class, by which our application is served from a pool, or we should be able to specify a specific bucket name in our pod template, to reference a specific bucket. So the workflow should be the same as we would expect with a persistent volume — correct?
D
Then it's the responsibility of the application, right? And if the application cannot be satisfied, it also signals that it needs intervention, right? You often get some StatefulSets or deployments that may be requesting some storage class that is not valid on this system, and then you need to go and update the storage class to match the ones that are on your cluster.
F
It's a similar mechanism to when you mount a ConfigMap and your pod waits until the bucket request is provisioned. So, in a similar way to a PVC template, where the pod is waiting for that volume to be provisioned and, you know, bound, etc. — so that was doing the same sort of, sorry, timeline synchronization for things, so that the pod will wait for that, if that's what you intended to do. It's less direct than a volume, but it's also possible with the same tools. But I agree.
B
Yeah, and as Guy mentioned, in the previous model — and it's kind of bled into the current design; we're not making specific provisions for it yet — with the OBC/OB model, we had ConfigMaps that would be deterministically named based off what the bucket name would be. So you could define your pod to say, I need this ConfigMap; the ConfigMap would not exist until the storage was provisioned or available, and then the pod can run, I think.
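The OBC/OB-style gating B describes — a deterministically named ConfigMap that only appears once storage is provisioned, which in turn gates pod startup — can be sketched in a few lines. The exact naming convention shown is an assumption; the point is only that the name is derivable from the bucket name alone, ahead of provisioning.

```python
def configmap_name_for(bucket_name):
    """Deterministic ConfigMap name derived from the bucket name, in the
    spirit of the OBC/OB model described above (exact convention assumed)."""
    return f"bucket-{bucket_name}"

def pod_can_start(bucket_name, existing_configmaps):
    """The pod references the ConfigMap by its deterministic name and is
    held back by the kubelet until provisioning has created it."""
    return configmap_name_for(bucket_name) in existing_configmaps

print(pod_can_start("logs", set()))            # False: not provisioned yet
print(pod_can_start("logs", {"bucket-logs"}))  # True: provisioning done
```

Because the name is deterministic, the pod spec can be written before the storage exists — the missing ConfigMap, not any bucket-aware scheduling logic, is what delays startup.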
C
We talked about kind of sketching an end goal where, eventually, we want Bucket and BucketContent and all this stuff to be first-class in Kubernetes, so you can have pods hold off until any references to these objects are complete. And in the meantime, kind of for a prototype, we were talking about potentially doing a CSI adapter layer, which would effectively give you the same behavior for the pod.
D
I mean, if we come up with a COSI bucket templating standard, that could also say, you know, I'm requesting this bucket class, and I'm expecting these values to be placed under these environment variables right away, right? So that will reduce the need for knowing the ConfigMap ahead of time, and the COSI runtime will be doing the mappings that you're requesting. I...
C
...think that — let's talk about that when we kind of have a fix on the rest of the API, about how we're going to integrate it with pods. I think we're in agreement about the end goal here, which is to have pods be able to hold off until the buckets are ready to be consumed, but the details of exactly how we do that — let's discuss that after we kind of have fixed the rest of the API: is there one-to-many buckets,
C
is there, you know, one-to-one, all the rest of this. I think the big high-level takeaway from today's meeting is that everybody on the call seems to be in consensus about a one-to-many — or many-to-one — mapping from Bucket to BucketContent. There is some question about the generated name for BucketContent.
C
What is that going to look like? I think I had a proposal around a strategy-based approach, where the bucket class can specify different strategies for generating that name. I think we need to flesh that out, and I'd also like to run this by Andrew and Sean and make sure they're okay with this new approach. Hopefully they can attend the next meeting.