Description
Kubernetes Storage Special-Interest-Group (SIG) Object Bucket API Review Meeting - 21 January 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A
So, good morning, everyone. First I want to go over where we are in terms of our milestones, and then quickly jump into the technical discussion.
A
So for the last week or so, we started discussing the status of the demo that we promised to show. Other than that, the other priority that we have is the API review.
A
So I'll first discuss the progress on these two tasks and we'll go from there. For the demo, we said we'd have development done for the basic use case: creating a bucket, granting access to that bucket to a pod, and provisioning that bucket into a pod. That was supposed to be the development milestone.
A
So I'm happy to say we have that in place. There was one thing that was pending as of last week, which was having a parameters field inside the gRPC spec for things other than the protocol. Earlier in the gRPC spec, in the bucket create request, we had the bucket name, we had the access mode, and a generic map&lt;string, string&gt; called bucket context.
A
This generic field was supposed to hold both protocol-specific information and driver-specific information. However, that can easily lead to conflicts and undefined behavior, and it goes against our idea of having the protocol be a strongly typed structure. So we decided that we will have the protocol as a separate typed field in the gRPC CreateBucket request structure, and a separate generic map&lt;string, string&gt; parameters field for driver-specific parameters. Both of them would come from the bucket class.
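The request shape being described can be sketched as follows. This is an illustrative Python mock-up, not the actual cosi.proto definitions: the real messages are protobuf, and the field and type names here are assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, Union

@dataclass
class S3Protocol:
    """Strongly typed, protocol-specific block (hypothetical fields)."""
    region: str
    signature_version: str

@dataclass
class AzureBlobProtocol:
    storage_account: str

@dataclass
class CreateBucketRequest:
    bucket_name: str
    # exactly one protocol variant (a typed field / oneof in the real proto)
    protocol: Union[S3Protocol, AzureBlobProtocol]
    # opaque driver-specific parameters, copied from the BucketClass
    parameters: Dict[str, str] = field(default_factory=dict)

req = CreateBucketRequest(
    bucket_name="logs",
    protocol=S3Protocol(region="us-east-1", signature_version="v4"),
    parameters={"replicas": "3"},  # opaque to the CO, validated by the driver
)
```

The point of the split: the CO can reason about the typed protocol block, while the parameters map stays opaque and is passed through to the driver untouched.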
B
Saad wants an identity interface, just like CSI's, for the provisioner GetInfo — to split that out, right?
A
Yeah — I/O-related, well, you know, creating- and provisioning-related calls go into the provisioner interface, right, and the identity of the driver itself is a separate concern, right. This should be easy to implement. And: you should treat these as opaque; the receiver is responsible for parsing and validating.
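The identity/provisioner split being discussed mirrors CSI's Identity vs. Controller services. A minimal Python sketch, assuming hypothetical method and driver names (the real services are gRPC services defined in the spec):

```python
class IdentityServer:
    """Answers 'who are you?' — name and version of the driver itself."""

    def driver_get_info(self):
        # hypothetical driver name/version, not a real registered driver
        return {"name": "sample.cosi.driver", "version": "v0.1.0"}

class ProvisionerServer:
    """Handles the provisioning-related calls: create/delete buckets,
    grant/revoke access. Identity concerns live elsewhere."""

    def create_bucket(self, request):
        # request.parameters is opaque here; the driver validates it
        ...
```

Splitting the two keeps probe/info plumbing out of the provisioning path, which is exactly the separation CSI made.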
B
Well, that would change the spec.md also, so we have to do it that way. And then there is another concern: he says that the proto should be generated rather than... yeah. That's easy — we can follow what CSI is doing in that regard too. So those are the two major items I take away from this. I see, so he probably...
A
...just grabs from the spec.md — and it's a good idea to do that, because that will ensure that the spec.md and this are always in sync. It's a really good idea. "I think, expand the comment for bucket name to explain purpose" — okay, fair enough. Next concern, another comment: "what is S3 signature version? How's it different from the signing version below?" Yep. Next question: "wrap these in a oneof?" There's a problem with that... oh yeah, we can do that now, since we are defining it twice. "Shouldn't this return...
A
...a Bucket struct, which at a minimum should contain string bucket_id." Well: "bucket name is an identifier; it should not be input anywhere other than CreateBucket. The CreateBucket call should return a string bucket_id, and subsequent calls should only operate on string bucket_id, not string bucket_name." I mean, the bucket name is unique, so the bucket name is the bucket ID. There is no — like, in S3 there's no such thing as a bucket ID.
A
The name is globally unique. The name is globally unique, so there is no need to have a separate ID on any provider, because they have to be DNS-addressable. So...
D
But there's a question of statefulness. Even if the name is unique — if the plugin has to mangle it somehow, or, "mangle" is the wrong word — if the plugin has to generate another ID internally and then store the mapping, and you only come back later with just the name, then the plugin has to have access to that mapping to basically reverse-map it back to find the ID that it chose, and that puts a statefulness requirement on the plugin: to basically remember its mapping.
B
Well, just to understand: do we have to consider that for the sake of topology by any means — like, you know...?
A
Yeah. If they're two different Kubernetes clusters, again, the unique bucket name is what was used to address it. If...
E
If they're sharing the same bucket, then that should still be the same, isn't it? I mean, this is like the name in the real object-storage back-end, right? So...
A
Like in Google Cloud, Azure, and S3 — you know, Alibaba Cloud is the one that I haven't used — in all three, bucket names are globally unique.
D
I think you have to be very careful about the claim that it's globally unique, because how do you enforce that it's really globally unique? Someone can very easily set up a clone environment where all the DNS names are the same, all the IP addresses are the same — everything is the same — and then generate two different buckets with the same name. They won't be the same, but they'll appear to be the same in every respect that seems to matter.
A
Yeah, so let's talk about that. I don't think there's any reason that's completely unsolvable — that is a unique situation. Let's say that extreme situation is being built up. The easiest thing one can do is have two separate drivers for the two environments: one driver would know the unique names it generates and would talk to the local cluster, and the other one would talk to the global one — which is, possibly, you know, local.
D
I'm suggesting that if you have two environments like this, and there are two instances of the driver, and they're pointed at the same actual object store, they could try to create buckets that are not the same but have the same name, because there's no way to prevent them from being the same. Let me put it this way: an application...
E
I think, to me, it should be sufficient just to use the bucket name.
A
I think I know where Saad is coming from: he's seen this problem in CSI and he's trying to avoid it. But I think if we just let him know that in object storage that is not an issue, then it should be fine. Now, let us say that it does become an issue: it is not at all a big deal to add a new field, but removing a field is impossible.
A
We should sort this out — agreed, agreed, yeah. But, you know, we'll know more as we also start implementing. I think it is a little early to come up with a use case for a design choice that was made first. It's like we have a design choice first and then we're coming up with the use case — that's the wrong way to do it. We should do it the other way, where the use case is very clear and, as a response to that, we add such a field. What do you think?
D
So I just wanted to point out that if we do go down the path of "the name is the identifier", then that puts an onus on the CO to ensure that that name is not only unique within that CO, but unique across anything else that could be using the same object storage — which is just a bigger requirement.
D
Then I would create a volume, but it would have a unique ID associated with it, and then later, when I go back to refer to that volume, I would use the unique ID. And then, if anyone else creates a volume called test, they would get — they would get the same volume... unless — no, they wouldn't.
D
Well, no — what would happen is, if the sidecar was just reusing the PVC name and another user created the same volume, the CSI driver would interpret that as a retry of an old create call, right? Because it would say: oh, I already have "test", you know — success. Or, if the parameters didn't match, it would say, you know, error: conflict. But if they did match, it would just say, yeah, I already created that — which is how idempotency works, right? So...
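The retry semantics described here can be sketched with a toy in-memory store. This is a minimal illustration of the idempotency contract, assuming hypothetical names, not any real driver code: a repeated create with the same name and parameters is treated as a retry and succeeds; the same name with different parameters is a conflict.

```python
class AlreadyExistsConflict(Exception):
    """Raised when a name is reused with different parameters."""

class BucketStore:
    def __init__(self):
        self._buckets = {}  # name -> parameters

    def create_bucket(self, name, parameters):
        if name in self._buckets:
            if self._buckets[name] == parameters:
                return name  # treated as a retry: idempotent success
            raise AlreadyExistsConflict(name)
        self._buckets[name] = parameters
        return name

store = BucketStore()
store.create_bucket("test", {"class": "fast"})
store.create_bucket("test", {"class": "fast"})  # retry: succeeds quietly
```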
A
It's the same... no, no, it's not true — you have to ensure that the IDs are global. You know, I don't think we're solving a problem; we're just adding that complexity onto ourselves.
D
Couldn't we say that for names too? I mean — but if there are multiple instances of the CSI driver talking to a common storage back-end, different instances of the CSI driver can actually get the same name, and, because it's a different instance, they know it's a different request and they can map that to a different volume.
A
The name and the ID need to have similar properties. I mean, I don't think, again, there is enough justification to add a new field, given...
A
Fair enough, fair enough — and I trust your gut instinct as well, but I want a stronger argument. I'm open to listening.
A
So let's have this conversation with Saad as well. I would ideally like to have it with everyone present, so let us try to have it on Monday. "Does deletion really require opaque parameters?" Good point. Srini, or Rob — are you on the call?
B
Rob is not here — but, I don't know much about it, but there could be certain criteria, when you do a delete, that you want to send to the back end. Like, you know, archive, or...
B
I'm guessing — I mean, probably.
A
Good point. Yeah — that's something that one of our customers is doing, where if no objects are accessed within seven days or so — or, whenever a delete is called, the thing is, you know, left alone for seven days before it actually deletes — and that's configurable.
A
I mean, if anything, I think, you know, even if there is something that is decided on the fly, rather than something that can be stored on the bucket — even in that case, I don't think it should be opaque.
A
So in that case, I think we should remove the parameters from delete. What do you think, Srini and others?

D
Well, I was going to point out that in the CSI spec, one of the things that's not so obvious is that deletion is especially complicated, because it may be called in a situation where the CO is, like, recovering from an error and doesn't have any state — the CO is only interested in cleaning up at that point.
A
Yeah, yeah, it is interesting — yeah, never mind, I don't think it's worth getting into that right now anyway. So I see what you mean, and yeah, we will be light on the deletion request. Here I wrote that, you know, we don't have parameters here — we don't have a field at all here — rather, just the unique identifier for the bucket, which will be either the name or the ID, based on what we decide.
A
Setting some bucket lifecycle rules — do you set them up while creating, or after creating? Well, always after creating. But, you know, as a part of COSI we could send both at once, and we can ask the CO to do one after the other.
A
However, during delete there isn't really anything you can pass in. I know this in the case of S3 and MinIO; I'll have to look into it for Azure and GCS. But, you know, just thinking logically, I think there are no fields that you can really pass through for delete.
A
Yeah, I think that's fair. So yeah, you know, I want to move this along fast. So who do we have here? So — Chris has been, you know, looking into different clouds for us any time some parameter or some call has to be looked at across different providers.
A
Okay, so the conversation was around passing parameters during bucket deletion.
A
So if you could look into that for the three major cloud providers — that is, AWS, Google Cloud, and Azure — and, you know, let us know what it is, that would be good. We just need to make sure whether delete requires extra parameters or not.
A
All right, so, continuing: since we had made that pull request, I was considering development — sorry — in "more or less done" status. However, based on the responses from Saad, I would put development still in progress. So the next step is deployment — full-stack deployment.
A
Kustomize doesn't support canonical URLs — vanity URLs, things like that; it only supports using full GitHub paths. So you just specify the name of the project, and kubectl create -k with the name of the project should deploy it for you. We've already set up the kustomize files the right way, and the RBAC rules for it, and it should just work.
A
So I would consider full-stack deployment in a good state, and for the sake of this demo I would consider it done. Testing: we have unit tests and CI in place; however, e2e tests are still not in place. Also, I think manual testing is also required.
A
So testing is in progress. Before the demo, I think it's reasonable if we perform manual e2e testing, because it looks like it is not completely in our control to get the actual e2e tests in place.
A
It looks like we'll need to get some hardware allocated for this purpose, and the amount of time that takes is a little long. So I want to bring up that one change and then say that, you know, in the case of e2e tests, we want to keep it manual for now.
A
Documentation: we have getting-started and spec guides. But when I was looking through the documentation that we had in place — hold on — in the getting-started guide... let's talk about that first. The getting-started guide we have in place for pretty much all of the repositories, but it is made with HTML tags in it, in a markdown file, so the formatting has to be changed. That's the one step that's required — so some minor changes are required in the getting-started guide and the spec guide. There's a pull request out there for it, to be reviewed and merged in the spec repository.
A
Am I right, Srini, about that? I believe you were working on the spec — okay.
A
All right. So, has someone responded to it? Are you waiting for someone?
B
No, I think I'm good. I asked Xing to review it, and that's like we discussed, right? I have done half of it — there is a lot more to do in that respect — but that part Jeff has reviewed, and I addressed all his comments, so I just have to...
A
Okay. So, Srini, given that Saad wanted us to generate the cosi.proto from the spec.md, you know, the spec would still need some changes, right, before Xing reviews?
B
The first sections seem to be pretty solid. I'm just following the CSI kind of structure, in terms of sections in the document. So, okay, I'll add that.
A
Yeah — so Xing, I think, is saying we should go ahead and merge this if the docs themselves look good, and then he'll do a second PR that'll make the spec be something that we can generate from. Right, Srini? Yeah, yeah.
A
Also, I think it's good if, you know, we're reviewing smaller chunks of code.
A
Okay — so, any questions so far?
A
Okay, all right: let's go back into deletion and finalizers. We had a good conversation about this last Friday. I wanna — I wanna first bring up the context we had last week; and then some new findings were made about this, so I want to bring that up as well. So...
A
Oh — I'll actually pause you for a second, Ben. Do you want to ask your question now, or do you want to ask after we go through finalizers?
D
Well, yeah, this is kind of brief. So we are interested in writing a plug-in, like, soon, and I wanted to get a sense of, like, how much of the... First of all, is there a sample plug-in that we can, like, copy and paste from? Or — yeah.
A
I think you can get started right away. Obviously, most of these things are in development. The reason I still say that you should get started right away is: one, you'll get exposed to the code — I want more people to be exposed to the code.
D
And if they — you know, if someone is willing to do it, like, where do we go look for, like, the actual bits?
A
Yeah — so the four repositories that I had shown here, this is where you go look for the actual bits. I am right in the process of writing a doc that details out how to write a driver. It's still a work in progress, but...
A
A hello-world driver? Yeah, we do have a hello-world driver. Yes — okay, yeah, cool — one that actually talks to a real MinIO back-end and provisions buckets and grants access.
A
Yeah, I'll even quickly show you — are you on your phone, or can you see your...? I don't...
A
I'm just showing you the repository structure, and where we have the sample driver and everything. Can you see it now? Can you see the video? Can you see...?
A
Anyway — so this is the sidecar repository. Yeah, this is the HTML that I was talking about: we shouldn't have HTML here; we should just use markdown templating. So under cmd we have two separate ones: the sidecar and the sample driver.
A
The sample driver is the server that the provisioner sidecar accesses. The client for the sample driver has to, you know, have ProvisionerGetInfo satisfied, CreateBucket, DeleteBucket, GrantBucketAccess, and RevokeBucketAccess.
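The surface a driver has to satisfy, per the list above, can be sketched as follows. This is an illustrative Python stand-in with hypothetical signatures and return values — the real driver implements these as gRPC handlers in Go against the COSI proto:

```python
class SampleDriver:
    """Toy stand-in for the five calls a driver must serve."""

    def provisioner_get_info(self):
        # identity-style call; may move to a separate Identity service
        return {"provisioner": "sample.cosi.driver"}

    def create_bucket(self, name, protocol, parameters):
        # parameters is the opaque driver-specific map from the BucketClass
        return {"bucket": name}

    def delete_bucket(self, name):
        return {}

    def grant_bucket_access(self, bucket, account):
        # hypothetical credential payload
        return {"credentials": f"token-for-{account}"}

    def revoke_bucket_access(self, bucket, account):
        return {}
```

A real driver is mostly this surface plus back-end calls, which is consistent with the "whole thing is about 140 lines" remark below.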
D
So the whole thing is 140 lines — is that what you're saying?
A
That's awesome. Yeah, yeah — and, you know, we've tested it; it works. But, you know, we're making changes as we go along too, like we're changing the spec: if we move the GetInfo to Identity, it's going to look a little different. So stuff like that has to do with...
A
Right, right, right, right — but regardless, this is a good place to get started.
A
The reason I say that is, I want to encourage more people to start using the system — using what we have — because only then do we get feedback on what it's like. So, yeah, we're in good shape for people to start working with it.
A
Yeah, yeah, okay. So let me remove this again — the Zoom tab at the top prevents me from changing, like, tabs on my browser window, so I have to move it each time I have to do something with the tabs on the browser.
A
Anyway — you've got to learn your browser hotkeys. Hotkeys to change the... The thing is, I've got Emacs keys enabled, so that conflicts with this — like, Emacs bindings, you know: I can do a Ctrl-A, Ctrl-E, Ctrl-Space, copy, and all that. Anyway — if you're not familiar with Emacs, that might not make sense.
A
Okay, so we have 15 minutes — might not be enough to go over... Actually, it might be enough to go over the finalizers. So, the proposal I had last week was: we would have one finalizer per bucket access on a bucket object.
A
The question is if this is a good idea — or, you know, the alternate choice, which is to have a single finalizer for all the pods that were using a bucket access, or a single finalizer for all the bucket accesses that were using a bucket — and how it might work. The only problem with that approach, that we discussed, was that it's harder to implement later on.
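The "one finalizer per bucket access" proposal can be sketched on a plain dict standing in for a Bucket object. The finalizer naming scheme here is hypothetical; the real objects are Kubernetes resources with `metadata.finalizers`:

```python
def _finalizer(access_name):
    # hypothetical finalizer name format, one per BucketAccess
    return f"cosi.io/bucket-access-{access_name}"

def add_access_finalizer(bucket, access_name):
    fins = bucket["metadata"]["finalizers"]
    if _finalizer(access_name) not in fins:
        fins.append(_finalizer(access_name))

def remove_access_finalizer(bucket, access_name):
    fins = bucket["metadata"]["finalizers"]
    if _finalizer(access_name) in fins:
        fins.remove(_finalizer(access_name))

def deletable(bucket):
    # Kubernetes only garbage-collects once the finalizer list is empty
    return not bucket["metadata"]["finalizers"]

bucket = {"metadata": {"finalizers": []}}
add_access_finalizer(bucket, "ba-1")
add_access_finalizer(bucket, "ba-2")
remove_access_finalizer(bucket, "ba-1")  # ba-2 still pins the bucket
```

The appeal is that no aggregation step is needed: each BucketAccess removes exactly its own finalizer, and emptiness of the list is the deletion signal.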
A
When I was thinking about it — and I think Srini mentioned it — the finalizer logic for the bucket access needs to know which pods are there, because it'll have to list the pods, you know, if we went with the second approach — the other approach, not the one that's on the screen right now, the sidecar one. In that case, the provisioner sidecar would have to list all the pods and then check to make sure that none of them are using this bucket access.
A
The issue is the access control: the provisioner sidecar runs in a namespace meant for the sidecar, and it is, you know, not supposed to have access to anything — any information about the actual workloads, or pods, that are using the access objects, the access tokens. That's where...
A
...it needs that access. Well — no: if we have the CSI adapter simply do it, it doesn't need to know even what pod it is. It simply needs to update the bucket access object that it is currently working with.
A
It can be a dumb CSI adapter, with no intelligence about what the driver is, or what namespace it is, and none of that. It just needs to go and update the bucket access object for whatever access request it gets — that's enough.
D
Yeah — like, I think, on the CSI side there's a separate Kubernetes object called a VolumeAttachment, which represents the linkage between the pod and the volume. And so, in order to deal with attachments, you only ever have to look at VolumeAttachment objects — you don't have to look at pods. And so I guess it's less of a security risk, right, to allow a controller to see the volume attachments and not the pods. But here we don't have any sort of intermediating object...
D
That
represents
the
attachment
that
maybe
that's,
maybe
that's
an
omission
that
should
be
fixed,
like
maybe
we
maybe
there's
a
security
benefit
to
having
a
a
proper
kubernetes
object.
That
represents
an
attachment
from
a
pod
to
a
bucket,
and
then
you
can
just
wait
till
all
those
disappear
to
say.
Oh
yeah,.
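The intermediating object being proposed, by analogy with CSI's VolumeAttachment, might look like this. "BucketAttachment" is entirely hypothetical here — no such COSI object exists; this just illustrates why a controller could then watch attachments instead of pods:

```python
from dataclasses import dataclass

@dataclass
class BucketAttachment:
    """Hypothetical record of a pod -> BucketAccess linkage."""
    pod: str            # namespace/name of the consuming pod
    bucket_access: str  # the BucketAccess being consumed

attachments = [
    BucketAttachment(pod="ns1/web-0", bucket_access="ba-logs"),
    BucketAttachment(pod="ns1/web-1", bucket_access="ba-logs"),
]

def access_in_use(name):
    # the controller only inspects attachments, never pods,
    # which narrows the RBAC it needs
    return any(a.bucket_access == name for a in attachments)
```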
A
Object,
it
is,
it
is,
however,
the
the
a
part
or
a
csi
driver
that
wants
to
access.
The
part
still
gets
information
about
the
part
where
it
can
go,
and
you
know
see
the
name,
space
and
name
of
the
part,
but
that
comes.
D
What I was getting at is: the scheme that I'm proposing — which is, you know, don't allow any new attachments on a deleting... sorry, on a deleting bucket access, and then don't delete the bucket access until the pods are gone — would involve some controller, somewhere, having a watch on the pods, so that you could determine when there were no pods accessing that particular bucket access. And then you could...
A
Attachments — I mean, the CSI driver does not, by default, have access to the VolumeAttachment itself.
A
You know, this is a CSI driver that we use, and I don't know if we use Volume... I mean, it works, so I'm just checking if we have VolumeAttachments. We have VolumeSnapshots, StorageClasses, PersistentVolumes... yes, I know it's CSI — oh yeah, there we go. Yeah, we have it, you're right. Yeah, it's necessary.
A
Yeah — I still think, you know, someone has to do the aggregation, to say that for this one finalizer, every workload that was using it is now done. In our case, you don't even need the aggregator.
B
The CSI adapter — is it going to be a long-term solution, or to depend on volume attachments? That's my problem, basically. We might have native support; if that is the case, and if that goes away, we will not have this option, right? We...
D
Well, no — because there's this weird upgrade step, where you have existing workloads and existing attachments. When you make the upgrade from the old version of the COSI driver to the new version of the COSI driver, the new version has to be backwards compatible with all the existing stuff that the old one created. It can be extraordinarily painful to migrate forward, right? So you don't want to create a situation that's hard to build on top of. — I absolutely agree.
D
So if we — I mean, let's imagine a thousand pods sharing a bucket, right? That's... that's...
A
Like a deployment, or a daemonset, or a stateful set, or whatever — unless that's the case, they're not gonna have a thousand finalizers on it.
A
Right — the thousand pods. What I'm trying to say is, if the thousand pods all belong to the same bucket access, then that bucket access would have a thousand finalizers. But...
A
If you're having, say, two different deployments for the same bucket, you get two different bucket accesses, and you're not looking at a thousand in this case — you're looking at 500 instead.
A
Yeah — for a different... that's why I call it a pod manager. So, say you have a deployment and a stateful set: you'd want them to be using two different bucket accesses for the same bucket. So I — I don't...
A
I'd have to check that limit again, but yeah, that would be the one limit. I don't think — again, I don't think there's a hard limit on how many finalizers you can have, and again, I'll have to verify that. I think it's 2 MB — sorry, sorry — yeah, let's actually check it out: the etcd size limit. Sorry, not — oh no, not the maximum database size; the size of an entry: 1.5 MB. The maximum size in a request is 1.5 MB, yeah, yeah.
D
Well, the problem with updates is that they're not very backwards compatible, version-wise: because if you add a new field to a structure, and then someone uses an update to manipulate that structure — like, and if — if — if one of...
E
I'm saying that, actually, merge is — I mean, the patch is better than the update, yeah.
A
...very careful. I'll actually sit with you, and let's actually discuss that as a part of the next meeting, then. I want to hear how — how you have done it, or what your solution is.
D
Okay — I mean, I just use JSON patches with the test element in them. I see... that seems to — well, I don't know; I haven't proved to myself that it is race-condition proof. It's less bad than the default behavior, let me put it that way.
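The technique D describes — a JSON Patch (RFC 6902) whose "test" op guards a "remove" — can be sketched with a tiny hand-rolled patch applier. The object shape is illustrative; in a real cluster the API server evaluates the patch atomically, which is what makes the "test" op act like a compare-and-swap:

```python
def apply_patch(doc, patch):
    """Minimal RFC 6902-style applier supporting only 'test' and 'remove'."""
    for op in patch:
        parts = [p for p in op["path"].split("/") if p]
        parent = doc
        for p in parts[:-1]:
            parent = parent[int(p)] if isinstance(parent, list) else parent[p]
        key = parts[-1]
        if op["op"] == "test":
            current = parent[int(key)] if isinstance(parent, list) else parent[key]
            if current != op["value"]:
                # whole patch is rejected: someone changed the object under us
                raise ValueError("test failed: concurrent modification")
        elif op["op"] == "remove":
            if isinstance(parent, list):
                parent.pop(int(key))
            else:
                del parent[key]

obj = {"metadata": {"finalizers": ["cosi.io/ba-1", "cosi.io/ba-2"]}}
patch = [
    # only remove index 0 if it is still the finalizer we saw
    {"op": "test", "path": "/metadata/finalizers/0", "value": "cosi.io/ba-1"},
    {"op": "remove", "path": "/metadata/finalizers/0"},
]
apply_patch(obj, patch)
```

As D notes, this is less bad than a blind update (which can silently clobber fields added by a newer version), though it is not a full proof against races.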
A
I see — yeah. Maybe next week, during our call — if you could, if you want to, prepare for it in any way, that would be cool; but if not, I'll start the discussion and we'll go from there. Okay.
A
All right — so we're actually out of time. I think this was very productive today. We will follow up on the changes to the spec PR — the comments to the spec PR — and, yeah, we can go from there. Yeah.
A
Yeah — let's make sure we prioritize that, then. That's good. Thanks, awesome. All right — so, yeah, I think the plan is clear: next week we'll discuss the finalizer patching logic, and before that we'll try and have all the spec changes in place, including the KEP.