Description
Kubernetes Storage Special-Interest-Group (SIG) Object Bucket API Review Meeting - 14 January 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A
So today I want to bring this up. Let me start by sharing my screen.
A
Yeah, okay. So after the new year started, we've been focusing on getting the demo out, and I want to start by going back and taking a look at our priorities right now. The first milestone that we had planned for was the demo.
B
Hey, I think you should get the API review started early, because the deadline for the KEP merge is February 9th.
A
Got it. So we need to make the API proposal in the KEP itself, right?
B
I think for the KEP, just update whatever you need to update, and then you need to add a section for the production readiness review. I think you didn't fill that out last time.
B
So this time you need to fill that out, then ping someone on the production readiness channel to review it, and then also ping the API reviewer to start reviewing.
A
Got it, okay. Makes sense, because we do have consensus on how the APIs should look, so I guess we can get started soon. Jeff is not here today, but Jeff was taking care of changes to the KEP, so I'll follow up with him. And Srini is here; Srini, if you're taking notes, this is an important thing, I think.
D
So yeah, there are a couple of things in the new process. One is the production readiness review. The production readiness review is not required until later stages, so it's not required for alpha; you should be fine there. Second, but there...
D
I think the big change this cycle was that you need to get a production readiness reviewer to sign off on your KEP, and that review, as far as I understand, only happens either at the beta or GA stage. It doesn't happen at the alpha stage, so I don't think you need to get a sign-off from a production readiness reviewer. Like Xing said, you might still have questions that you need to answer in the KEP, but I don't believe that you need a sign-off from a production readiness reviewer.
C
Sounds good, okay. And then, Srini, do you know who to reach out to at this point? I heard only Saad or Michelle should be able to add to that spreadsheet.
B
I think you also need to have an API reviewer, right? So, Saad, I want you to comment on that. I think they just need to add to the... is that it?
D
Yeah, that says we need an API review for this. And yeah, just look at... there's, I think, a set of commands. I forget what it might be, but basically you put one of those slash commands as a comment and it should add that label. And if not, just reach out to me or Xing, and we can help with that.
A
Okay, sounds good. So there are some questions about the API that we want to get clarified today, and if everything looks good, we can get the API review process started today itself. All right, so for the demo, we set our own requirements.
A
We want to show a demo, but when we show it, it's not going to be a smoke-and-mirrors type of demo. It's actually going to be what we have fully working at that point in time. The requirements we specified were that we would create a bucket, grant access to that bucket (that is, we'll mint a credential for it), and we will provision that bucket into a pod.
A
So this is what we had set as the demo milestone. Along with that, we said it's going to be done in such a way that if anyone wants to get started with it, you would have deployment files set up in a proper manner, there would be testing that would have been performed on the entire code base, and there would be documentation for people to follow and understand...
A
...what's going on. And when I say testing and documentation, I don't mean full-fledged 100% test coverage or detailed documentation, but getting-started documentation, use-case documentation, and testing for the most important code flows: unit tests, and CI and e2e tests for the full-stack deployment.
A
So I want to talk about development first and then we'll go into the other three parts. In terms of development, this last week we have been putting together the three components and testing them out, and we actually have create bucket working. However, even though it is working, we did reach a point where we had to clarify a little bit about the API, so I'm going to get into it now.
A
So this is a slide copied from the meeting on December 3rd, and it reflects the result of the discussion we had about how protocol, defined here, and parameters, defined here, would be represented in the API. Our conclusion after that meeting was that the protocol field would be... okay, so this structure is wrong, actually: there should be an S3 substructure under which these three fields would come. Anyway.
A
Now, this decision to have this as a structured field, a strongly typed structure, allows us to strictly control the evolution of this API and prevent different vendors or different implementations of a particular protocol from adding their own fields; S3 kind of stops being S3 at that point, because everyone has their own field...
A
...if we allowed that. So the workflow in this kind of structure would be: there would be a BucketClass created by the admin, with the necessary parameters filled in inside of the protocol, and there would be another parameters field for driver-specific parameters. The BucketRequest would reference that BucketClass; when the BucketRequest is created by the user, the controller would go ahead, read these two, and come up with a Bucket object.
A
So BucketContext, we originally said, would include the fields that are needed to provision the bucket, and we had defined it as a map[string]string.
A
So my first question is: given that we've created a strongly typed field in the API...
D
Yeah, I think this goes back to that question of how we wanted to treat protocol, right? And it sounds like we agreed protocol should be a first-class field. If we go down that path, then on the gRPC side I think it would also make sense for it to be a first-class field as well. Exactly, yeah.
A
That's what I was hoping we would end up with. That way, on the wire it can still be a map[string]string; however, on the driver's side, they should be working with structured protocol objects.
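To make that round trip concrete, here is a minimal Go sketch. The field names (endpoint, bucketName, region) are illustrative assumptions, not the actual COSI spec: the structured protocol is flattened into the map[string]string that travels on the wire, and the driver decodes it back into the typed object.

```go
package main

import "fmt"

// S3Protocol is a stand-in for the strongly typed protocol structure;
// the field names here are illustrative only.
type S3Protocol struct {
	Endpoint   string
	BucketName string
	Region     string
}

// ToWire flattens the typed protocol into the map[string]string that
// travels over the gRPC wire.
func (p S3Protocol) ToWire() map[string]string {
	return map[string]string{
		"endpoint":   p.Endpoint,
		"bucketName": p.BucketName,
		"region":     p.Region,
	}
}

// FromWire is what the driver side would do: decode the wire map back
// into the structured protocol object before working with it.
func FromWire(m map[string]string) S3Protocol {
	return S3Protocol{
		Endpoint:   m["endpoint"],
		BucketName: m["bucketName"],
		Region:     m["region"],
	}
}

func main() {
	wire := S3Protocol{Endpoint: "s3.example.com", BucketName: "demo", Region: "us-east-1"}.ToWire()
	fmt.Printf("%+v\n", FromWire(wire))
}
```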
A
Yeah, okay, so that's good then. It's good that we're going through a very careful process of adding anything to the API or the gRPC, because they have to be kept in sync.
E
I have a question about the mechanism by which they stay in sync. If we decide to add a new field, we have to add it to one side first. Which would we add it to: the gRPC first, and then the Kubernetes API, and default it to blank until the Kubernetes API had a value, or do it the other way around? We'd have...
E
...that older clients will just omit it, and older plugins, or newer plugins, will just see empty values. Okay. And then, if it's added in a backwards-compatible way, do they have to be strictly the same? Because it sounds like you said that's your escape hatch: you can have more things at the gRPC layer than you have at the Kubernetes layer, and then...
A
...at which point it's a subset. In terms of just managing the releases and the evolution of this entire thing, I think keeping them the same is kind of a best practice, though technically we don't have to.
A
The best answer I have for you is, you know, that's just how it evolved. It needs to be fixed.
D
Okay, yeah. If we call it parameters, that seems perfectly rational. I think that's exactly what we do on the CreateVolume side for CSI, so that makes sense. Perfect.
A
Okay, so in that case, can we all agree (I mean, I think we already have an agreement on this): protocol is going to be a strongly typed field, and it's going to exactly reflect the protocol structure in the API, and parameters will be a map[string]string, which would be a reflection of the parameters field in the BucketClass.
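For illustration, a rough Go sketch of the shapes being agreed on here; every type and field name below is an assumption, not the actual COSI API definition.

```go
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// S3 is an illustrative protocol substructure.
type S3 struct {
	Endpoint string `json:"endpoint,omitempty"`
	Region   string `json:"region,omitempty"`
}

// Protocol is the strongly typed field agreed on above; adding a new
// protocol means adding a new substructure here, so the evolution of
// the API stays strictly controlled.
type Protocol struct {
	S3 *S3 `json:"s3,omitempty"`
}

// BucketClass is created by the admin.
type BucketClass struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// Strongly typed protocol block filled in by the admin.
	Protocol Protocol `json:"protocol"`
	// Free-form, driver-specific parameters.
	Parameters map[string]string `json:"parameters,omitempty"`
}

// BucketRequest is created by the user and references the BucketClass;
// the controller reads both and produces a Bucket object.
type BucketRequest struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	BucketClassName string `json:"bucketClassName"`
}
```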
A
It
okay
perfect,
so
this
this
shrinking.
Does
this
resolve
all
the
questions
we
had
yesterday?
We
were
talking
about
yeah.
C
Absolutely, yeah. So we do have to change the KEP to reflect the API as well as the gRPC.
A
Okay, yeah. If you have some time, could you add issues for this? Because I don't want to miss it.
A
Right, yeah. Perfect, thank you. All right, so this is resolved, so we are on track in terms of development. Now that this is resolved, we should be able to... you know, I've been pretty aggressive about the deadlines for the demo. Having aggressive deadlines has helped us move forward quite a bit, and we ran into this issue now, so we will aim to do the demo next Thursday.
A
However, I wouldn't consider that a very hard deadline. I would say within the next two weeks: if not next week, we should be able to do the demo the week after that, satisfying all of the constraints we placed.
A
So that would be a good milestone. All right, so development is still in progress.
A
Let's get into deployment. We have added Kustomize files to each of the repositories, so to deploy the entire stack...
A
...you can use these four commands: kubectl create -k (k for Kustomize), and you would give the entire path of the repository, so github.com/kubernetes/<name of the repository>. This simple bash script will install the full stack. That's how we've been testing it; that's how we've been using it.
A
We've
we've
ironed
out
issues
with
the
artwork,
our
back
rules,
name,
spaces
service,
account,
tokens
and,
and
just
a
whole
bunch
of
things
and
and
right
now
these
four
commands
give
you
the
entire
stack
already
it's
just.
The
stack
doesn't
include
the
api
changes
that
we
discussed
today.
A
So I encourage everyone here who's been waiting to try it out to go ahead and try this out, add issues, give us feedback, and start conversations on the Slack channel. Everything is welcome, because we want feedback on this.
D
And to put this into perspective: once this gets to beta/GA, the CRDs and controllers should be pre-installed by distributors like GKE and Azure and Amazon, and then the sidecars and node adapter would come from the storage vendor; the node adapter also would be pre-installed.
A
Yeah, so that's where we are in terms of deployment. So next is testing.
A
So I know Chris is here, I know Srini's here; is Rob also here? Rob? Yep, hey. So I'll start with Srini; he's been doing most of the work on the controller. I haven't gone through all the unit tests or the CI progress that's happened on the project, so, Srini, could you give us an update on where we are in terms of the three kinds of tests?
C
Sure. So the unit tests...
C
...for both bucket and access objects, I think I have enough, though it probably requires more in the future, but we made good progress on the unit tests. On the CI side, we have the basic CI for make and make test to run as a post-submit job. There was a PR outstanding; I had made some comments. Once that is merged, then we could run the post-submit properly. What is the...
C
But for the e2e tests, I have code ready, but I haven't issued a PR yet. That will be very basic; it does not have many tests in it, but it will help us start writing e2e tests. We haven't made much progress on the e2e side yet, yeah.
A
I see, okay.
C
No, it assumes that the full-stack deployment is done. We do not have any bootstrapping in the e2e test right now, like checking that the controller is running and that kind of stuff, but that can be done.
A
Okay, so it assumes that the deployment is already in place. And then, are there tests already?
C
Very
it's
very
preliminary,
that's
what
I
see,
but
it
is,
you
can
do
ginkgo
test.
It
probably
doesn't
do
much,
but
there
is
some
code
there
skeleton
so
you
mean
the
command
executes
and
and
it's
not
a
small
test
with
a
little
bit
of
setup
during
the
before
and
aft,
and
a
little
bit
of
teardown
and
after.
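A minimal Ginkgo skeleton of the kind being described, assuming the full stack is already deployed before the suite runs; the suite name and the placeholder spec are illustrative, not the project's actual test code.

```go
package e2e_test

import (
	"testing"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

// TestE2E hooks the Ginkgo suite into `go test`.
func TestE2E(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "bucket e2e suite")
}

var _ = BeforeSuite(func() {
	// Setup: assumes the stack is already deployed; a real suite would
	// build a client from the current kubeconfig here.
})

var _ = AfterSuite(func() {
	// Teardown: clean up any objects the specs created.
})

var _ = Describe("BucketRequest", func() {
	It("provisions a bucket", func() {
		// Placeholder assertion until real specs are written.
		Expect(true).To(BeTrue())
	})
})
```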
A
What about resources: do we need any resources from Kubernetes, like CNCF, in order to run the tests?
C
Right now... that's why I did not issue the PR. One of the things is, since we are not in the Kubernetes tree, there are a few of those elements I picked up from the test directory as imports, which makes the Go modules go crazy, because of the code that we are pulling from... so hold on. So...
A
Is
that
related
to
so
so
I
just
want
to
go
back
to
that.
Do
we
need
any
help
from
say,
sad
or
shane
to
set
up
e3
tests?
No,
I
don't
think
so.
Okay,
okay,
so
you're
saying
about
the
go
mods.
C
Yeah, because I am using some of the e2e test framework from the Kubernetes tree, and because we are outside of the Kubernetes tree, it does not work the same way as the e2e tests for other storage components. Right, so could we...
C
My code is ready, so yeah, we can sync up and then I can push it that way.
A
Okay, do you think we could make a PR by next week? Yeah? Okay, great, that's good; I'm glad that's possible. So you're saying, for unit tests we have a good start on the controller, and we have the CI running for the controller. Correct, correct. And e2e tests are in progress. So do we have the CI for the rest?
C
It's just the cloudbuild.yaml we need to... yeah, we have a PR outstanding, and once some of the comments on that PR are addressed, we should be able to... it's very simple. We...
A
See:
okay,
yeah,
let's
follow
with
him
and
you
know
get
that
added
okay.
So
next
rob!
Where
are
we
on
these
three
kinds
of
tests?
I
guess
you
don't
have
to
answer
about
the
e
to
e
test,
because
I
think
xiaomi
is
working
on
that.
F
Yeah, so we don't have any e2e tests unique to the sidecar, obviously, but the sidecar has got a pretty decent set of unit tests that I add to anyway whenever I find an issue. So we've certainly got a test for adding and deleting buckets for all three of the different protocols, and a couple of different variations like setting parameters and things, and those should be being run.
A
Okay. So, okay, I was just going to ask: if we make this new API change, will it fail?
F
It should fail, because we're going to have to obviously make the change, right? We have that client in the spec repo that we're going to need to make sure is updated, since we have to create that manually. So we'll need to make sure that client is updated, and then, more than likely, once that is updated, the builds will fail, right? The tests will...
A
And also, the client itself has a go.mod on it, in the sense that unless you update the version in the go.mod file, it's not going to fail, right?
A
Okay:
okay,
that's
fine
yeah!
If
it
is
the
case
that
anytime,
we
make
a
change
to
spec
and
all
repositories
start
failing,
that's
not
okay,
but
but
you
know
if
it
is
controlled,
where
you
know
you,
it
only
fails.
If
you
update
go
mod
without
updating
the
code,
that's
that's
acceptable.
We
can
work
with
that.
G
Yeah, it's just a pull request that makes a few minor changes related to the bind mount and the files that we mount on the pods. In terms of testing, at this point there's not that much testing for the CSI adapter in terms of unit tests or CI tests, and I guess that's one thing I'll have to spend some time working on over the next week.
G
I think I started on the unit tests, so I have them in one branch. I would say that it's in progress, but there's still a bit of work that needs to be done.
G
For the time being, yes. I think there are still some other decisions that we need to make, which will then inform changes that will need to be made on the CSI adapter, but for now, aside from the PR that's open, there are no other outstanding changes.
G
Yeah, the finalizers and the unit tests, yeah.
A
Okay, sounds good. I think I have a good idea of where we're at. So it looks like we need to make the changes to the API, add unit tests in the case of the CSI adapter, and add end-to-end tests.
A
Okay, so the final thing is documentation. This is an effort that's already in progress, now that we understand how to deploy and we have a mechanism to deploy things.
A
We need a getting-started guide which clearly explains how to deploy this, what to expect when the deployment succeeds, how to create a bucket, and just the most basic workflows. It would just be two paragraphs or two sections within a simple README file. Yeah, a very simple getting-started guide is required at this point.
A
We
don't
have
this,
so
the
only
people
who
know
how
to
deploy
this
are
people
who
are
in
the
park
who
are
in
this
meeting
right
now,
so
that
that
needs
to
change.
Anyone
who
visits
the
project
should
be
able
to
deploy
it
and
try
it
out
and,
in
addition,
so
I've
added
issues
going
back
to
the
getting
started
guides.
I've
added
issues
for
each
of
the
repositories
to
to
add
this
getting
started,
guide
and-
and
you
know,
I'm
hoping
people
from
the
community.
A
Okay, so that's where we are in terms of the demo. There are a few outstanding tasks in progress, and they're not the most complicated tasks in this whole milestone. I think we've gotten over the big bump of getting things together, having the deployment in place, and also having all the different controllers in place.
A
Okay, so I can get into deletion and finalizers now, but I think we've discussed a lot of topics today. I'll let everyone here decide: do you want to get into deletion and finalizers right now, or do you want to punt it to Monday and we can end the meeting early today?
A
Perfect, okay, all right. So I want to talk about finalizers. We brought this up last week and I want to continue the discussion.
A
So when a BucketRequest is created, it leads to the creation of the Bucket, and then a BucketAccessRequest is created, which leads to the creation of a BucketAccess. Now, multiple pods utilize that BucketAccess to provision access to that bucket for those pods. So we need to make sure that while the pods are alive, the BucketAccess, which is the representation of the access token or the service account, is not deleted.
A
We
need
to
make
sure
that
the
bucket
access
is
not
deleted,
while
the
pods
are
still
using
it,
and
we
also
talked
about
having
an
escape
hatch
where,
if
an
admin
wants
to
pull
the
rug
out
from
underneath
from
the
pods
just
pull
the
axes
out
from
underneath
the
parts
there
should
be
a
mechanism
to
do
that,
so
those
two
requirements
are
needed.
Third
requirement
is,
while
any
of
the
pods
are
using
the
bucket
you
through
the
bucket
access,
the
bucket
also
should
not
be
deleted.
A
The
bucket
request
should
not
be
deleted
before
the
bucket
itself
has
delete
called
on
it.
A
Okay, let's get into that. So shall we start with the pods and BucketAccess? Yeah, I think...
E
That's an easy one. What I would do there is establish a single finalizer and call it, I don't know, bucket-access-protection, because for PVCs and PVs we have the pv-protection finalizer and the pvc-protection finalizer. So this would be the bucket-access-protection finalizer, and the idea would be that you would always put it onto new BucketAccesses as soon as the controller sees them and does whatever it's going to do in response to that.
E
And then the only other thing you need is a rule that once an access is in the deleting state, any existing pods that were using it may continue to use it, but no new pods can start to use it. When...
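A short Go sketch of the single-finalizer rule just described, with hypothetical names (the finalizer string and the trimmed-down BucketAccess type are stand-ins, not the real COSI types): the controller tags every live BucketAccess, and once the object is deleting it removes the finalizer only when no pods use it anymore.

```go
package reconcile

import "time"

// BucketAccess is a minimal stand-in for the real object; illustration only.
type BucketAccess struct {
	Finalizers        []string
	DeletionTimestamp *time.Time
}

// Hypothetical finalizer name; the real constant would live in the COSI API.
const protectionFinalizer = "objectstorage.k8s.io/bucket-access-protection"

// reconcile applies the rule: tag every live BucketAccess with the
// protection finalizer; once it is deleting, existing pods may keep
// using it but no new pods may start, and when the pod count reaches
// zero the finalizer comes off so deletion can complete.
func reconcile(ba *BucketAccess, podsUsingBA int) {
	if ba.DeletionTimestamp == nil {
		if !has(ba.Finalizers, protectionFinalizer) {
			ba.Finalizers = append(ba.Finalizers, protectionFinalizer)
		}
		return
	}
	if podsUsingBA == 0 {
		ba.Finalizers = without(ba.Finalizers, protectionFinalizer)
	}
}

func has(list []string, s string) bool {
	for _, f := range list {
		if f == s {
			return true
		}
	}
	return false
}

func without(list []string, s string) []string {
	out := make([]string, 0, len(list))
	for _, f := range list {
		if f != s {
			out = append(out, f)
		}
	}
	return out
}
```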
A
Well, yeah, that should be enough. Now, in terms of PVCs and PVs, right, there is a pv-protection finalizer per PVC: if you're mounting four volumes, you get four pv-protection finalizers. In the case of BucketAccess, there is a hook that CSI gives you, which is NodeUnstageVolume, that we can utilize to take the finalizer out when the pod goes away. Now, when we talk about having one finalizer: how do we know there are no more pods referring to this BucketAccess?
E
It would have a consistent, or an eventually consistent, notion of which pods are using each BucketAccess, and then the idea is: as long as no new pods can obtain access to a bucket once the user has requested to delete the BucketAccess, the number of pods that are using it is monotonically decreasing and will eventually hit zero, yeah. Isn't there still a race condition?
A
Between the time it goes into deleting and... let's say you've enumerated the list of pods already, and then it goes into deleting; you enumerate the list of pods in between that window, or...
A
Right, for calling delete.
A
With using multiple finalizers.
A
Yeah, I mean, I get where you're coming from. Currently there are none; I also haven't come across something that's using finalizers this way. However, the fact that it's not being used right now is not, by itself, a good reason to not do it. But here, if you look at pods and PVs, each PV gets a pv-protection finalizer.
A
But if you have a pod using four volumes, you have four finalizers.
A
Right, right. So I agree with that. Here, also, when you look at the usage pattern, for a single BucketAccess... that is, the equivalent of a PV here would be one pod using the BucketAccess.
A
So let's say we have three pods using it. How do we know that? I still... it goes back to that race condition, so yeah.
A
Okay, so you're saying we would have to enumerate multiple times, basically.
E
Yeah. The controller would have to have a watch on pods, and it would have to have a notion of how many pods are using each BucketAccess. Then, every time anything changes with any pod, you have to update your notion of how many there are. And so eventually those pods will die, eventually the controller will notice that, and it will decrease the count for a given... yeah.
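A minimal in-memory sketch of the counting logic being described, fed by pod add and delete events from a watch; the wiring to an actual pod informer is omitted and all names are hypothetical.

```go
package tracker

import "sync"

// usageTracker keeps an eventually consistent count of how many pods
// reference each BucketAccess, updated from pod watch events.
type usageTracker struct {
	mu     sync.Mutex
	counts map[string]int // BucketAccess name -> number of pods using it
}

func newUsageTracker() *usageTracker {
	return &usageTracker{counts: map[string]int{}}
}

// onPodAdd is called when the pod watch reports a pod using baName.
func (t *usageTracker) onPodAdd(baName string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.counts[baName]++
}

// onPodDelete is called when such a pod goes away; once the count for
// a deleting BucketAccess hits zero, its finalizer can be removed.
func (t *usageTracker) onPodDelete(baName string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.counts[baName]--; t.counts[baName] <= 0 {
		delete(t.counts, baName)
	}
}

// inUse reports whether any pod still references the BucketAccess.
func (t *usageTracker) inUse(baName string) bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	return t.counts[baName] > 0
}
```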
A
Yeah, that is technically correct, but it is much, much easier and more resource-efficient to have a finalizer per pod, because the hook to inform the BucketAccess that the pod has stopped using it is just to remove the finalizer, and that can be done on NodeUnstageVolume.
A
But you're enumerating pods, and in a system with, let's say, more than a thousand pods, you're paginating. So it's definitely not more efficient in terms of I/O talking to the API server.
A
Also
gets
it
doesn't
just
get
deltas,
it
also,
you
know,
does
the
resync.
E
...about deviating from a tried-and-true pattern.
A
I
don't
think
this
is
a
stup,
it's
a
deviation,
really
it
is
a
very
clean
hook
and,
and
it
ensures
that
things
always
just
fall
in
place
when
a
node
unstage
volume
is
called.
You
know
that
the
part
is
gone.
A
Done
it
with
some
of
the
projects
in
min
io,
and
there
are
really
no-
I
mean
when
you're
at
a
particular
scale,
it's
hard
to
tell
if
it's
because
of
the
finalizers
or
if
it's
because
of
the
too
many
parts
or
if
it's
because
of
just
the
number
of
things
that
are
going
on
at
that
point,
you
just
scale
up
at
cd,
but
this
in
particular,
does
not
add
a
huge
overhead.
E
I mean, I've read a bunch of controllers, and watching pods is not that big of a deal. It's...
E
But I mean, the code would be in the same place where you would be doing whatever work needed to be done when a pod was going to use it. I also don't think it's a good idea to have finalizers that are touched by two different controllers; usually a finalizer is created by and consumed by the same controller.
A
Yeah, version skew is a good reason, but those are all manageable. The complexity... again, we looked at it the same way, to be honest with you. When we started thinking about it, we wanted to just have one finalizer, but the race...
E
Condition
is
what
what
got
us
here,
but
but
there
isn't
a
race
condition
if
you
have
a
watcher
on
the
pods
and
you
just
yeah,
and
you
rely
on
that
ratchet
from
when
an
object
goes
from.
You
know
existing
to
deleting
at
that
moment
you
can't
add
any
new.
E
...he kicks in the door and you delete the object out from under him, yeah. So you enumerate twice, basically, yeah.
E
...leader all the time, yeah. But if the sidecar gets killed and then restarts, he's able to reassemble all of his state of the world from talking to the Kubernetes API server. Sure.
A
And the pods can't utilize the BucketAccess, because... how do you prevent pods from utilizing a BucketAccess? What is that? What does that mean in the code?
F
So we have talked in the past internally about not keeping the BucketAccessRequest around during a deleting state. The BucketAccessRequest is created by the user, the BucketAccess is created by COSI, and a pod references a BucketAccessRequest, right? So if a user deletes the BucketAccessRequest, that goes away immediately; then the BucketAccess hangs around until the pods clean up and all that stuff, and that prevents any new pods from ever using the BucketAccess.
E
But yeah, I mean, all these things can race. As long as you define what your invariants are and have controllers that enforce those invariants under a finalizer, everything's fine, and you only need one finalizer per controller per object. I...
A
I
I
don't
know
how
big
of
a
hit
that
is,
like
you
said,
let's
talk
to
the
api
team
in
terms
of
just
you
know,
implementation
and,
and
if,
if
you
know
code
is
more
straightforward,
it
does
improve
reliability
and
you
know
there
are
fewer
bugs
so
mean
I
I.
A
I
just
want
to
find
out
if,
if,
if
one
there
are
any
patterns
where
multiple
finalizers
are
used
and
to
how
that
how
that
has
turned
out
for
people,
so
I
mean
from
again
I
would
say
my
experience
is
like
I
said
at
best
it
wasn't
obviously
the
bottleneck,
but
but
that
doesn't
mean
much
in
terms
of
making
this
decision.
I
would
say,
because
you
know
it's
not
like
it
was
measured
specifically,
but
also
it
kind
of
tells
us
that.
E
It
might
there
is
a
possibility
that
it's
not
an
issue,
so
I'm
just
trying
to
think
of
all
this.
So
so
in
this
alternative
scheme,
where
you
have
a
finalizer
per
pod
and
you
try
to
remove
it
in
node
on
stage
like,
are
there
situations
where
node
on
stage
never
gets
called
because
something
gets
deleted
abnormally?
E
The thing with multiple finalizers is they'll proliferate, and they'll become a cleanup headache for someone if anything goes wrong, because it's much harder to go through in an automatic way and say "are any of these finalizers bogus" without doing the hard thing that I'm saying you should do anyway, which is just enumerate all the pods and know at any time...
A
Yeah, but if we followed this, cleaning up is actually pretty simple. A simple garbage collector would just enumerate, see if any of the finalizers are pointing to a pod that doesn't exist, and take them out.
E
Right, but that garbage collector has to do all of the work that I'm saying we should just do as a matter of course. So if you're...
E
Yeah, well, and what I'm saying is the same thing would happen for cleanup: you have to do this logic of looking at the pods to determine when it's safe to remove the finalizer on the regular deletion path anyway. The deletion path isn't blocking anyone from doing anything; it's just slowing down how long it takes to delete when you want to do a delete. I...
A
See
where
you're
coming
from
I
mean
at
the
minimum,
I
think
we
would
still
need
two
finalizers
one
to
yeah
one
to
make
sure
that
revoke
bucket
access
is
called
in
the
back
end
in
the
driver
and
another
to
make
sure
that.
A
Again, like I said, I understand where you're coming from. Let's find out more information about it, and if it turns out that this doesn't work, having multiple finalizers per pod, or finalizers proliferating on the BucketAccess, then we will obviously go with the other approach. Yeah, so I just want to make sure the decision was informed. That's all.
A
Also, you don't need to make the excess calls; you just know that someone's using it when the finalizer is there. Again, we'll probably have to measure the difference, but...
A
We'll test and see if this is doable. Again, it's all coming from the point of view that the code is more elegant and the method is simpler, and if the performance hit is not there, or if it's very close in terms of performance, I personally prefer having multiple finalizers, if the only concern is performance. I do agree that version skew is a problem, so I'm not discounting that. But I think we've run out of time and we should...
B
Right, yeah.