Description
Meeting of the Kubernetes Storage Special Interest Group (SIG): Object Bucket API Review, 03 September 2020
Meeting Notes/Agenda: -
Find out more about SIG Storage here: https://github.com/kubernetes/community/tree/master/sig-storage
A: Oh, so I'll give a quick update on the Monday meeting for those of you who couldn't attend. On Monday we ended up discussing the reference issues that we've been going over for the last two weeks or so. I explained the three different approaches we've been considering, we discussed which one would be best, and we agreed on one of them pretty unanimously at the end.
A: So today I'll go over those approaches, explain which decision we made and why, and after that I want to explain and get feedback on the next steps in terms of the KEP approval. Finally, I've got the roadmap for the development of this project, and next steps in terms of other features and considerations as well. So I'll get started with the approaches that we discussed.
A: What I've drawn here are the different resources that the COSI ecosystem handles. On the left-hand side we have the Bucket, the BucketClass, and the BucketRequest.
A: Now, this is how the different resources are interconnected with each other. In this model, the original, first model we considered, the way a BucketAccessRequest would define which bucket it wants access to is by referring to the BucketRequest. But this approach had a bunch of issues that we addressed, the first one being... I'm going to go to the next slide. Okay, this is the same diagram, but with BucketAccessRequests from multiple namespaces wanting access to the same bucket.
A: So what happens in such an approach when multiple namespaces want to utilize the same bucket?
A: The second issue we saw with this approach was when we're trying to port a workload from, say, one cluster to another. Consider the way buckets are generated: in the first cluster I create a BucketRequest pointing to a BucketClass, and COSI goes ahead and creates a Bucket. The name of that Bucket will be auto-generated, so it's likely going to be a UUID.
A: We end up with portability issues. I will get back to the portability issue in a second; it actually plays into the next approach. Also, this approach fundamentally has the many-to-one problem. Let me quickly go to the next one and we'll address this as well.
A: So one of the things we discussed was: rather than having the BucketAccessRequest point to the BucketRequest...
A: ...why not point to the Bucket directly from the BucketAccessClass? The idea being that the BucketAccessClass would be created by the admin, and if the bucket name ends up being auto-generated, that's okay, because the admin would be the one doing the manual step of finding the right bucket name and putting it in the BucketAccessClass. The assumption is that the admin burden is acceptable in this case, rather than putting the burden on the user.
A: The other assumption we made was that any user resources, such as the BucketRequest and the BucketAccessRequest, should be portable out of the box. In order to facilitate that, we came up with this approach of pointing to the Bucket directly from the BucketAccessClass.
A: The issue with this model was: if a user creates a new bucket by creating a BucketRequest, and that user wants to use that bucket, the admin has to go ahead and create a BucketAccessClass for that bucket before the user can use it. So there's an admin step involved.
A: Finally, what we agreed on was this approach, where a BucketRequest points to just one Bucket, and the reference to a Bucket will always be a single reference. So BucketRequest to Bucket is always a one-to-one relationship, which is much easier to manage. And for cases where the bucket needs to be shared across namespaces...
A: ...the plan was to create a copy of the Bucket for the namespace that wants it, have the user create a BucketRequest pointing to that Bucket, and then utilize the bucket by pointing the BucketAccessRequest to the BucketRequest. So this is where we ended up.
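To make the agreed model concrete, here is a minimal Python sketch of the one-to-one relationship and the copy-per-namespace sharing described above. The class and field names are illustrative assumptions, not the actual COSI CRD schema from the KEP.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Illustrative, simplified shapes; field names here are assumptions,
# not the actual COSI CRD schema.

@dataclass
class Bucket:            # cluster-scoped object
    name: str

@dataclass
class BucketRequest:     # namespaced; one-to-one with a Bucket
    namespace: str
    name: str
    bucket_name: str     # always a single reference, never a list

def share_across_namespaces(src: Bucket, namespaces: List[str]) -> List[Tuple[Bucket, BucketRequest]]:
    """Cross-namespace sharing as agreed: copy the Bucket per namespace,
    so every BucketRequest still references exactly one Bucket."""
    pairs = []
    for ns in namespaces:
        # a copy of the Bucket object is created for each namespace that wants it
        copy = Bucket(name=f"{src.name}-copy-{ns}")
        pairs.append((copy, BucketRequest(namespace=ns, name="br", bucket_name=copy.name)))
    return pairs
```

The point of the sketch is the invariant: `bucket_name` is a scalar, so the many-to-one bookkeeping from the first approach never arises.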
B: So, just to clarify: this is a one-to-one mapping between BucketRequest and Bucket, and you could have multiple BucketAccessRequests pointing to the same BucketRequest in the same namespace, right? And the only place you have duplication is across namespaces, which is an existing issue with PV/PVC. Right, right.
B: So that's the model. And then for the brownfield case, where you already have the bucket, the cluster admin would create the Bucket object with some known name, and the BucketRequest would bind to it just the way a PVC does for existing volumes. Right.
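The brownfield flow just described can be sketched as a name-match binding, analogous to static PV/PVC binding. The dict keys here are assumed for illustration; the real API objects are defined in the KEP.

```python
def bind_brownfield(bucket_request: dict, buckets: list) -> dict:
    """Sketch of static (brownfield) binding, assuming illustrative dict
    shapes: the cluster admin pre-creates a Bucket with a known name, and
    the BucketRequest binds by exact name match. Unlike PV/PVC, there is
    no dynamic matching on access modes or capacity."""
    for bucket in buckets:
        if bucket["name"] == bucket_request["bucketName"]:
            bucket_request["status"] = {"bound": True, "bucket": bucket["name"]}
            return bucket_request
    bucket_request["status"] = {"bound": False}  # no matching pre-created Bucket
    return bucket_request
```

As the discussion below notes, this is purely a name match: there is no equivalent of the PV/PVC matching on access modes and capacity.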
D: Could you repeat that, please? I'm trying to see how we model the brownfield case. Is it similar to how static binding of PV/PVC works, or is it a bit different here?
A: Yeah, it is similar to the static binding of PV/PVC. We don't have a mechanism to do something like the dynamic binding of PV/PVC; the dynamic binding in PV/PVC uses access modes, I believe, and capacity, a combination of those two.
A: We don't have that here. This is the approach we all liked, and it was the last thing that we discussed. I think it was the last item on the list of things we needed before we could go forward with the KEP.
A: So what we ended up doing after that was: we went ahead and updated the KEP to have this latest design, and we made sure that it follows all the standards it needs to follow. I think we sent out a message yesterday asking for everyone's review, so please take some time to review it and leave your comments on the KEP; we would appreciate it. That's where we are.
D: Before the roadmap, sorry, I have one more question. Can you please go back? The Bucket-to-BucketAccess link in the bottom-left corner: is that the Bucket pointing directly to the BucketAccess? Is that right? And it's one-to-many, right? I'm just trying to understand.
A: So actually, let me... Jeff, did we remove the bindings from Bucket to BucketAccess, or did we say...
E: The field that had "binding" in its name is removed. Instead, in the KEP right now, the API spec for the Bucket instance and the BucketAccess instance shows a two-way reference: the Bucket points to the BA and vice versa. So there's an implicit binding in that you see the reference.
B: Wait, so we have the BucketRequest pointing to the Bucket, the Bucket pointing to the BucketRequest, and then the Bucket also pointing to the BucketAccess, and vice versa?
E: Okay, let's go through the KEP. There haven't been review comments on it, but at one point in the life of the KEP we actually had a binding array in there, when we had the many-to-one; instead it's now just a one-to-one reference.
A: Yeah, so let me ask you this: last we spoke, we had removed that field, the bindings field, from the KEP.
A: And there was no explicit link; you couldn't actually go from Bucket to BucketAccess. This diagram can be updated, so let...
B: Regarding what the access refers to: I would try to minimize the number of pointers, because the more pointers you have back and forth, the more likely state is going to end up skewing and pointing to the wrong things.
C: I get that you will never create a BucketAccessRequest without knowing what bucket you're pointing to, and you'll never change it; you know it once, at creation time, and you set it. But if you only want to look at the non-namespaced objects, it's nice to know which access requests, which accesses, are related to which buckets.
A: Yeah, it's for the admin to know where it's pointing.
B: Yeah, I can buy the argument that a BucketAccessRequest is simply requesting access to a specific bucket, but the Bucket needs to confirm that that has happened, and it can use an internal pointer to do that. That way, even if the BucketAccessRequest goes away in the future because the user deletes it, the Bucket has a way to dereference the BucketAccess it's associated with. I don't know if that's a real concern, but especially now that we have finalizers.
C: Well, you need to know which finalizers to delete when you delete a BucketAccess, right? When you delete a BucketAccess, if it's the last one that refers to a Bucket, then you can remove whatever finalizer is being used by the accesses. And in particular, I'm thinking...
B: I guess if we didn't have this, the controller would have to go back up the chain and come back down, right? It would need to look at the BucketAccessRequest, then the BucketRequest, and then the Bucket. I guess this would save that work.
H: But I have another use case. Do we plan to have static provisioning of BucketAccess as well, so that as an admin I pre-create a BucketAccess and...
A: Yeah, we thought about it. The way we look at it, if the admin wants to provide access with a static set of credentials, they would just have to create a Secret with the credentials and, in the BucketAccessRequest, point to that Secret. A static BucketAccess would then be created by COSI, with no provisioner set.
B: And I guess that's the case I'm curious about, because if the user foo-bars it and points to the wrong bucket in the BucketAccess object, and the BucketAccessRequest points to a different bucket, how are we going to handle that? Does that mean we need to do checks everywhere to make sure that both of those align, which means we have to dereference them all the time, or do we just say "hopefully these are both pointing to the same thing" and use them wherever we need them?
A: So the Secrets are opaque to us. If the admin creates the Secret assuming it's for, say, bucket one, and the BucketRequest points to bucket one, but the Secret is not actually for bucket one...
A: ...COSI can't verify that. It's simply going to hope they got it right and work that way. It's going to fail when it's mounted to the pod; they would see an access-denied error, and the admin can intervene at that point.
B: It's not necessarily a mutation, it's a creation, right? If you're statically creating the BucketAccessRequest and the BucketAccess, you could, say, make a mistake and point to a bucket in your BucketAccess...
B: ...that is different from the bucket your BucketAccessRequest is going to point to, because the BucketAccessRequest points to the BucketRequest.
H: But isn't this just a case of the user CRs, like the BAR and the BR, being just representations of a request? The system would, you know, reject it if it doesn't make sense.
C: One of the bindings would fail. If the non-namespaced mapping between the access and the bucket, and the namespaced mapping between the access request and the bucket request, were not the same, one of the bindings should fail to happen because of that, and then you have a broken object.
H: I think you don't want the user to be able to take out a reference which is so crucial for the model that we're representing.
H: To do the binding, we probably want to represent that in the COSI structures, the administrative structures.
B: I think that's fine with me. If we're going to have it, though, let's call out in the spec how we're going to make sure that we don't introduce inconsistencies.
B: We're going to have to do more work to do that. I do agree that if you prevent binding, and you do your validation during binding, that could be a potential way to prevent it. But given what Alexey said about this being a transactionless system, I'm not entirely certain we can guarantee that we're going to get it right in all cases.
E: So the case we're trying to account for here is where the BA points to a Bucket instance that's different from what the BAR, through the BR, is pointing to. The BR points to a Bucket instance, the BA was created because of a BAR, and the BAR references the BR; so that's our linkage. And what you're saying, Saad, is that if somehow that gets broken or corrupted, the BA could be pointing to a different Bucket instance than the BR is.
C: Right, yeah. So in the dynamic case, the non-namespaced objects will only be created as a result of the namespaced objects, and so you can just copy the binding correctly. But in the pre-provisioned case, where for some reason someone statically created a BA and pointed it at a Bucket, and then the user came along and created the corresponding BucketRequest and BucketAccessRequest referring to those two objects but to the wrong bucket, then one of those bindings should fail.
A: Right, okay. Is it okay to say there is no manual or static provisioning of BAs?
A: Yeah, I see what you mean. That's why we had this concept that BARs are the only things that would be created, and if the admin wanted to provide a static access credential, they would just point to the Secret from there. That way it follows the COSI lifecycle, with the BA being created by COSI using whatever is provided.
H: For the same reasons as for buckets, it made sense to have that managed as a cluster-wide CR and let the administrator provision the brownfield cases using that object. Do you think it's still the right decision to have that created using Secrets and so on, instead of a BA? The thing with...
H: So, like Andrew suggested: are we going to plug it in in such a way that the rest of the flow would follow from the existence of that BA created by the admin?
A: So let's say someone goes and creates a BA. In the pod workflow, they would have to create a BAR that points to this BA, or we'd have to model this such that, if there is a bucket access name already set, then we use what was created by the admin. That will work.
J: Right, like buckets, right; it's just like that field. What are you triggering on? There's got to be a binding, I guess, right? So you have a one-direction binding at provision time, and then that goes into a dual binding.
J: When the request gets created, it doesn't actually itself have to provide a reference to a BucketAccess. Then you do the same kind of binding logic you do for volumes. You just basically say:
"Is there a BucketAccess already out there that works for me?" If there is, you don't actually do any provisioning of the BucketAccess. But I guess there's always this notion that, post-provisioning of a BucketAccess, you then have to take additional steps, possibly depending on the type of BucketAccess, to write credentials into the mounted volume.
J: And so my point is you would have to distinguish between bound and unbound, because if it's bound we're not going to do anything; we assume the credentials are already there. So unbound, I guess, would mean you have both a provisioning and a binding step.
J: I guess if you do it the way snapshots do, you actually provision the resource and bind it before you fill in the details. In this case, if the details are already filled in, then it's just a different flow in the binding logic, where you go "well, it's already here, I don't need to provision it, but I need to bubble the actual credentials down into the volume."
J: Oh well, assume it's not the workload-identity class here, but the "I have a key that I need to communicate to the application" case. Remember, once upon a time we talked about that being done through a Secret. We decided no, we're going to use ephemeral volumes: we're going to do basically a custom volume and write that secret data, not as a Secret, but to a well-known location in that volume.
A: So, coming back to this question: yes, there is a problem with static provisioning of BucketAccess, where we'll have to deal with the bindings all the way from the BucketAccess, to the BucketAccessRequest, to the BucketRequest, to the Bucket. We can choose either of two ways: either have the bucket instance name and deal with the binding logic, or don't have it and use the BucketAccessRequest-to-BucketRequest link as the source of truth in terms of implementation.
J: Are we losing anything going that way? Well, a source of truth: I assume we're still saying that if you've got inconsistencies, we fail; that's what I mean. So that means you've got to basically dereference everything and evaluate it all, but you can evaluate it at binding time, which is at least a one-time event. And then I guess maybe Alexey's point is that there are race conditions in there, because resources can change after you read them.
J: Then you could do another check at BAR attach time, which would be a read-only check; at that point you could just evaluate it. But you could still change these things after the fact too, right? So I don't know; it might be good enough to just do it at binding time.
B: Yeah, implementing that binding logic is going to be fairly complicated. It may be worth writing a design doc on what that's going to look like, just based on how complicated the PV/PVC binding logic was.
D: Right. Maybe there won't be any security concerns, but we potentially may have a situation where the BucketAccess points to one bucket and the BucketRequest points to another bucket in the end, right? And I'm not sure whether that creates any security holes.
D
No,
but
bucket
bucket
request
itself
is
under
user's
control
right,
so
they
can
actually
change
it.
Oh
the
bucket
request.
H: But it's part of the kubelet flow, right? The kubelet is...
A: Yeah, that way we can probably just assume that the bindings are right, and all the way until it's actually provisioned to the pod, we don't even have to...
H: ...check it. That sounds pretty secure to me, also because we can enhance the security checks and make sure we even, you know, reject an access.
A: So I think we should just do the bindings check right before the volume is mounted into the pod. That solves all of these problems, and we can continue having the bucket instance name here.
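The pre-mount check just proposed can be sketched as follows. The field names (`bucketRequestName`, `bucketName`) are assumptions for illustration; the idea is only that the namespaced chain and the non-namespaced reference must agree before the volume is mounted.

```python
def check_bindings_before_mount(bar: dict, ba: dict, bucket_requests: dict, buckets: dict) -> None:
    """Sketch of the pre-mount consistency check, with assumed field names:
    the Bucket reached through the namespaced chain
    (BucketAccessRequest -> BucketRequest -> Bucket) must match the Bucket
    the BucketAccess points to, otherwise mounting is refused."""
    br = bucket_requests.get(bar["bucketRequestName"])
    if br is None:
        raise ValueError("BucketAccessRequest points to a missing BucketRequest")
    via_chain = br["bucketName"]   # BAR -> BR -> Bucket (namespaced side)
    via_ba = ba["bucketName"]      # BA -> Bucket (non-namespaced side)
    if via_chain != via_ba:
        raise ValueError(f"binding skew: chain says {via_chain!r}, BA says {via_ba!r}")
    if via_chain not in buckets:
        raise ValueError("referenced Bucket does not exist")
```

Running this once, right before mount, matches the "one-time evaluation at binding time" idea raised earlier, while accepting that objects can still be mutated afterwards.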
H: Is there any way, technically, to pull off this CSI volume if we want to reject access, for example, and then cause the pod to restart, or...
A: Are you saying deleting, revoking, credentials? Is that your question? Yeah, it's possible. In order to revoke the credentials, you just follow the normal volume workflow to...
A: So I'm thinking about how this would be done. The kubelet has a way to unstage a volume and unpublish it; generally there is a way to do it, though I don't know the exact answer right now. I don't know if, in the case of an ephemeral volume, you have a PV that's created. If there is, then we can remove the PV, and that will trigger the unpublish-volume call.
A: The question is: how do I revoke whatever credential was given to a pod? And let's say, why do I...
E: That comes up in other situations too, where access to data needs to be revoked because you've discovered a security problem.
E: Yeah, that should be the trigger. Yes. Well, it doesn't work: if you delete the BA, it doesn't change the running pod. The running pod has consumed a mounted secret and has its tokens; they're now variables inside the binary, and you're not revoking anything unless you do it at the backend.
C: I think the pod needs to have a finalizer on the BucketAccessRequest just for correct deletion behavior, but you still need a way to say "I want to delete this." Even before the Kubernetes objects go away, I want to revoke the credential on the backend; Kubernetes will clean up when it cleans up, but I want to sort of go around it.
C: Is that what you're saying? I think so. Or else the admin must delete the pod at the same time he revokes the credential, to actually effect the real revocation on the backend. Which, I guess I would be okay with that too, but we just said you don't like the idea of just killing the pod, and I...
F: It can only be deleted after the pod is deleted. I say yes, because if the pod...
C: If you have a pod that has a bunch of state, and part of what it does when you kill it is flush its state to the bucket at the end: if you delete them both at the same time, there's a risk that the pod loses access to the bucket and can't flush its final state. And now you have corrupted or incomplete data in your bucket.
C: We had this with PVs and PVCs, where if you deleted a pod and a PVC at the same time, it used to be possible that we would actually delete the volume while it was still attached to the pod, because of a race. Then the pod would have dirty data it's trying to flush to the volume, but the volume isn't there anymore, and the kernel gets very unhappy.
B: With object access you just get 401s. Maybe we make this an option on the request, or on the BucketAccess.
B
How
do
I
do
it,
then?
Well,
no,
I'm
not
saying
retain
or
delete
I'm
saying
whether
you're
going
to
have
a
finalizer
or
not.
E: The current design of COSI has no watches on pods across all namespaces, so we don't know that. We only have our node adapter, the COSI node adapter, which gets called on startup and termination of a pod, but that's it. Otherwise we have no connection to the pod.
A: Yeah, I think that solves it. What solves it, Sid? So we're saying that if a BucketAccessRequest deletion is initiated by the user, then we wait for the pod to die before the credentials are revoked.
A: But if a BucketAccess is deleted while the BucketAccessRequest is still alive, then we know the admin initiated it, and we want to revoke the credentials.
E: Okay, yeah. The design does have a finalizer on the BAR, well, on everything except the classes. So yes, the node adapter, the COSI adapter, could be the piece of code responsible for removing finalizers. It has the final say.
A: To summarize for everyone here what we talked about: the bucket instance name should be there; not just "can be there," it actually should be there. We also know how we're going to check the binding: right before it's mounted into the pod. And for revoking access, the model we're following is: if the BucketAccess is deleted first, we revoke directly, even if the pod is running; we don't even check. If the BucketAccessRequest is deleted, then we won't actually revoke the access until the pod is gone.
A: Okay, so in that case, let's do this: we'll update the KEP with whatever we discussed today. Please review it and share your thoughts, and we'll continue after that.
G: Okay, we have six minutes. Can we go through the graduation criteria? What else needs to be done in the KEP?
A: Criteria, okay. So I think we have an understanding of the API; we actually discussed today whether we want to add or remove fields, and we want to remove the BucketAccess reference from the Bucket object. Greenfield and brownfield flavors: yes. Evaluate gaps, update the KEP, conduct reviews: that's what we're doing today. Develop unit test cases: so is this required for KEP approval, or is this for alpha, where we have working code? What's the difference between the KEP and alpha?
K: Tests are for the development phase; that's the code itself. You just need to write down what types of tests, at a high level.
A: Okay, so we have that. So it looks like we need to update the KEP with whatever we talked about today, and if everyone reads it and thinks it looks good to them, then yeah, we should be able to move.
K: There's one thing I want to ask, because I remember initially we talked about making this "provisional": getting the KEP merged and then starting a PoC. I think that's more like experimental work. Now that we are trying to reach alpha, I think we also need to go through the API reviewer.
A: Someone was saying something; please go ahead. "No, I was just going to agree." Okay, so what's needed for alpha? If it's not "provisional"... I think in the KEP we had it written somewhere, right?
K: "Provisional" is more like doing a PoC; you just have the problem statement.
A: Wouldn't we have to mark it "implementable" at some point anyway? Don't we have to do "provisional" first and then "implementable"? What I mean is: let's say we go ahead with "provisional". Wouldn't we have to do the API reviews next, and then change it to "implementable"?
K: Maybe. I think it's not that critical; it's just a status there. But I think it depends on whether you want to reach alpha in this release. "Oh, I see."
K: If you want to reach 1.20, then just make sure that you do the API review.
A: Okay, and a quick question: what's the last date for the code freeze for the 1.20 release?
B: Feature freeze normally comes within the first third of the cycle. The cycles used to be a quarter long, so feature freeze would be at the beginning of the first month, and the end of the second month would be the code freeze. But because of COVID we ended up removing one of the quarters, and we only have three releases this cycle, so we have a little bit more time.
B: I'm not sure exactly what the feature-freeze and code-freeze dates are for 1.20. We should target an alpha for 1.20, and if we slip, we can slip it to 1.21. But I completely agree with Xing: let's get the ball rolling on the API review. The process for that is going to be: first, let's get consensus within the SIG and make sure we're all on the same page and good with the KEP.
B: If we are, let's pull in the API reviewer folks, whether that's Jordan, Tim, or whoever, and have them take a look and sign off on it. That needs to be done before the feature-freeze date for 1.20 if we want to get an alpha into 1.20. If we miss that date, then we're going to have to wait for 1.21.
A: I see. Yeah, let's aim for that date. Given the KEP as it is, we can quickly update it and get it to a point where it reflects whatever we talked about as of today.
B: So for the API review, they're going to want to see the whole API, so you'll have to create the PRs for that.
A: We've already...
B: Oh, excellent. So that'll include the spec as well as the Kubernetes API.
A: Yeah, so we've got that anyway. It needs to be pushed up, because we've been actively developing; I don't think I see the latest part here, but we have the API. And how do we get in touch with the API review folks? Should we go through you, or directly contact them? What's the process there, Shane?
A: Because I can hear some... okay. So I think we should add that flag, or the label, and we can quickly get the KEP up to a point, and also the API repo up to a point, where it's ready to be reviewed.
A: So let's go ahead and do that, and I'll follow up with you, Saad, and you, Xing, and we'll go from there. Sounds good, thanks. Perfect, all right. How much time do we have left? Okay, we're out of time. Thank you, everyone! We won't have the Monday meeting this week, so I'll talk to you all next Thursday.