Description
Kubernetes Storage Special Interest Group (SIG) Object Bucket API Standup Meeting - 08 February 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
B
Let's see, it looks like it's on number 19. Okay, let me post a link to that.

A
Okay, so what I'd like from you, really, is this: I think that part needs to be fixed, so first I'd like you to figure out what the right answer there is. For one, I am not a big fan of Kubebuilder. I think it complicates things quite a bit by trying to be a type system for any new CRD type that we introduce.
A
Since Go doesn't have dynamic types, it just isn't a natural fit, and the way it's designed, we do a lot of gymnastics just to be able to do normal, very simple things. That's why I went and wrote that controller by hand, to keep things simple.

A
I also used the controller in a different project, and I had to issue some bug fixes to get it working right, but for that other project we were able to get the bug fixes in.

A
That was an internal project at the company I work at, but for this project I want it to be a community-driven effort, and I want all of us to come to consensus on what the right approach is for the controller.
B
Yeah, I have done some investigation on my own of Kubebuilder and the Operator SDK, and I don't find them bad. I also don't know that I necessarily find them particularly useful compared to what they're built on top of, which comes from another Kubernetes SIG.

B
We just use controller-runtime directly in Rook, and I find that writing the controllers is pretty straightforward. I think it probably looks pretty similar to Kubebuilder's output, since Kubebuilder just uses controller-runtime.

B
I think some of those type issues that you mentioned are still kind of there, but I do find that it needs less boilerplate to get things working, and I think that allows for a little bit better high-level understanding of what's going on. I especially like the... I'm trying to remember what they call them. It's not primitives. It's...
B
Yeah, predicates. The create, delete, and update predicates are pretty easy to add, and there's not really extra boilerplate around them. It works in Rook; I don't think we ever actually had any issues with things not quite working because of that.
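The predicates Ben describes are the filter functions controller-runtime lets you attach to a watch. The real types live in `sigs.k8s.io/controller-runtime/pkg/predicate`; this self-contained sketch only mirrors their shape (the `Event` and `Funcs` types here are simplified stand-ins, not the actual API) to show the filtering pattern: unfiltered event kinds pass through, and a common update predicate drops status-only writes by comparing generations.

```go
package main

import "fmt"

// Event is a simplified stand-in for controller-runtime's event types.
type Event struct {
	Kind          string // "create", "update", or "delete"
	OldGeneration int64
	NewGeneration int64
}

// Funcs mirrors the shape of predicate.Funcs: optional per-event
// filters that decide whether an event reaches the reconcile queue.
type Funcs struct {
	CreateFunc func(Event) bool
	UpdateFunc func(Event) bool
	DeleteFunc func(Event) bool
}

// Allows applies the matching filter; a missing filter lets the
// event through, matching controller-runtime's default behavior.
func (f Funcs) Allows(e Event) bool {
	switch e.Kind {
	case "create":
		if f.CreateFunc != nil {
			return f.CreateFunc(e)
		}
	case "update":
		if f.UpdateFunc != nil {
			return f.UpdateFunc(e)
		}
	case "delete":
		if f.DeleteFunc != nil {
			return f.DeleteFunc(e)
		}
	}
	return true
}

func main() {
	// Reconcile only when the spec changed: metadata.generation
	// bumps on spec changes, not on status-only writes.
	p := Funcs{
		UpdateFunc: func(e Event) bool {
			return e.NewGeneration != e.OldGeneration
		},
	}
	fmt.Println(p.Allows(Event{Kind: "create"}))                                      // true
	fmt.Println(p.Allows(Event{Kind: "update", OldGeneration: 2, NewGeneration: 2})) // false
	fmt.Println(p.Allows(Event{Kind: "update", OldGeneration: 2, NewGeneration: 3})) // true
}
```

In real controller-runtime code the same idea is usually expressed with `predicate.GenerationChangedPredicate` passed to the builder's `WithEventFilter`.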
A
Yeah, this library works too. I think there's one bug here with a race condition, but that's easy to find and fix. I just don't want there to be something hiding in there that we figure out, say, six months or a year from now when things are in production, so I want to move to something that's more utilized by others, that's more common. What do you think?

A
Like you said, controller-runtime is a great option. What do you think about using the controller framework that Kubernetes itself uses? It's not based on controller-runtime or anything; it just uses client-go and builds the controller on top of that, and it's pretty simple and straightforward. Have you tried it out?
B
I guess I haven't looked really deeply into specifically what Kubernetes does around that. I've been pretty happy with what controller-runtime gives us in Rook and some of the tools it allows us to build on.

B
I haven't really seen any talks or anything from people in Kubernetes comparing those two.

A
Understood, okay. So maybe we should sync up again separately to go over how we fix it, or whether we should even fix it, and then we'll go from there, because I think a fix might be needed, and when we do it, controller-runtime is probably a good option. Would you be able to work on that? Would you be able to actually make the PR and the code changes for it?
B
Potentially. I don't know that I would be able to get to it particularly quickly. There is a larger work item in Rook that I'm just getting out of the planning phases of, and I need to go through and do that. But I've started having the discussions around where COSI integration lies in our priority list, and I would be the one starting on that, and then, okay, if...
A
Right, yeah, that is part of the integration. I think you mentioned your availability last time too, and that's why I was thinking we could start with this one task, because it's one of those things we don't need to address right this second, but it's important to address sooner rather than later. So I think this would be a great fit.

B
Yeah, I think that sounds good, and it should be fairly straightforward. I think it is, yes.
B
The controller... yeah, in Rook we've sort of seen what it looks like to start using controller-runtime.

A
So there's nothing plugged in yet. What we're doing is this: I wrote the controller for the COSI project, and I wrote it by hand, in the sense that I'm not using controller-runtime. I'm just using the primitives from client-go, doing the queueing, dealing with error states, and all that myself, just using an informer and a resource event handler.
A
Right, agreed. That's where I learned how to write controllers too. The thing is, we don't need a full-fledged informer here. I'm trying to remember what compromises I made; it's been a while, and I just want someone else to go over it and make sure everything looks good.

A
There is one issue: there is a small race condition in the implementation that I've already fixed, but I want others to also go through this, so it improves the confidence in that code. Maybe you could take a look at it as well, Ben.
C
Okay, so let me... can I assign it to you, Sid? Hold on, sure. I don't know how to do that, though, so just make him... I think.
A
Yeah, it's a simple slide. If you can share this... I'm going to share it on the Zoom chat, I'll try.

A
Yeah, presented. So there's just a lot of things going on.
A
The question was: how do we handle mutation of a bucket after it's been provisioned? Say, for instance, there's a bucket which is the data source for a website, and we keep the bucket private until the information that's required to launch the website is present, and then we make the bucket public. In such a case, we are changing a property of the bucket after it's been provisioned.
A
So we came up with a solution last week that seemed to check off all the different concerns we had for enabling this feature.

G
I wanted to caveat that we had a good proposal for the mechanism. Whether public/private is one of the things that should be mutable by this mechanism is a separate question, and that just happened to be the example you used. I'm still not convinced that you should be able to set public/private on an existing bucket through this mechanism, but if there is something that you should be able to set, then the mechanism we talked about is the right one, I think.
A
Yeah, we can take any number of examples: turning on object versioning, turning on access logging, and so on.

A
So, Ben, remind me here: we considered two alternatives. The first one was that the admin goes and changes the bucket object directly, and that would trigger the sidecar controller to call the driver, which would go and make the change in the cloud backend.

A
One concern was: if you're sharing a bucket across namespaces, what we do is create another copy of the bucket, and that copy will not have this change show up. So if you had to change, say, object versioning or access logging or any other parameter, then only one of those buckets would see that change: the one on which the admin goes and changes the value.
G
The only alternative that we seriously considered was going back on our one-to-one binding proposal and asking whether we need to do many-to-one binding again, and we decided that, for all the reasons we decided not to do that the first time, this is not a good enough reason to change that decision. But it doesn't matter whether an end user does it or an admin does it.

G
At the end of the day, something needs to update a Kubernetes object, and in a world where there are many of them pointing to the same resource in the backend, I guess the only other alternative was not representing it at all in the Kubernetes object.

G
The proposal there would be to say: Kubernetes is just going to get out of your way, and you have to do it yourself.
G
Basically, just what we said. Go ahead, Ben. Yeah, so the idea is that, much like we have a deletion policy on objects that tells you, when this Kubernetes object gets deleted, whether I should delete the actual object or not, we could have a similar one for updates, called an update policy.
G
That
says
you
know
if
if
the
kubernetes
object,
if
some
parameter
changes,
should
I
update
the
object
or
not
and
then
and
then
you
could
in
in
the
controller
that
was
watching
for
these
things
only
only
react
to
changes
on
objects
that
had
the
policy
set
to
true
and
furthermore,
in
the
case
of
conflicts,
where
the
you
know,
multiple
objects
were
saying.
G
Locking is the kind of thing that you depend on, and if it's not there, bad things are going to happen, so you'd rather the whole thing fail before the pod starts up if object-level locking isn't available. Or maybe that's a bad example. Maybe the pod needs to change its behavior if it doesn't have access to object-level locking, so it needs to be able to query that when it starts up so that it can do something different.

A
And we said things won't do it; we don't...
A
Right, that's what we said. So, coming back to this, I feel like we're missing something. For those fields that Kubernetes is going to manage, what would it look like? Say object-level locking is enabled: how is that going to work?

G
Right, because there will always be moments in time when you've asked for it to be turned on, but it's not turned on yet, and you need to know which state you're in. That's what the status would be for.

A
Something happened... okay, so what did we just talk about? I was asking how that mechanism would work, yeah.
G
Oh, did you drop off early? Okay, yeah. You would need to have a spec and a status field on the bucket object, and the controller would have to look at the value of the spec field and also at the value of the update policy field.

G
If it could reliably determine what the value should be, then it would be responsible for changing it and reflecting that back into the status. The value of the status field is to cover that brief period of time between when you ask for the feature to be turned on and when it's actually turned on, so that you can know when it's been responded to.
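A minimal sketch of how that spec/status split plus an update policy could look on a Bucket object. All field names, values, and the API group here are illustrative assumptions for discussion, not the actual COSI API:

```yaml
apiVersion: objectstorage.example.io/v1alpha1   # illustrative group/version
kind: Bucket
metadata:
  name: website-assets
spec:
  updatePolicy: Propagate       # this copy is allowed to drive backend changes
  deletionPolicy: Retain
  objectLockEnabled: true       # desired state: the user asked for locking
status:
  objectLockEnabled: false      # observed state: backend hasn't applied it yet
```

The window where `spec.objectLockEnabled` is true but `status.objectLockEnabled` is still false is exactly the "asked for but not yet turned on" period described above; the controller closes it by making the backend change and then writing the new observed value into status.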
G
On a per-feature basis, we would have to determine whether it was switchable while something was attached or not, and we'd have to go through and figure out all those things: whether mutations are valid, whether existing pods should see the old behavior or the new behavior, whether they should be notified when the behavior...

G
...changes: the file changing from underneath, right.

A
Fair enough. Okay, so Jeff, we got a chance to discuss this last week after the community meeting. Did this clear up some of the questions you had?
C
Okay, yeah, it clears things up. I missed the first five or ten minutes of the last meeting, so yes, it does.

C
I have several questions, but one is the question you brought up at the end, Sid. If the criterion for a mutable attribute of a bucket is that it matters to the workload, which is what Ben suggested, and it's reasonable, then Sid, your follow-up question is really important: how do we notify the workload? In other words, Kubernetes storage today, I think, has the concept that the workload gets a snapshot of storage information; it's static, and the workload, to...

C
...my knowledge, isn't made aware of any changes, and there aren't that many changes that can be made anyway. Now, with buckets, we're talking about a much more sophisticated and complicated system where, if we're going to allow an attribute to mutate, it seems reasonable that the workload ought to know about it, if the definition of a mutable attribute is that it matters to workloads.
G
So on the volume side, you're right, we almost never do this, because the vast majority of things that the workload could care about are defined at the beginning, and then we just don't let you change them. I believe the one exception is the volume size: you can resize the volume, and we do specify that we will go back in and update the volume size, I think.

G
That's more of a core Kubernetes thing, but there was a vast amount of controller code just to handle resize, because it was important enough to do all this extra work, and so we did the work. It's literally, I would say, half the code in CSI. Well, I could be exaggerating, because I spent too much time in the resize code paths, but it feels like half the code is dedicated to resize because of all the extra things you need to do...

G
...when the size of a volume attached to a running pod is changed. But of course resizing a volume is a really important workflow, so we decided it was worth all this effort. I don't know what on the object storage side might rise to that level, where you want to be able to do it while the workload is still running and then tell the workload that it's changed.
G
...adopt the new paradigms quickly, but they should, and we should make sure that's the preferred way to do things. I don't want to throw everyone else to the curb; I want to try to help. But my base assumption is that if restarting a pod is a big deal for you, you probably designed something wrong.

A
Yeah, that's what I'm saying, yes. Okay, so it's good that we know the trade-offs, for the API reviewer.
A
What
we'll
do
is
you
know
we
in
the
cap
we'll
update
to
have
this
text
be
in
there,
this
new
approach
and
then
we'll
put
it
in
as
as
as
of
our
latest
discussion.
A
This
is
the
way
thinking
of
it
and
and
just
to
give
the
confidence
that
that
we
have
a
possible
solution
to
move
forward.
With
this.
With
this
question
of
bucket
mutation
and
and
and
we'll
see
how
it
goes.
C
The other thing that I don't like about it is having to say that our COSI solution to mutating bucket properties is to use proprietary backend hooks.

C
I mean, that's reasonable, Ben, but, and I think Sid brought this up or someone did: COSI defines certain bucket properties in a BucketClass and a parameters map, but then we say the way you change them is to go into the backend and change them directly.
C
I start revisiting the one-to-one mapping, which I know you're a proponent of, but when you start making a change to a Bucket instance, the B, then you need to decide how to somehow propagate that change to other Bs that point to the same backend bucket. In my mind, that reveals an architectural flaw.

G
So let me try to sharpen up the proposal and describe what would actually happen under it. Let's say we had some first-class parameter, say object-level locking again, because that's fairly easy to understand.
G
Let's
say
that
that's
reflected
all
the
way
from
the
storage
class
through
cozy
as
a
as
a
first
class
option
on
your
bucket,
and
it
also
shows
up
in
the
the
downward
facing
api,
the
pod
c,
and
so,
if
you
have
multiple
buckets
or
if
you've
created
a
bucket
and
then
you've
cloned
it
a
bunch
of
times
to
share
it
across
your
name,
spaces
you'll
have
a
bunch
of
a
bunch
of
brs
and
a
bunch
of
b's
all
one-to-one
bound
with
each
other,
all
with
basically
the
same
values.
G
Presumably
only
one
of
them
will
have
a
deletion
policy
of
delete
and
the
other
ones
will
not,
because
you
don't
want
everyone
to
have
the
ability
to
delete
this
thing
yeah,
but
you
can,
you
could
decide
otherwise.
Similarly,
the
update
policy
would
be
the
same.
You'd
have
to
say:
okay,
look.
I
don't
want
everyone
to
just
be
able
to
go
twiddle.
G
The
switch
right
if
four
different
name
spaces
have
brs
that
refer
to
the
same
bucket
and
they
all
start
flipping
the
switch
this
to
enable
object
level,
locking
on
object
level,
locking
off
you're
going
to
have
problems
right.
You
have
like
that's
a
that's
a
our
back
problem
like
who
is
allowed
to
make
the
change.
G
So
what
the
update
policy
does
is
it
tells
you
which
one
is
basically
the
master
copy
like
which
bucket
gets
to
make
changes
and
which
ones
are
mere
shadows
or
reflections
of
the
original
bucket,
and
they
don't
get
to
make
changes
because
they're
just
using
it.
The
update
policy
gives
you
that.
So
it's
it's
a
combination
of
our
back
control.
You
know
who
can
make
the
changes
and
also
disambiguating,
you
know
potentially
conflicting
sources
of
truth,
so
that
you
have
a
way
to
determine
what
the
user
really
wanted.
G
I
mean
nothing
stops
someone,
unfortunately,
from
setting
the
update
policy
to
true
on
multiple
of
the
buckets,
but
what
you
can
do
in
that
situation
is
at
least
go
through
all
the
ones
that
do
have
it
true,
compare
all
the
values
and,
if
they're
all
the
same,
do
it
and
if
they're
not
generate
an
error
and
say
hey,
I
I
don't
know
what
to
do
now.
You
need
to
either
update
all
of
your
buckets
or
take
away
someone's
update
policy
so
that
it
becomes
clear
what
I'm
supposed
to
do.
It's
and
but.
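The conflict rule just described is simple enough to sketch directly. This is a self-contained illustration, not COSI code: the `BucketCopy` type and its field names are assumptions standing in for the per-namespace Bucket objects bound to one backend bucket.

```go
package main

import (
	"errors"
	"fmt"
)

// BucketCopy stands in for the per-namespace Bucket objects that are
// one-to-one bound to the same backend bucket (field names illustrative).
type BucketCopy struct {
	Name         string
	UpdatePolicy bool // true: this copy may drive backend changes
	LockEnabled  bool // the parameter being reconciled
}

// desiredLockState applies the rule above: consider only copies whose
// update policy is true; if they all agree, return that value;
// otherwise refuse with an error instead of guessing.
func desiredLockState(copies []BucketCopy) (bool, error) {
	var writers []BucketCopy
	for _, c := range copies {
		if c.UpdatePolicy {
			writers = append(writers, c)
		}
	}
	if len(writers) == 0 {
		return false, errors.New("no copy has the update policy set")
	}
	want := writers[0].LockEnabled
	for _, w := range writers[1:] {
		if w.LockEnabled != want {
			return false, errors.New("copies with the update policy set disagree: update all buckets or remove a policy")
		}
	}
	return want, nil
}

func main() {
	copies := []BucketCopy{
		{Name: "ns-a", UpdatePolicy: true, LockEnabled: true},
		{Name: "ns-b", UpdatePolicy: false, LockEnabled: false}, // shadow copy, ignored
	}
	v, err := desiredLockState(copies)
	fmt.Println(v, err) // true <nil>

	// A second writer that disagrees turns this into an error.
	copies = append(copies, BucketCopy{Name: "ns-c", UpdatePolicy: true, LockEnabled: false})
	_, err = desiredLockState(copies)
	fmt.Println(err != nil) // true
}
```

In a real controller this check would run in the reconcile loop, with the error surfaced as a warning event on the conflicting objects rather than a returned error.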
G
There are also a bunch of issues with trying to do this on the admission controller side, so what we proposed was: let the change go through and hit etcd, and then react to it in the mutation controller. When it sees the object update come through, it can say: hey, I don't know what to do, I'm not doing anything, and just log a warning or generate an event or something. But the other thing I was going to say is, for these first-class fields...
C
Yeah, Ben, I think the effort to re-examine one-to-N binding is a big distraction and potentially doesn't have a good answer. All the code assumes one-to-one right now, and all the documentation says that, so being able to stay with that is pretty helpful, I think, at this time.

G
Yeah, and all of this awkwardness and challenge comes out of the fact that Kubernetes doesn't have a way to atomically update multiple objects. That's at the core of all of this. So as long as we don't have atomic multi-object updates, it's better just to have one-to-one binding, to make these copies, and then to deal with the ugliness that evolves from that.
A
All right, so Jeff, you are working on the KEP. Could you let him know that this is what we're thinking about, on the KEP itself?

C
I don't think anyone's addressed his long, multi-paragraph question and comments in the chat yet. He's thinking about...

A
We go forward and we let him know how we're thinking about this. I think his thought process comes from trying to solve this same bucket mutation issue, and he's looking at the problem a different way. I think we should just let him know what we're thinking and then evolve from there.
G
Okay, but I wanted to revisit the concern about the fields that are opaque, and whether or not COSI should be able to mutate those fields, because that seems like a very interesting question from my perspective, especially because Jeff used to feel strongly that COSI should be able to mutate things even if they're opaque to COSI.

C
I guess, philosophically, I would like COSI to be a powerful framework around buckets, and I would like an administrator, or a storage bucket manager or object store manager, to be able to make their changes to the underlying bucket through COSI, and that would include first-class field changes as well as protocol-specific changes. That's the grandiose view I have of it.
G
It's hard. If you think about just the number of vendors and their weird proprietary options, and the fact that we just packed them into a string map in the storage class, that's it. If you had not only specified them at creation time but also allowed mutation of them, you can scarcely imagine the types of corner cases that are going to pop out of the woodwork when people say: oh, but...

G
...I need to notify the pod when this option changes, but it's opaque to Kubernetes, so the mechanism by which I notify the pod also has to be opaque and proprietary, and your head would quickly explode. So I think we just said: you know what, until someone has a good use case, we're not going to do this.
A
Right,
yeah,
yeah,
okay,
so
I'm
glad
we
we
got
to
discuss
this
and
get
some
more
clarity
on
this
issue.
I
want
to
do
this
again
on
thursday,
for
the
others
who
haven't
had
a
chance
to
go
through
this
and
and
formalize
this
and
and
kind
of
come
to
agreement
that
this
is
the
right
approach.
A
I
think
I
think
we're
almost
there.
I
just
I
just
want
some
more
people
to
to
you
know
go
over
this
just
so
that
we
can
have
some
more
people
kind
of
try
and
poke
holes
into
this
yeah
yeah.
That's
it
from
me.
E
Sorry, I'm just joining for part of it, ten minutes. So how are you doing with the API review? You need to address Tim's comments, yeah?

A
Yeah, so Tim is getting into this from scratch. He's just learning about it, and he has a lot of questions now.
A
I'm
wondering
if,
if,
if
we
can
ask
him
to
move
forward
with
the
cap
comments
as
the
next
step,
so
I've
asked
him
to
you
know
a
video
call
anytime
possible
as
soon
as
possible,
but
he
hasn't
responded
yet
so
so
I
also
right
now,
though,
he
has
some
comments
on
the
pull
request
and
many
of
them
are
even
just
like
ideas.
A
Jeff
can
you
share
your
screen
and
I
mean
open
that
tab
open
a
tab
with
the
enhancements,
pull
request.
Okay,
yeah.
C
Yeah, but this is just the KEP. What was the PR?

A
It's on the link above. Just... yeah, that one.

A
Yeah... can you see my screen?

A
So, wait, okay, one second. Let me close that one. Chrome doesn't let me share the whole screen anymore.
A
Yeah, these are basic questions. There's one really long comment: overall lack of self-service beyond day zero. When he says this, he's talking about bucket mutation, and self-service is not possible even with the current approach we're talking about, because we are asking the admin either to go change the bucket directly or to go do it out of band.
G
No,
no,
if
it
needs
to
be
out
of
done
if
it
needs
to
be
done
out
of
band,
it
just
depends
who
has
access
to
do
it?
It
could
be
the
end
user,
it
could
be
the
admin,
but
if
we
do
it
in
band,
there's
no
reason.
You
can't
just
change
the
br
and
have
the
change
propagate
through
the
b
to
the
back
end
using
another
controller
to
also
reflect
changes
from
the
rs
to
bs.
Again,
following
the
the
update
policy.
A
Yeah, so Shane, I'm going to try to tell him to move forward with this, because with this current approach we're not really closing off a possible solution for the questions he's raised, and it will be good to go through the API review process right away.

A
So the KEP merges tomorrow, and I want to understand: is merging the KEP different from, or the same as, starting the API review process?
E
So the KEP review deadline is tomorrow? Which review? Because the KEP's been merged... is this... no.

A
The level means: will this eventually end up in upstream Kubernetes? Will it have a .k8s.io API?

G
That's the regular feature proposal freeze. Oh, I see, for 1.21. There's another one in three months.
A
Yeah, there's another one in three months, yes. The only reason we wouldn't be able to reach alpha in three months would be that this KEP isn't merged, because we will have the rest of the things in place.

C
There are some other sections missing for being at an alpha level, right?

E
No, not production; when you reach alpha it's actually not production-ready, right? But for alpha they do have a few things that you need to fill out. Did you add that section, yeah?
A
No, no, it's understandable, it's fine. We'll wait for him and we'll do it the right way; that's what's more important. And we should make sure the API reviewers also feel like this is ready to move forward; we shouldn't just push for it. So it's fine, three months is okay. By three months we can be more than ready; we can have that thing watched and also make more progress.

A
Yeah, that is right. So with this PR itself, we're already pinging him early for the next time, so this is not too bad. We'll continue doing what we're doing and try to get the KEP merged. We'll still try for tomorrow; it's unlikely, but it's worth a shot.
A
All right, that's it for now. Let's meet again Thursday, and we'll go over the same decision one more time, to give you all enough time to think about it a little bit more and see if you can poke more holes into it, and then we'll decide finally on Thursday.

A
How do I open up a calendar somewhere here?