Description
Kubernetes Storage Special-Interest-Group (SIG) Object Bucket API Standup Meeting - 22 March 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A: So, inside the controller, there was an issue where, if there were a lot of changes happening to any of the bucket-style resources, there was a chance of concurrent map reads and writes, so we fixed that. It was just noise today.
A: Similarly, we've been making updates to the provisioner and the controller, and we've been constantly making changes to the docs. We moved over from glog to klog in most of the implementations, and there was a bug in here too that was fixed. I can't remember what it was that I fixed; I'm just looking at it to see if I remember.
A: Oh yeah, this was an interesting case, actually. This is the bucket request controller: after a bucket request is created, on the central controller this bucket request listener will be listening for bucket request events, and then, if a new bucket request comes in with a valid bucket class (or a default), we respond by creating a bucket object. Then, once the creation is done, we update the bucket request object itself to have a reference to this bucket.
A: So it's a two-step process, and between the first and second step the controller can go down. This fix ensures that, even if the controller goes down, we don't end up in a situation where the second operation is never done. That was one of the changes brought in here. The same change needs to be carried out for the bucket access controller; it hasn't been done yet. Anyway, this is just supposed to be a quick intro to what we've been doing this week.
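The crash-window fix described above follows the standard idempotent-reconcile pattern: make each step a no-op when its result already exists, so a retried reconcile after a crash completes the missing second step. A minimal in-memory sketch, with simplified hypothetical types standing in for the real CRDs (not the actual COSI code):

```go
package main

import "fmt"

// Simplified, hypothetical stand-ins for the real resources.
type Bucket struct{ Name string }
type BucketRequest struct {
	Name      string
	BucketRef string // filled in by step 2
}

type store struct {
	buckets  map[string]*Bucket
	requests map[string]*BucketRequest
}

// reconcile sketches the crash-safe pattern: both steps are idempotent, so
// if the controller dies after creating the Bucket (step 1) but before
// stamping the reference on the BucketRequest (step 2), a later reconcile
// of the same request finishes step 2 instead of it being lost forever.
func (s *store) reconcile(brName string) {
	br := s.requests[brName]
	bucketName := "br-" + br.Name // deterministic name, so retries find it

	// Step 1: create the Bucket only if it does not already exist.
	if _, ok := s.buckets[bucketName]; !ok {
		s.buckets[bucketName] = &Bucket{Name: bucketName}
	}
	// Step 2: stamp the reference only if it is still missing.
	if br.BucketRef == "" {
		br.BucketRef = bucketName
	}
}

func main() {
	s := &store{
		buckets:  map[string]*Bucket{},
		requests: map[string]*BucketRequest{"my-br": {Name: "my-br"}},
	}
	s.reconcile("my-br") // first pass runs step 1 and step 2
	s.reconcile("my-br") // simulated retry after a crash: both steps no-op
	fmt.Println(s.requests["my-br"].BucketRef)
}
```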
A: A new person has added themselves to the COSI project - this one, and I don't know who this person is, but they have.
A: Yeah, so they've said their projects are COSI-related - I mean, the involvement is COSI, with links to the projects they're on.
B: I'm just asking about this person: what is this? He listed all of this, but did this person really contribute to COSI?
A: They made a bunch of documentation PRs - so, you know, deployment and such. Okay.
B: It's a general kind of thing - this person just mentioned everything he or she has done. That's why I don't think they're trying to become - this is the membership for Kubernetes, the overall org, so it's not about becoming an approver or reviewer in COSI at all.
A: Well, that's what I thought originally, but then, you know, when Chris joined, he basically wrote the same thing, and he has LGTM and merge privileges on all the COSI projects. Chris, let me pull that up really quick. Yeah - no, Krish, I approved it. So in a sense I only said plus-one there, but I didn't specifically say for which projects.
A: Yeah, they mentioned seven PRs or so. However, with Chris - right, yeah, if he's there, he can also confirm this. He just said these are the projects he's involved with, but now he can LGTM. I want that for Krish, so.
A: Okay - but right now, if anyone LGTMs, a PR can be approved, by the way. That's okay.
B: Can that be approved? It cannot be approved, right?
A: Well, right now it can. I'm not sure.
A: So I'll show you this one in the process. Okay, I added this sample driver YAML file and...
B: Yeah, that's because it's you; but if this were somebody that is not an approver, then that's not the case. So actually, whoever gave the approval should be accountable, right? Because if you actually don't want this to be approved that quickly, then maybe you should add a hold there. I'm not sure what your intention is for this, because...
B: Yeah, so in general it is like that, right. That's why they are actually sometimes a little bit hesitant about giving out the membership; but once you get the membership, you can LGTM any repos under kubernetes-sigs - if you are a member under kubernetes-sigs, right. Understood.
B: Yeah, basically any repo under that org you will have LGTM power on, and then, you know, if it's already approved, that will just merge it. So sometimes you may want to be careful: if you want someone to check it before it's approved, then you probably want to add a hold, if that's the case.
A: Understood - okay, that's good. So that's so that you don't have someone else approving; that's what you're saying. All right, so the next thing: while working with some of the vendors, another thing that was brought up - and this is a very important concern - was that we don't have an automated CI mechanism to push the latest builds onto the registry that we're using. So for that, they came up with a brilliant idea, and I'll let him explain that further. Tejas?
D: Hey, I'll just share my screen. Oh - can you give me permissions?
B: Oh okay, you want me to make it - who is that, Tejas? Okay, okay.
D: All right, can you see my screen?
D: So basically, the problem we were trying to solve was having a container image built periodically from our master branch. I think we have four container images right now, so I created this repo here, "ci", in the container-object-storage-interface organization, and added a couple of GitHub workflows. All these workflows do is basically: they're scheduled to run every day at midnight.
D: They check out the repo's master branch (or main), then build the container and push the container to Quay. That's where we are hosting our canary images right now, in the pre-alpha state, until we can push to where the official images go, right around alpha.
So
we
have,
you
know,
there's
couple
of
minor
changes
in
the
make
file
as
we
build
this,
but
right
now
like
so
I
set
this
up
yesterday
and
if
we
go
look
at
the
images
in
quay-
and
this
is
the
default
in
our
customized
template
right
now,.
D: So I'll go to the adapter here - you can see this was built 18 hours ago. So now, every day at midnight UTC, it will basically build out a new canary image and push it to these repos.
D: So we have the controller building in this workflow, the adapter in a separate workflow, and the provisioner and sidecar images building in a separate workflow.
A: So one of the main reasons we went to a repository outside the org was because we couldn't really do deployment through our CI process. For one, we need to get an image approved - I think we need to be alpha before we can start pushing images through Prow. But people still want to try it out; people still want the latest build, and this is our temporary workaround.
D: Yeah - and Sid or others could not configure secrets on the repo in the kubernetes-sigs org, so we basically set this workflow up here. It's fairly straightforward, but this will give us a new image every day - at least one image. The actions are also set up to run manually if needed, so whoever has access to the repo can go and kick off the workflow from the UI or through the GitHub API.
A: And can you see the builds - can anyone see the builds?
D: Yeah - okay, well, actually, I'm not sure if the workflow UI is available if you're not logged in.
A: I see - okay, good to know. But anyway, I think this provides a good stopgap measure.
H: So if you scroll up, that's the same within a cluster; and scrolling down in this document - the links are in sig-storage COSI - I've got some notes about multi-cluster. It was a good exercise. I didn't get to spend much time on this because of other work, but it's the same diagram; the point of view is scale-out multi-cluster, where it's brownfield sharing of a common back-end bucket - the pail. In this, the bucket abstraction is bucket one; that's the bucket instance in cluster two, and how did you - and what?
H: You want to copy bucket one to the new cluster. If there's an issue with this, you may need to copy bucket classes - I don't know. No, we're not - so why do we need to copy these things that are copied to cluster two? It's always done this way in federation and multi-cluster, but you want the tool - you want COSI - to automate as much as it is able to, but COSI can't automate the creation of BARs so well.
A: I'm questioning whether we even need to automate that. I'm not sure if we need to copy BCs and BACs, even if you're going to reference them.
A: So we said - I think Ben was also mentioning this - in the case of sharing across clusters, it can be considered the same as static brownfield, wherein a bucket exists outside, and on this bucket you cannot perform lifecycle operations, but you can perform access grant and deny operations.
A: I was thinking more like: an admin would go manually create this bucket, and it would have some fields in there to denote that it is static brownfield, and that's about it. It doesn't need a BC; it doesn't need preset BACs - you know, bucket accesses on it. Anytime a bucket access is required - I mean, access is required on the bucket - a user would go create a bucket access object pointing to that bucket.
I: A brownfield scenario where you're just manifesting a bucket out of whole cloth - and you have to put something in that provisioner field. And if the system is just going to ignore it and use the one on the BAC, that feels weird. But if it's...
A: I think it makes sense to just leave it in the bucket, because one thing I see with moving between clusters is that you could use a different provisioner if you wanted - one that would still be able to talk the S3 API, or whatever, to get you the privileges, right?
I: Right - but what if there was stuff in your bucket object that was stamped on it by the original provisioner? I haven't really thought this through, but if you have two different provisioners, and they both more or less do S3-compatible things, but with proprietary extensions...
A: No, no - definitely no. This is a static brownfield case: the admin is manually creating this bucket for now, and they would choose which fields are copied over. And again, we should document this very well, saying that static brownfield does not manage the lifecycle of the bucket; it can only grant and revoke privileges.
A: Not really driverless - we still need to do revoke and grant.
A: For regular brownfield, we talked about that deletion lifecycle, right? You have references to other namespaces that are using this bucket; regular brownfield is not manually created either, and in regular brownfield, when a BAR accessing that bucket goes away, we update the references.
A: Okay, so let's say you have a bucket created outside of COSI - it's existed forever - and now you want to start using it with the COSI model. You just want to use it like you use all the COSI buckets: specifying in the pod spec somewhere that you need a bucket, and then you somehow reference this bucket by referencing the BAR.
A: So in that case, if an admin wants to bring that bucket into the cluster, they would be able to do it by creating the bucket object by hand.
A: In today's - so, I'm talking about all of this in the context of your proposal, which we have right in front of us right now.
H: Right - okay, so in your view, then, static brownfield means COSI is not cloning the Bs; an admin is creating a B to represent a back-end bucket, and then making that B known to a user, who references it in a BAR.
H: Yes - okay, so I understand that term now. Where was I going with that? So you were saying - okay, so now it makes sense, and yes; but I mean, we can do that anytime we want: with either design, we can do this static brownfield, right?
A: Right, so my whole point was: if you were to go down your route, the whole proposal that you have - sharing across namespaces or sharing across clusters - is not a no-go. We can definitely address it; we even have a path to address it, and we can actually implement it when the time is right.
H: Let me point out something - maybe I'm wrong; like I said, I didn't get much time to work on this - but when I drew the diagram that you can see now, I hope, with cluster two, you can see that it's different from the diagram above, and what's different is - yeah.
H: There are no BRs anymore, because this proposal says we don't need BRs for brownfield - we just need BARs, that's all. And so with a scale-out to cluster two, or including cluster two in the use of a back-end bucket, we don't need BRs, because they're brownfield use cases, or even static-brownfield-like citizens. So there's a difference, which might matter, and it might not sit well with some folks. The exercise was useful, because I hadn't recognized that part of it well.
A: I don't see a big problem here, to be honest, because we're making a huge improvement in not having to make copies of the bucket. A little bit of asymmetry is totally fine, if you ask me, as long as we're not getting into weird cycles and recursions that don't end.
A: Yeah, so I think you're talking about the asymmetry, right? I think that's totally understandable. Actually, I don't think it's causing huge confusion; I'm not sure what anyone would say against it. Simply saying "asymmetry is bad" is not acceptable, I think, because in some cases you need it - the usage patterns are different between greenfield and brownfield.
H: So this diagram, in my view at least, accurately represents a new cluster being brought up that would reference the same back-end bucket. Like I said, there may be some omissions here, or errors, because I didn't get to think about it too deeply over the weekend.
A: To be honest, I think we can move forward with this, but I want to make sure others are also on the same page - and if not, I want to understand why, so we can address it. So, Ben, what are your thoughts?
I: I don't have any issues with this; I'm still trying to figure out where the friction is. I was kind of following along with your discussion, but this doesn't bother me - the not having a BR.
H: That would be both fields, and an admission controller or something enforces mutual exclusivity, right?
I: Yeah - that's how it works with snapshots, I think, and it's a good model. Okay, but getting back: where I kind of lost the thread was back when we were talking about the provisioner. So the bucket has a provisioner on it, and you're saying the BAC can also have a provisioner on it - but why? The KEP...
G: No, the KEP we can update if needed, right? It could be that the KEP has gotten stale in that area.
I: Right - well, I'm asking, forgetting about the KEP: if we had the option to take it out of the BAC, would we do that? Or is it adding some value that I can't see?
H: But we automatically - yes, we have a B, but in brownfield - yeah, then I think what you're saying makes sense. Actually, I keep switching between the two models, but in this current proposal, in brownfield, all we need is a BAR; the BAR points to a B, and the B contains the provisioner. We're done.
A: Yeah, and I think the implementation is like that - we don't have the provisioner there. I think we removed it quite a while back, saying we didn't need it.
A: Yeah, let's do the second part - thanks for reminding us. So, I mean, I think we should go forward with the proposal. I don't see any major issues with it, and if we do run into issues, we can always address them. So let's start with the office hours. I want to open up questions from everyone. What we can do is look at the code; we can look at how to deploy.
A: You can ask me, or anyone else here, why we've made the decisions we've made in the code, stuff like that; and I can also help you get started if you want to contribute, or if you have any questions about what we've been trying to do with COSI.
A: Okay, so - let's see, can I get share permissions on the other login?
A: Yeah, I have two - okay, so the one that doesn't have it - yeah, I can do it now. Okay, cool, all right. So - okay, can you all see my terminal, and is the text size big enough for you to follow? Yes and yes - okay, perfect. So, Vyani, I remember last time we talked there was an issue with - what was it? Was it with mounting the bucket into the pod? What was it?
K: Oh, hi. First - there were multiple issues.
K: There was a name - first, there was a namespace issue, right?
K: Yeah, it was suffixed by something like "-dev" in the code, and I had to change that and recompile to make it work. And then finally I could create the bucket, and everything was fine - the bucket was created in MinIO - but the second part, the mounting into the container, does not work, right?
A: Got it, okay.
K: I could do it manually - so let me prepare this; I need to log on to my cluster. But basically, if you go into the source folder, can you go into the example folder, where you have all the CRDs and stuff?
E: Yeah - that was in the controller; so, the controller, and...
E: No, sorry - let me connect to my cluster.
A: I know what you're talking about: there's a br.yaml - or bar.yaml, something like that. There it is.
K: It's in the CSI adapter.
K: Yes - resources.
A: Okay, understood. So let's try to do that, then. Let us try to deploy the three projects and see how far we get.
A: So - Tejas, are you...
A: There - yeah, so you cannot do the short k8s.io form; you'll have to do the full path.
A: So that's something that needs to be fixed. Okay - so this is already running; I have the sidecar running, and I probably also have the controller running.
A: Hey, Chris - I don't want to forget, so I want to ask you for a favor: could you please add issues for the things that we find as problems here?
A: All right, so let me clean up what I already have - delete the buckets.
A: Okay, and let me open up the bucket request that I had used earlier. This is a very simple bucket request: it just has a name and a bucket class name, and the bucket class has a pointer to the provisioner and simply sets the signature version. So this is a very bare-bones kind of test. The test is just going to be: does it create a bucket, and then, after that, does it grant access to that bucket?
A: So I'm going to do kubectl create.
A: Yeah - and then it was supposed to be: if there's no bucket prefix on the BR, then it creates a bucket named br-something, right? I think that's what we said. Let's see.
A: Okay, so I'm looking at the bucket - Jeff, can you mute yourself, please? There's kind of a lot of echo. Sorry - okay. So, when we create the bucket name - here is where we create the bucket name. This is the logic: we first start with the bucket prefix.
A: If the bucket prefix exists, then we do bucket-prefix-dash-UUID. If the bucket prefix doesn't exist, we're supposed to do br-.
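The naming rule being read off the screen can be sketched with a small helper; the function name is hypothetical and the UUID is passed in for clarity, so this is not the actual controller code:

```go
package main

import "fmt"

// genBucketName sketches the intended rule: if the bucket request carries a
// bucket prefix, the bucket is named "<prefix>-<uid>"; otherwise the name
// falls back to "br-<uid>" rather than a bare UID. The uid parameter stands
// in for the UUID the controller generates.
func genBucketName(bucketPrefix, uid string) string {
	if bucketPrefix != "" {
		return bucketPrefix + "-" + uid
	}
	return "br-" + uid
}

func main() {
	fmt.Println(genBucketName("photos", "1234")) // prefix present
	fmt.Println(genBucketName("", "1234"))       // fallback case
}
```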
A: This would fix it - so this is another fix that needs to be made. Chris, are you making a note of this also?
A: All right, so coming back to the bucket: we have the protocol set correctly, and we have the protocol field set correctly, and the other things look good. Aren't we getting rid of the anonymous access mode?
K: Yeah, okay - so, cd sample - yeah, you see, so you can create a bucket, and it will automatically call the right sidecar driver and all, so that works, and you end up with a bucket physically created in MinIO. But then for the pods - the pod is supposed to do the bucket access request, right?
K: That doesn't work, so you have to do it by hand, and you also have to create the secret by hand. I suppose the developer creates the secret YAML in the meantime, because the development is not done.
A: Okay, so, this secret - Rob, when he was still contributing - I think there was an assumption that the secret should always be in the namespace called object-storage-system. I can't quite remember having this conversation, but it was hard-coded this way, and there was no check to see if the namespace already existed. We should not be doing this right now.
A: What we should do is create the secret in the same namespace as the sidecar, because what we've said is that, for every vendor, the admin can decide to run the sidecar in their own namespace, correct? Right. So, if that's the case, the secrets for that provisioner should also reside in that namespace, you would think, right?
A: That's what I thought all along, Sid - yeah, but the current implementation...
H: ...does that correctly, and then the node adapter - the COSI node adapter - basically writes that secret from the provisioner's namespace into a mount inside the pod, in the CSI driver. Right, right. That's how it is. Chris, where did you get the secret from when you wrote the node adapter?
J: It just uses the bucket access, I believe, to get a reference to the secret - if I remember correctly.
A: So what we did is: we came up with the concept of a principal. The principal was supposed to be the user ID for whom access is granted and from whom access is revoked - and this logic seems very weird to me. What it says is: earlier, we had this concept where an admin or user could ask for access for a specific user that already existed - but this was way back, and later on I was like - we...
A: I don't know what that exactly does - Rob wrote a bunch of this code - so this needs to be fixed, because what I can tell from here is that this update-principal step never happens. Once a bucket access gets created, that principal field is never filled in, which is what I saw. So essentially this should not be here.
J: Are you going to open a PR with this fix, or should I make a note of it?
A: No - yeah, just make a note of it, because I think there's more to it than just that. We need to test and make sure some other cases are also not messed up; and also, you know, this hard-coding needs to be fixed too.
A: Let me check if the bucket was actually created. We created a bucket object earlier, 10 minutes ago; I want to go see if the bucket actually exists. Okay, so MinIO is not running - so let me actually start MinIO.
A: All right, so this should have MinIO running - kubectl get pods, default namespace - yeah, there, it just started. Okay, and there's a service that points to that pod, so we should be able to start seeing the buckets being created there.
A: So let me see what the provisioner sidecar says: kubectl get ns, then get pods -n...
A: Okay - so it kept failing because it couldn't talk to the back end, and then about a minute ago, when we just created it, it successfully created the back-end bucket. Okay, so here's another problem - and I noticed this before: it calls bucket-create multiple times.
K: There you go - I saw it as well, but in my case it's using the same ID, so it works, because it's idempotent.
A: Yes - so I think I know why. So, again, with the previous implementation - make sure you make a note of this - what Rob did was generate the bucket name on the fly, and then do the find on that name. This logic is wrong. On a bucket request - when you get a bucket request object - he generates a name.
A: He generates a UUID for that bucket and then tries to create a bucket by that name. Then, say on a sync call, the same request comes in for whatever reason; he does a find on the name - he lists all the buckets to see if a bucket by the newly generated UUID exists. It's never going to exist, so it ends up constantly creating buckets - and if you notice, I think it happens exactly 30 seconds apart - one minute, 30 seconds apart, see - which is the same period.
A: Yeah, it's okay - we can fix it. I can push out the code for this, and next week, or two weeks from now, when we do the office hours again, I want to be in a position where the bucket access is what we look into - not this bucket creation anymore.
A: This should just work. So, Chris - and whoever else is here who's writing code - that's the goal I want to set.
A: To be honest, we've been changing the design for a while, so I'm not entirely sure. This is, I think, one of the best ways to make sure I get to see what's going on, so I'm trying it out.
A: Okay, so what does it say it does? So it should point to a bucket, right?
A: Yeah - this is good. Time is up. Thank you, everyone! Let's continue on Thursday. I want to go forward with the decision that we made on Jeff's proposal - so, Jeff, please update the KEP, and let's start the API review process again.