Description
Kubernetes Storage Special Interest Group (SIG) Object Bucket API Standup Meeting - 20 May 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A
Said that's always needed, and then just talk about how the things work instead of going into any details whatsoever. Just: what do you use the bucket request for? How does the user go from a bucket request to an actual bucket? How do they use it at the end of that? It will be a very small KEP at that point; there won't be like a thousand lines, and at that point it'll be a much cleaner discussion in terms of what the issues are.
A
What they are and what they're not. And like Ben is saying, first we'll have the discussion amongst ourselves and try to see if there's a better model for brownfield vs greenfield. Once we either figure out that what we have is the best, or that there's a better model, once we have that answer, I think we can go back to Tim and get things through.
B
The thing that bothers me is that there are five different things that are like that, where it's like: we know that what we have is problematic to somebody and we've got to fix it. But I would rather fix one thing, then get the person who had a problem with it to say "yeah, that looks good," and go around to all the different things one at a time, get them all addressed, and then come back once.
B
Rather than inside of some closed room where we're not releasing updated specs and docs and implementations and everything. Because a lot of the best feedback, and this is my experience from doing a PoC driver for COSI in our hackathon at NetApp, is that we needed to just sit down, write the code, use the existing sidecars and the existing CRD definitions, and try it, to realize: oh wait, bucket access classes are totally insane and we need to change them, right.
B
That wasn't obvious even being in years' worth of these meetings. You just have to do it, and unless we're constantly iterating and giving people opportunities to use it and try it out, we're never going to get their real feedback. It's always going to be just something they noticed from reading the doc, rather than from actually trying it out, and that's the valuable feedback, I think.
C
Especially when NetApp and other vendors want to expose the special properties of their object store, and we need to make sure that our API will support that.
B
We really don't want to do anything special. I mean, maybe someday, but it's far more important that the base thing is usable and will get acceptance, because no one at NetApp wants to write a COSI driver if there are going to be no users for it, right. There have to be users to make it worth our while, and so the highest priority is to make it something that actual consumers will consume.
A
We've got MinIO, and I think Red Hat has Ceph, so we have a few drivers, but they all just follow what we've prescribed. We need people to give feedback like Ben's team has, like Ben has given us. It was always confusing what we were going to do with the bucket class, in the sense of what we were going to do with the policy actions and config map, and Monday's discussion was fruitful because we got rid of it. We still have some unanswered questions; at least we can go ahead and answer them, yeah.
A
The reason I say that is because we can proceed forward without it. Alpha turns out to almost be a distraction for us, because we focus on that rather than just focusing on development moving forward, and that forces us to decide on shortcuts and wait on the approval before we move forward. I would rather design it right, and in order to do that, I think we answer all the questions, and leave open only the questions we simply can't answer.
B
I agree with that, and I was gonna say that some of the debates we end up having, like how to handle brownfield versus greenfield, really don't matter that much in terms of how you would write a driver or how an end user would consume the feature. I mean, they end up being relatively minor details, and whatever you choose, it's just different YAML that you have to use, but the workflow remains more or less the same, I feel like.
B
We spent a lot of time on some of those discussions because we didn't like the way the Kubernetes API was going to look, when it ended up not moving us forward. Whereas I think focusing on the downward API that pods will be consuming when you connect a bucket to them, and figuring out some of these questions about access control, the discussion we started having on Monday, those are going to be.
B
More important and useful than arguing about greenfield and brownfield and precisely how we represent it at the Kubernetes level, because we could get a fully implemented design and then Tim could say, "I hate your brownfield implementation, do it like this," and we could say okay and just do it, and it probably wouldn't change much. Whereas if you change the way the downward interface is going to appear to pods, that's gonna ripple through everything in the design, because it's far more impactful and central to what we're actually doing.
A
Yeah, absolutely, yes, all right. So let's actually answer that specific question: what are we going to give down to the downward API? And in terms of plan, Xing and Ben, I think, rather than focusing... so we will go alpha, we're all aligned with going alpha, but rather than specifically focusing on getting it through alpha (I think Tim would also understand this), how about we...
A
We focus on fixing the problems that are there in the KEP right now, and then, once we fix them, we reach out to Tim. I'm pretty sure it will very likely be way before any deadline for version 1.23, so it'll be like in the middle of the review cycle. I think that would be good, right?
D
Well, if you really want to get your KEP merged and approved by Tim, I would suggest you approach him early, rather than wait until we are.
C
Let's see, but Tim is a critical resource and it's hard to get his time, and Sid was really good at pinging Tim politely and respectfully and asking for time, and it came very late in the request.
B
I'm confused about this. Why? I mean, we already have a bunch of repos with the sidecars in them, and YAML definitions and controllers, and they're already being built and they're already being pushed to k8s.io. What do you mean by we can't do an alpha release?
D
You cannot do... well, actually, we cannot really push it up to the official repo right now, because...
D
This API review... I think we need to ping Tim more, get him to review this earlier.
A
So to answer your question, Jeff, what Xing is saying is we need to get to him as early as possible. I mean, we were pinging Tim for five weeks, so it's not like we went early, but what Xing is saying is we should be pinging him now, not even five weeks before the deadline, right, Xing?
D
I think it's going to take quite some time. If you just look at it: even though, as I was saying, volume snapshot, right, that's like half, even half of the size, and every time we went from pre-alpha to alpha to beta, they all went through big API reviews. It takes a long time, right, and this is a lot more complicated than that.
D
I don't know about that, because I haven't worked on clover, I have no idea. But this one, you are introducing like six new API objects, right? Just the size of that, that's a lot more, because I know it took forever for volume snapshot to get in. I know it took two years for the whole... no, but that's to GA.
B
So, but we're in that pre-alpha phase, and the thing I was trying to emphasize is that we can make a lot of progress without wrapping ourselves around the actual... I want Tim's feedback, we should ping him and ask him for feedback, but if he says "I don't like this and I don't like this," it doesn't mean we have to.
D
I'm not saying you stop changing whatever you're doing, you know, the downward part. I mean, that part is... I think it's good to have some... so you wrote a driver, you got experience, you know what's right and what's wrong, so I think that's good feedback. Definitely we should continue.
A
Okay, let's do that, and Ben, that would address your...
A
Right, I want to set the tone and language for this KEP, and then I'll have others also update it, but I want to set the tone and language so that it's very concise and clear. That's all I'm trying to do. But today, rather than even focusing on the KEP, I would like to focus on the downward API, because that's what we said we'll talk about.
B
And we should clarify what we mean, or maybe I should clarify what I mean by that, because it could be confusing. I'm talking specifically about what the pod sees, what you see from inside the pod, if you've created a pod that has a bucket attached to it. There's also the question of exactly how the pod API changes in the long run, like how exactly do you specify that you want a bucket to be bound to a pod, and what sort of fields become available in the pod?
B
Because you can do things like... certain fields in the pod become environment variables inside the container where the pod is running; those kinds of things might be interesting to define as well. But I really want to focus on: what exactly does the stuff that we provide to the pod look like inside the pod, so that we can make sure that we're providing that all the way from the top to the bottom.
A
Okay, so let's not get into how it's specified in the pod yet, because I think that will be two different discussions, yeah. So let me just put something here. I'm just using this to write things down right now; whether this becomes a section of the KEP or not is a different story, but let's just call this the downward API. So some of the requirements we had for this downward API were...
A
So it has to be a backward-compatible, managed API, and by managed I mean versioned and backward compatible and all that good stuff. Yes, I agree with all of that. Okay, now let's get into... I'm just setting the tone, I'm just setting the requirements for it, but now we can get into, well...
B
I think the first important decision is whether to allow opaque pass-throughs or not, and we might make one decision and revisit it later. But I'm kind of against it. I know at various times we said, well, if vendor X has some extension that allows them to do special thing Y, then they'll just have some extra fields in their creds file, or whatever the file ends up being called, to communicate that extra stuff. I'm kind of opposed to allowing opaque things to flow all the way through to the pod, because I think it harms portability too much.
B
Well, I think that one's debatable, because there are a lot of scenarios where it's expected that you just have a TLS certificate already. If it's a public cloud, no one hands you the TLS certificate for the public cloud; it's signed by the regular PKI authorities and you just trust it.
B
And I think it's also important that whatever we put in here is something that clients will unambiguously know how to interpret, right. Because if you put weird stuff in there, then it's like, well, my client knows what to do with that, but another client is like, what is this?
A
Don't reinvent any new things, that's what you're saying. Don't put in new stuff, don't create a new bucket API that has certificates in a weird location, or access key and secret key in some new way, rather than just the usual way.
B
Yeah, well, and I think this is one of the... yeah, we certainly want to use what already exists, if that's what you're saying. I do think that the minefield here is the different authentication mechanisms. So we've talked about S3 buckets with access keys and secret keys.
B
That's sort of a standard that more or less everyone supports, but then we've also said that there are these weird S3 access modes where you don't actually have an access key and a secret. What you have is some AWS token, from which you could, in principle, generate an access key and a secret, and like we...
B
That's called something else, and say that that's not what we're trying to do, so that it can be very clear that everyone who creates a pod, creates a bucket request, and binds them in Kubernetes knows that when the pod comes up, it's going to get that access key and that secret, and they can expect that across the board. And so then they feel comfortable that their development effort is worthwhile because it's portable across Kubernetes clusters. Okay, yeah, so I like the idea.
A
So the thing is, with an IP address you still always need the protocol. So it's not just an IP address, right, it's a URL or whatever.
B
That would be nice, but we have to be very clear about whether that's allowed or not. So maybe we should say the endpoint field is going to be a URL that could be http or https, might contain domain names or IP addresses or IPv6 addresses, and an optional port number.
B
Then give a bunch of examples of valid endpoints, so that people who are consuming these things know what they might get. And then, of course, we just tell the COSI driver implementers: you gotta supply one of these, and of course it should be obvious how to come up with the URL.
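For illustration only, a few hypothetical endpoint values of the kind described above (scheme, host as a domain name, IPv4 or IPv6 address, plus an optional port); these specific values are not from the meeting or the spec:

```yaml
# Hypothetical examples of valid endpoint URLs a COSI driver might return.
endpoints:
  - https://s3.us-east-1.amazonaws.com       # public cloud, DNS name, default port
  - https://objects.example.internal:9000    # on-prem gateway with an explicit port
  - http://10.0.12.34                        # plain HTTP to an IPv4 address
  - https://[2001:db8::42]:8443              # IPv6 address with a port
```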
B
The URL that represents your endpoint. Okay, so I think the four essential things are that endpoint, the bucket name, the access key, and the secret, right; you always need those four things. Sometimes...
A
Even AWS can ignore it. There is a default region, which is us-east-1; if you don't specify a region name, it's assumed that you're talking to us-east-1. But now what they've started doing, because S3 bucket names are globally unique across regions, is that if you hit a bucket, whether you put the region or not, it goes and finds it in a different region.
A
If you haven't specified any region. That's actually bad, because you might end up paying for cross-region traffic, which is about 10 cents a gigabyte, and that adds up very, very quickly. I think it's okay to... every implementation of every object storage system supports the concept of a region; the three that we support all support the concept of a region. It's up to the driver, up to the actual vendor, whether they want to support it or not, but we should have a region field.
A
So one thing is, we don't have to get into that business at all. We can keep things even more crisp if we said the bucket endpoint is the full URL to the actual bucket. S3, GCS, and Azure support path-based bucket access, in the sense that you can construct a unique URL to that specific bucket, which encodes the region and the bucket name and everything in it.
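As a concrete reference point (standard S3 addressing conventions, not something decided in this meeting), both common S3 URL styles encode the region and the bucket name in a single URL, shown here for a hypothetical bucket my-bucket in us-west-2:

```yaml
# Standard S3 addressing styles for the same hypothetical bucket.
pathStyle: https://s3.us-west-2.amazonaws.com/my-bucket
virtualHostedStyle: https://my-bucket.s3.us-west-2.amazonaws.com
```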
A
Whatever, bucket one. But now you don't have the region information... I'm saying you don't have to have the region information. The expectation is that the bucket endpoint is a unique identifier for the bucket; that's all that is needed to actually talk to the backend. If you were to use an S3 client or any S3 SDK and you wanted to talk to the backend, you need not specifically provide the region as a separate parameter.
B
Okay, well, rather than getting wrapped up on this specific thing, I think we need to make some statement about whether there's a field called region, or a field that's not called region. If there is, we need to have very specific requirements about what assumptions you can make about it, like: can it be empty? Do you always get something? And it doesn't matter so much what the details are; we want them to be right, but I'm saying we don't need to argue about it right now.
B
Maybe not, but we need to come up with the things that are required, and then the set of things that are optional, and specific details about what you can rely on from them, so that we can implement error checking. Like, let's say there is a region ID and let's say it can't be empty; then we would want the Kubernetes sidecar that is filling these things in to throw an error.
A
...region, then we change the spec and we add it, yeah. This is one of the reasons I'm very keen on putting a stake in the ground and then iterating on it, because the group of us here probably aren't going to get it 100% right. We can hopefully get really close and then get more eyes on it through the act of releasing something and getting people to try it and say, "wait, you screwed this up, you really need regions." Oh, okay, yeah.
A
Even for Azure and GCS.
B
No, no, but this is the other important part of the conversation: we need to have a concept of the protocol from top to bottom. From the time you create your bucket request from the bucket class, you should know what protocol you're expecting to get, and I think everywhere in the API...
B
If what you were expecting was S3, then we need to ensure that you get exactly what S3 was supposed to give you, and if you were expecting to get a GCS thing, then the set of fields you get will be different and they'll be exactly what you would expect to get from a GCS bucket. There might be a lot of overlap between them, but you should know if you're getting a GCS thing or an S3 thing, so that you can parse it accordingly.
B
Right, right, for now, yeah, S3. So there needs to be some higher-level sort of clue to the workload that tells it whether what we're giving it is an S3 thing or a GCS thing, because somebody might want to design a workload that can work with anything. There's no reason I couldn't have a pod that says, I don't care if it's a GCS bucket or an Azure blob or an S3 bucket, I will support all of them.
B
Just tell me what I have. That needs to be another signal that comes down that says, this is the protocol that we're telling you about. So it's again a discriminated union, well, at the golang struct level, but I'm imagining that what we're presenting to the pod is some file, and so maybe it's in the file name, or maybe the file is JSON formatted and there's a field that tells you what the protocol was. This is what we need to decide: what does it look like?
A
Right, we talked about this; Chris was working on it for a while, actually. One of the easiest ways we can do that is to have an init container that just translates a standard COSI spec into either S3 or whatever you want it to be, so that...
A
And that developer is really happy, right. Can we focus on the core of the problem now? Let's define what the downward API is, because when we start talking about how we'll deliver it, then other specifics come into question, like, what if you want multiple buckets, yeah. So I would rather focus on that to begin with. I understand the questions and the direction you're going.
B
It's also some intermediate data structure that gets passed down, and then how we present that might be manipulable with your pod YAML spec. Okay, so yes, at the top level there needs to be... well, okay, here's a question: if you want to support multiple buckets, which I think we do in principle, different buckets could have different protocols, right? You could have an S3 bucket and a GCS bucket both vended to the same pod.
A
Yeah, so the way we distinguish between buckets is... right now we use the volumes to say "this is the bucket we want." So the idea is... eventually I don't want it to end up as a volume, because it's not really a volume, but what we can say is, a bucket volume is mounted at a particular path, and inside each of those paths you get only one bucket. You don't get multiple buckets in one path.
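A rough sketch of the "one bucket per mount path" idea; the volume source, driver name, and attribute names below are hypothetical placeholders for illustration, not the agreed COSI pod API:

```yaml
# Hypothetical sketch: each bucket surfaces as its own volume, mounted at its
# own path, and each path contains exactly one bucket definition file.
apiVersion: v1
kind: Pod
metadata:
  name: bucket-consumer
spec:
  containers:
    - name: app
      image: example.com/app:latest
      volumeMounts:
        - name: logs-bucket
          mountPath: /data/cosi/logs     # exactly one bucket under this path
        - name: images-bucket
          mountPath: /data/cosi/images   # a second, separate bucket
  volumes:
    - name: logs-bucket
      csi:                               # placeholder volume source for illustration
        driver: cosi.example.com
        volumeAttributes:
          bucketRequestName: logs-request
    - name: images-bucket
      csi:
        driver: cosi.example.com
        volumeAttributes:
          bucketRequestName: images-request
```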
A
So, okay, we always call the bucket file bucket.json, and we have protocol-specific structures in there, like protocol s3 would have the following fields and protocol gcs would have different fields. It's JSON, and it looks like a proper Kubernetes spec if we did it this way.
B
So it's always going to be called bucket.json, you have some control over where it lands on a per-bucket basis, and then inside the JSON you get a protocol and, depending on what the protocol is, a couple of fields that are very strictly defined: what you're going to get, the limitations on what those fields can contain and what they mean, and nothing extraneous.
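A minimal sketch of what such a bucket.json could look like, assuming the protocol field acts as the discriminator and each protocol carries its own strictly defined block; the field names and apiVersion here are illustrative only, not the agreed spec:

```json
{
  "apiVersion": "cosi.example.com/v1alpha1",
  "kind": "BucketInfo",
  "spec": {
    "protocol": "s3",
    "s3": {
      "endpoint": "https://s3.us-west-2.amazonaws.com",
      "bucketName": "my-bucket",
      "region": "us-west-2",
      "accessKeyID": "EXAMPLEACCESSKEY",
      "accessSecretKey": "example-secret"
    }
  }
}
```

A gcs variant would set "protocol": "gcs" and carry its own, equally strictly defined set of fields in a separate block, which is what lets a workload parse the file unambiguously.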
A
So we could essentially represent them here itself. If you look at kubeconfig, it has the concept of a cluster, the cluster's IP address, and multiple clusters can be defined, and each one has a certificate associated with it, the PEM file, and that's just base64-encoded and put in there. We could do something like that.
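For reference, this is the kubeconfig pattern being described: each cluster entry carries its server endpoint plus an inline base64-encoded CA certificate (values shortened here):

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: prod
    cluster:
      server: https://10.0.0.10:6443
      certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t...  # base64-encoded PEM
  - name: staging
    cluster:
      server: https://10.0.0.20:6443
      certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t...
```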
B
So the reason I feel weird about certificates is because I know that developers already have to deal with this in other contexts. If you're interacting with some REST API from your pod, and that REST API is hosted on a private server, nobody helps you get the certificate; it's your job as the pod author to just slipstream the certificate in somehow.
A
Yeah, use a secret: mount the certificate as a secret, and then, you know... but in...
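The usual pattern being referred to, sketched with hypothetical names: the CA certificate lives in a Secret and is mounted read-only where the application expects it:

```yaml
# Sketch of "mount the certificate as a secret" (names are illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: object-store-client
spec:
  containers:
    - name: app
      image: example.com/app:latest
      volumeMounts:
        - name: object-store-ca
          mountPath: /etc/ssl/object-store   # app points its TLS client at ca.crt here
          readOnly: true
  volumes:
    - name: object-store-ca
      secret:
        secretName: object-store-ca          # contains ca.crt for the on-prem endpoint
```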
A
The client should not... yeah, that's true. The client should not be making decisions based on this spec about which certificate to use; the certificate should just be loaded into the cert chain of the host, which will be the container in this case, and it should just work when you make the calls.
A
So isn't it weird to ask the driver to give back certificates? Because the driver itself shouldn't have to know about certificates. Certificates shouldn't have to be a part of the data flow of COSI. They should just be something that's present to ensure that the communication between COSI and the backend, and the communication between the workload and the backend, are secure. That's all certificates are needed for, so...
B
Yeah, I have mixed feelings about this, because I can see it from both sides. It can be a pain in the butt to deal with getting certificates into the right place, but I think it's a pain that people already have to deal with pretty widely, and I'm not sure that us... I mean, we'd be shifting the pain onto the driver authors to always know what certificate to send down, and that's not an easy problem to solve on the driver's side.
A
So here's the thing... that's not true, certificate injection is itself portable, so the problem you're describing doesn't exist. Hey, by the way, we're almost running out of time, but Nicholas, please finish your thought, your response to whatever I said, and then let's quickly decide where this API will reside. I mean, we have to decide... we'll talk about it and then we'll call it the end of the meeting today. But yes, Nicholas.
G
We should continue at some other point in time. I have some use cases in mind for our customers, which is all on-premises, which means self-signed certificates everywhere, and that flow should be simple, and I'm not sure we are currently achieving that goal.
A
...itself, but we'll get to it, yeah. I'm coming from the point of view that maybe we could have a separate file for things like this that are more like connection-specific information, or maybe we could just offload it to the pod, but if needed, obviously we'll add it here. It's not a problem.
A
Yeah, we'll just understand that better in the next meeting. So for this API, I think we should have it in our central API repo, wherever we store all the other COSI APIs. And yeah, we're almost out of time, so if you have anything to bring up or say, please do so; otherwise we can continue on Monday.
B
I'd like to keep iterating on this part of the spec, and then once we have something we're happy with, work that backwards through the gRPC and into the YAML for bucket access classes and bucket accesses, because I think it all ties together.
B
Because this, I think, is the part of the spec that needs the most work, and once we get this right...
B
Yeah, yeah, this is it. This is at the core of what we're doing; the brownfield stuff is just a little bit of syntactic sugar on how you import them or create them.