Description
Kubernetes Storage Special-Interest-Group (SIG) Object Bucket Review Meeting - 29 October 2020
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
B: Okay, so good morning, everyone. I want to start by talking about the next milestone we've decided on and where we are in terms of the milestone's progress. There was also one issue we were discussing last week where we wanted to go back, do some research, and come back with more data to continue the discussion. I want to continue that discussion now that we've done the research.
B: So let's start with the milestone. We wanted to show a demo. The demo will include the simplest COSI use case, which is provisioning a greenfield bucket, provisioning bucket access for the greenfield bucket, and then providing that bucket to a pod for a workload.
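A minimal sketch of that greenfield flow, assuming BucketRequest/BucketAccessRequest-style CRDs as discussed in the COSI KEP at the time; every group, kind, and field name here is hypothetical, and the APIs were still in flux:

```
# Hypothetical greenfield flow: request a bucket, request access to it,
# then mount the minted credentials into the workload's pod.
apiVersion: objectstorage.k8s.io/v1alpha1   # assumed group/version
kind: BucketRequest
metadata:
  name: image-store
spec:
  protocol: s3
  bucketPrefix: img-                 # greenfield: the final name is generated
---
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketAccessRequest
metadata:
  name: image-store-access
spec:
  bucketRequestName: image-store     # mint credentials scoped to this bucket
---
apiVersion: v1
kind: Pod
metadata:
  name: uploader
spec:
  containers:
  - name: app
    image: example.com/uploader:v1   # placeholder workload image
    volumeMounts:
    - name: cosi-credentials
      mountPath: /home/ubuntu/.aws   # where the S3 SDK looks by default
  volumes:
  - name: cosi-credentials
    secret:
      secretName: image-store-access-creds   # assumed to be minted by COSI
```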
B: One of the properties I wanted for this demo was that the quality of the code and the quality of the demo would be as close to production quality as possible at this stage. What I mean by that is, we want to follow the best coding practices and have unit tests and integration tests in place before we show the demo, so that when we do show the demo, we know the feature is actually in place and people can start trying it out.
B: In terms of the progress for this demo: as of last week (I've got the dates mixed up, but as of last week) we were still working on the sample provisioner, and we were implementing the bucket-access create-credentials part for our sample provisioner. This week we've finished that. Another thing is, we've added more unit tests for the sidecar controller, and we are continuing to add more unit tests for the other components as well.
B: For those who are looking to go through the code or would like to subscribe to updates on the coding progress, these are the repo names; they all fall under the kubernetes-sigs GitHub organization. All right, moving on to the next step: the issue we were discussing last week was about credential minting.
B: So let's go into credential minting. When we provision access for a bucket, we do it in one of two ways: we either mint credentials, which are an access key and secret key or some sort of token, or, the second way, we provision access for a service account that the workload is associated with.
B: In this case I'm only talking about the minting-credentials part, where we create an access key and secret key, or a token, so that the workload can authenticate itself to the backend.
B: What happens is, we can mint the credentials and put the credentials into the pod. Here I've taken the example of S3, and I've given the path for the credentials as /home/ubuntu/.aws/credentials.
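As a hedged illustration, the minted access key and secret key could be projected into the pod as a Secret; the Secret name is invented, but the file body follows the standard AWS credentials file format:

```
apiVersion: v1
kind: Secret
metadata:
  name: image-store-access-creds    # hypothetical, written by the provisioner
stringData:
  # Mounted at /home/ubuntu/.aws, this key becomes the "credentials" file
  # that the S3 SDKs read by default.
  credentials: |
    [default]
    aws_access_key_id     = AKIAEXAMPLEKEYID
    aws_secret_access_key = exampleSecretAccessKey
```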
B: Yeah, that too. I was going to bring that up in the context of multiple buckets and bucket discovery, but yes: even if you have this credentials file, there needs to be a mechanism to let the workload know the name of the bucket, because when the workload starts, there is no guarantee that the bucket already exists, and if the bucket doesn't already exist and COSI is creating it, the name of the bucket is not known up front. It can't possibly be specified in the pod spec.
B: I'm saying that it can't be specified in the pod spec up front. It needs to be discovered by the workload somehow. Yeah, okay.
B: Yeah, bucket discovery: we need a mechanism to let the workload know, this is the name of the bucket.
B: It could be in the same file, okay. So going to the next scenario, and we will get to bucket discovery and certs: the next scenario is if you have multiple buckets with multiple providers.
B: In this case, each of the providers can have their own default paths, so for something like AWS it is .aws/credentials, and for Google Cloud it's .gcloud/config.json.
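A sketch of that multi-provider case, with one mount per provider at its default path (volume and Secret names invented):

```
# Pod spec fragment: two buckets from two providers, no path collision.
containers:
- name: app
  image: example.com/app:v1
  volumeMounts:
  - name: aws-creds
    mountPath: /home/ubuntu/.aws      # provides .aws/credentials
  - name: gcloud-creds
    mountPath: /home/ubuntu/.gcloud   # provides .gcloud/config.json
volumes:
- name: aws-creds
  secret:
    secretName: s3-bucket-creds       # hypothetical
- name: gcloud-creds
  secret:
    secretName: gcs-bucket-creds      # hypothetical
```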
B: So in this case there is no conflict of paths, and the client for each of the providers would look for its individual configuration or credential information in its default path. We expect it to just work. Now going to the next scenario: if we have two different buckets from the same provider.
B: Now, if we were to follow the model that we'd originally discussed, one of the problems we'll run into is that if you are trying to use the default directory for the provider, then we'll end up with a conflict.
B: You could say one of the ways to resolve this conflict is to come up with a merge strategy, but I think, based on our discussion over the last two meetings and just being able to reason about it, that makes things more complicated, and it's worth exploring a simpler approach here.
B: There are a few problems that kind of come up when you see this. One is, we don't know which bucket's credential resides where. So other than the merge conflict, we have a few problems.
E: It's just partial. And we even chose to follow a model where every bucket access request has its own credentials, which means that we don't assign a single identity per pod and then use that somehow; we kind of delegate to the pod multiple identities, one per connection it needs to make. And AWS credentials do have a model for that, it's called profiles, but I'm not really sure most applications use it this way today, so.
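For reference, named profiles are a real feature of the AWS credentials file format; mapping one profile per bucket access request, as sketched below, is exactly the open question (profile names invented):

```
# One shared credentials file, one profile per bucket access request.
credentials: |
  [default]
  aws_access_key_id     = AKIAEXAMPLEONE
  aws_secret_access_key = secretForBucketOne

  [bucket-two]
  aws_access_key_id     = AKIAEXAMPLETWO
  aws_secret_access_key = secretForBucketTwo
```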
B: So bucket discovery is still an issue: the workload still doesn't know what the name of the bucket is.
E: But what if the workload provides a path for every bucket access request that it binds into, and that bucket access request binds into, say, a credentials file in each of those paths?
E: Each one is a directory, and then you have a credentials file in each one, and a config in each one, and maybe, if needed, even a JSON file for Google Cloud Storage and things like that, right?
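Sketching the per-request layout being proposed (all paths invented):

```
# One directory per bucket access request, so same-provider buckets
# never collide on a default path:
#   /cosi/bucket-one/credentials    S3-style access key / secret key
#   /cosi/bucket-one/config         region, endpoint, ...
#   /cosi/bucket-two/credentials
#   /cosi/bucket-two/config.json    e.g. a GCS-style JSON key file
```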
E: Yeah, exactly, yeah. But then we will have to adapt some applications to this layout, because they will not guess it, unless you have a single bucket and you just mount it to your home directory's .aws.
B: Yes, so just like you mentioned, we were thinking of three different predefined files. One for credentials, so in the case of AWS it would be called credentials, but in the case of GCS, Google Cloud, it would be called config.json, and appropriate names for the other providers like that. And we wanted to have a third file for certs: in the case of client TLS we'll need a client cert, and a CA if it's self-signed.
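A hedged sketch of that third file (or files) for TLS, extending the per-request layout sketched earlier (file names invented):

```
# For a self-signed or mTLS endpoint, the provisioner could also project:
#   /cosi/bucket-one/ca.crt         CA bundle for a self-signed endpoint
#   /cosi/bucket-one/client.crt     client certificate, if the store needs mTLS
#   /cosi/bucket-one/client.key     matching private key
```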
B: Not in the case of, say, S3; the credentials file is pretty narrow the way it's defined for S3, yeah.
B: So there are use cases where client certs still arise. There's something called AWS Snowball, which is something I've used. What they do is, if you want to transfer a lot of data from S3 to outside of S3, say a single petabyte, you couldn't do it over the internet: you can get a maximum of a gigabit link to S3, and it takes a really long time to transfer a petabyte of data.
B: Yeah, it takes days or months, I think. So anyway, what they do is they ship a box, like a suitcase, which has four AWS instances in it. It's real.
B
It
has
the
same
experience
as
as
you
would
use
you
know
the
aws
cloud.
Now
it's
got,
it's
got
all
the
bits
there
for
orchestrating.
B: You know, EC2 instances and creating S3 buckets, and you can connect your own 100-gigabit physical link from that box to your data center and transfer the data out. This box gets shipped within the week, and you can ship it back to them; it comes with the label and everything.
B: Yeah, I've actually done that petabyte transfer once.
B: No, this one can only do 80 terabytes; they'll ship 12 of them.
B: Okay, so actually it can only do 75 even though it says 80. Anyways, getting back to this: so in that case, you end up.
B: You know, using self-signed certs. And when you do that, you'll have to provide CAs or client certs for something like S3.
B: The discussion is: can we assume that the credentials file will always have something to specify certs?
D: Well, my only assertion was, if a client cert is necessary to authenticate to the server, then it's implicitly part of your credentials. And it sounds like, in the case of this Snowball thing, there's a modified version of S3 that involves a client cert, and so we should conceptualize that as, like, a separate protocol where the client's cert is part of the credential. It feels like that to me. Again, I don't know the details, and I don't know how common that is, or how many other changes they've made.
B: The cert also being used for authentication, I believe that's possible, yeah, but I always assumed they had a separate token for authentication and authorization, and the certs were simply used.
E: It's not even related to S3; it's relevant to any REST client, right, yeah, any HTTP.
C: Yeah, I agree with Ben here. I think having it as a standalone protocol makes sense. We definitely should support it, but there's no need to kind of muddy the standard S3.
A: Yeah, sorry, this would basically impact every single on-premises S3 installation where the TLS cert of the S3 endpoint is not signed by one of the well-known certificates that may be part of the container that is running in the pod and running the S3 client.
D: The CA cert needs to be passed down so that the client knows it can trust it, and I think almost any client in the world would know how to deal with that. But this, like, you have to use a specific client cert: I would imagine most SDKs and stuff don't know how to just take in a client cert and use it for all of their HTTP operations.
A: A client cert will also need a key, at least, so you'll have.
B: Yeah, yeah.
E: So the credential, the format of the file, will be designed to match the SDK, the S3 SDK, because that's the protocol, right? That's by design. Yes, right, that's the meaning.
B: Yeah, I think so. So there is a config file which needs to hold things like the default region, right, right.
E: I mean, I think it's all the possible configurations for the client SDK that the provider can provide, and that, of course, the application can override. But it's configuration. And there's something else: so you're saying that the rest will be in a bucket JSON? That is, the information that every application needs to read about the bucket, the name and endpoint, will be in that bucket.json?
B: Possibly, because stuff like version could be useful for clients that support multiple versions.
B: The other thing is that none of these providers seems to provide a way to specify an endpoint, so this has to be in our own file. So let me ask, I wanted to.
C: Take a step back, if I could. What this looks like is yet another API, right? We've got two APIs so far: we've got the spec to interact with the driver, and we've got the Kubernetes API to interact with the end user.
C: It sounds like there's definitely an existing API that you're trying to fit into, but there are potentially going to be some customizations for COSI, or additional pieces of information, right? And so, for that reason, what are those differences? That kind of stuff definitely needs to be captured. So if I'm a workload, and say I select S3 or I select GCS through COSI, what is my expectation? What is the contract that I should depend on, right? I think that's what.
D: But to address the concern about existing applications and fitting into an ecosystem that already exists: that's why we should do things like call the credentials file "credentials" if it's S3, and format it like an S3 credentials file is formatted today, so that minimal change to existing applications is needed. But you're not going to get around having to read a bucket.json; nobody reads that today, because it doesn't exist. That's just going to be a thing that you'd have to start doing.
B: I mean, so we did make an assumption last time when we spoke about it. If there is a workload that we want to port over to COSI, and we expect that there is no developer to change the application itself, we're saying that is possible in the case of brownfield buckets, because we know the names of the buckets up front.
B: So it's still possible for the brownfield use case, the static brownfield use case. And what do you all think? I mean, that's the one sort of turnkey porting that we support, where without changing the application you can simply switch to COSI, if it's a brownfield, statically provisioned bucket, yeah.
B: Okay, so coming back to the discussion about which fields need to be where: it's clear that we need something like bucket.json now.
B: I think these are the three fields that, I mean, to start with, I think these three fields are absolutely necessary. Now that we've talked about calling it an API, we obviously need an API version for it. Should we make this a Kubernetes object, you know, something that satisfies runtime.Object?
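The slide itself isn't captured in the transcript, but from the discussion the file would carry at least the bucket name, the endpoint, and a protocol version, plus the API version just proposed. A hedged sketch, with all key names invented:

```
# Hypothetical bucket.json, shaped like a Kubernetes object as suggested:
{
  "apiVersion": "objectstorage.k8s.io/v1alpha1",
  "kind": "BucketInfo",
  "bucketName": "img-store-3f9a",
  "endpoint": "https://s3.example.internal",
  "protocolVersion": "s3v4"
}
```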
B: Also, the other thing is, none of the APIs has a field for endpoint as of right now.
B: Not just that: in the Bucket* APIs that we've defined, last I checked, we don't have a field for endpoint.
B: Yeah, so it would be a field under protocol, but it wouldn't be set by the user; it'll always be set by COSI. That's kind of how I'm thinking of it. Again, this discussion is just starting: what do you all think about that?
B: I wouldn't assume that. Okay, fair enough. We checked with Azure and Google Cloud, and obviously S3.
E: I think the structure we defined for it was that we have an archetype, like, this is the protocol, and then we have flat keys for how the protocol would parse the parameters, right? We didn't stack everything below the protocol, because we said, okay, this is like a type, and then the protocol itself knows how to use the information there, right?
E: I don't think we should really push everything to be under something; I just think the protocol needs to be like the first switch, the case statement between them, yeah.
B: Yeah, yeah, PVs actually have every volume type as a field, right?
E: You'd have like an s3 key, and it will be an object, right? You'll have a type, s3, and then "s3:" and an object, and then all.
B: No, I think we're talking about the protocol config itself, the config of the protocol itself, to answer Ben. That's how we've already defined it in the API.
B: Any other ideas for how else, like, I was suggesting the endpoint in there. I'm just looking at every option that's available to us, so we can understand.
B: So when you say a union struct: we already have a structure, we already have a definition for the individual protocols. Why don't we just mirror that here, and we can have a common type, like a Kubernetes bucket config or some type like that, that's common for all the different protocols? So as far as the type definition is concerned, this protocol field would be just the same as what's defined in the other bucket APIs.
D: So the first thing is protocol, which is, yeah, the s3 protocol, and then the fields from the protocol here, right? And that would include the endpoint, and the version, and the bucket, and maybe the region, you know, whatever, yeah.
E: Then three optional keys: s3, azureBlob, or gcs.
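A sketch of that union-struct shape, mirroring how PersistentVolumes embed one field per volume type (field names invented):

```
# Exactly one of the optional protocol keys would be set:
protocol:
  s3:
    endpoint: https://s3.example.internal
    region: us-east-1
    signatureVersion: s3v4
  # azureBlob: { ... }
  # gcs: { ... }
```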
B: So, like, bucket and endpoint would be common, and name and version would be under s3. That's, no, name and version would also be common.
B: So this was my question yesterday, I was asking in Slack too: should we push this information, like endpoint, under the protocol-specific fields?
D: You can validate and say it must be one of these known values that are s3-specific, and then, if it's not one of those supported values, you can check it. If you leave it at the higher layer, validation gets a lot trickier, because s3 versions might be like s3v2 and s3v4 and s3v9 or whatever, but GCS might have a totally different versioning scheme with a different algorithm. So it just feels that it's.
B: Clear. So yeah, version is kind of a confusing field, actually, because of how we were thinking of it, and a few others, and you know, we also had this same question here. Sometimes, if the version field is not specified by the user, COSI would fill it in for you, because the provisioner, we assume, chooses one of the protocol versions as a default, like s3v4, and then fills it in.
B: So you're saying, regardless of which version it is, all of these calls are supported, like, regardless of the signature version?
B: Okay, locking up your versions, if you're using numbers to keep track of API compatibility. Okay, again, is this not for s3? Yes, of course it is. So, current API version, for s3, it says we recommend locking the API version for a service you rely on for production code, as this can isolate the applications. Okay, so, to lock the API.
D: Right, to avoid confusion, why, in this PowerPoint, or whatever, this slide? No.
E: It was taken as an assumption when you, you know, when you design the workload. That's the point, right? That's what.
B: Or we could say, when we start the provisioner, we do a get-info call, and it's supposed to provide us a default version.
E: In any case, I would probably clarify it by renaming it to apiVersion, because that's the s3 one.
B: Yeah, I think we didn't need it as of last week's check, but you know, that might change.
E: So, one way or another, you can separate between AWS and S3, and you know, it might feel artificial to some, but for me it makes more sense to separate between the ones that are services, like GCS, AWS, and Azure Blob, versus the ones that will have to have an endpoint, and, you know, things which otherwise don't work. And in this case, with the endpoint under s3: AWS doesn't have to provide an endpoint, actually, it only has to provide a region, so, yeah.
B: Region being there, well, actually, again, it could be some other implementation of the S3 standard, like Ceph or MinIO, or.
E: Yeah, and that's what I'm saying: S3-compatibles will have to have an endpoint but will not have to have a region, and AWS will have to have a region but has an optional endpoint, right? So I think it might make sense in some ways to separate between what is an AWS-style service, like Google and S3, but I'm not sure if you feel the same.
B: I think region, you know, all the implementations of S3 support region, and all the S3 SDKs support giving an empty region, which assumes the default region. Endpoint, again, even with the S3 SDK, is optional; but if you're using your own cloud, for instance, you have to provide the endpoint.
B: We moved region and endpoint down into the specific protocol.
B: And bucket as well, right. Now, we were talking about endpoint and region, saying that endpoint is needed only if it's non-AWS S3, and region is needed only if it's AWS S3. In the sense of AWS S3 versus S3-compatible services: we're saying endpoint is only needed if it's an S3-compatible service, and region is needed only if it is AWS S3 itself.
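Illustrating the two cases just described, with the same invented field names as above:

```
# AWS S3 itself: region is required, the endpoint can be defaulted.
protocol:
  s3:
    region: us-east-1
---
# S3-compatible store (Ceph, MinIO, Snowball, ...): endpoint is required,
# region is often ignored or defaulted.
protocol:
  s3:
    endpoint: https://rgw.example.internal
```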
B: The reason we were bringing the argument up was to see if we needed to create two different structures here, like s3 and s3-compatible, but I think that's not needed.
E: It's, I think, isn't it defined in Bucket? I see, now it's, like, inside. No, I mean, it's like bucket.spec.protocol, that's where you define it now.
D: Has anyone ever given you an actual blob of YAML, what they'd look like?
B: Going back to what Saad was saying, and I kind of agree with this: because the protocol itself is basically going to be an API, whatever we define here is going to be an API contract for all the different providers that implement a particular protocol.
E: But we are a marketplace between providers and applications here. I mean, if we define that this is a good structure for the provider to provide the information through, isn't it, I mean, should we recompile it differently? I'm just trying to find what we are trying to do by creating another API which replicates the entire structure, then.
B: No, let's say, for the same API version of s3 and the same signature version of s3, we might end up having new fields added. Okay.
B: Absolutely, yes. So the contract between the workload and COSI itself is not directly tied to the bucket. It is in a sense, it's indirectly tied, but not directly. What I mean by that is, there should be a version here which should exist as a contract between the workload and COSI, just that.
E: But if you remove a field, you'll have to bump the version or something.
B: Right, so take this example of porting between two cloud providers: I'm moving from GCS to AWS. Let's say the admin creates buckets in GCS first, the old infrastructure, and they use this definition; let's say both support s3, hypothetically, and let's say GCS.
B: There you go, okay. So you have these five fields, and now I move to AWS, and on AWS it's an older version of the protocol, so the application, which is expecting five fields to be there, would only find four, let's say, and that would be an issue.
C: But if we have yet another object, what would the difference between the bucket access object and that new object be? And if the new object evolves, or the bucket access evolves, is there ever a case where they would evolve in two different ways? It sounds like probably not, because the purpose of the bucket access object is to be this intermediate API.
E: It's actually spread, yeah. I'm saying, I think it's a little bit spread out, and I'm not sure how that would be convenient, because there are multiple things here: the protocol comes from the bucket, and in the bucket access there's a reference to the secret, where we need to pull in the credentials, right? So either way, this API will not be.
D: I think that if there were any evolution of this object, with these fields, like say, for example, we, I don't know, wanted to change the spelling of one of the words for some reason: in order for clients and providers to both be compatible, something would have to do translation at some point, right? Like, let's say you misspelled the word bucket, so we've got to add a new field with the correct spelling of bucket, for backwards compatibility reasons.
D: You have to continue supplying the old, bad spelling and the new spelling, and you have to copy the value over, and then you have a deprecation period for the old one, and eventually it goes away. But there's some period of time where something is providing a translation between old providers and new clients, or new providers and old clients.
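A sketch of that deprecation window, using the misspelling example (keys invented):

```
# During the deprecation period, providers populate both keys with the
# same value so old and new clients keep working:
bucktName: img-store-3f9a     # old, misspelled key, kept for old clients
bucketName: img-store-3f9a    # corrected key; the old one is removed later
```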
C: All right, but does that need to be a standalone new object, or can it be the existing API machinery for the existing bucket access object?
E: What will we say, will we say that this structure comes, I mean, because I understand where you're coming from: you wanted to have a CRD with a spec that I can always inspect, and understand what I expect to see in that file, right, or something like that. JSON, that's exactly it, yeah. But now, when I'm saying this is the same, maybe, maybe even if it's the same, we need to share it in the, kind of, Go types system, yeah, but eventually still create.
E: I'm not sure if it's very important, but how do you, you can just put it in documentation and generate it, or something. But then we probably want this to be somehow tied to these structures, saying this protocol comes from the bucket, and this, you know, if there's something else that we need to bring, like the bucket. Well, actually, we said that the bucket name is already inside the protocol, so maybe that's the entire information we need: just the protocol information from the bucket, and the credentials, like another.
E: Information for credentials and configuration, maybe, and that's it, and then we're saying it's a collection of these. Well, maybe configuration, for example, will probably be below the protocol, like we just said, right? That's the configuration, somehow, but for.
B: Yeah, that's true, all right, yeah. We can continue this; just checked the time, yeah, so we can continue this on Monday. I think this is a good discussion. I'll try to compile this down into just the points that we decided on, and we go from there.