Description
Kubernetes Storage Special Interest Group (SIG) Object Bucket API Design Meeting - 04 November 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Sidhartha Mani (Minio)
A
Yeah, so we're trying to figure out IAM-style authentication, or at least have a clean story around it, before bringing it to them again.
B
Okay, yeah, I think there were some deadlines. Let's see, yeah, the code freeze deadline is coming. What is that, code freeze, November 16th? The 16th, and then there is this deadline, November 23rd, that is basically for us to submit some entry to show up in the major things for the 1.23 release. So let's see if we can get it in before that, because otherwise we can't.
A
Yeah, I agree. Okay, so let's try to hit that deadline. We have about, what, 12 days now still.
A
Yeah, okay, so let's start off today. In terms of the overall plan, if we figure out workload identity based authentication, if we have a clean story around it, I think things will move very fast, because that was the big thing that Tim wanted addressed. It seemed very important for Google Cloud, and I would presume other cloud providers would say the same thing.
A
We've mostly had vendors, non-cloud vendors, making progress on this, and for us all, I would think access key / secret key based authentication is the easiest step. Sometimes we run in air-gapped environments.
A
These tokens are good enough for us, but in places like Google Cloud, where it's public, I can understand why they would be more focused on more advanced types of authentication. So yeah, let's get this figured out, and we'll get back to Tim again, and we should be able to make it this time. All right.
A
So I spent some time trying out workload identity based authentication, and, let's see, yeah, it's very straightforward. It works the way we guessed it works last week when we were talking. So what I would like to do is explain how this works, and I think this will help us model the API and answer some of the questions we had last week much better.
A
So what Kubernetes does, or what GKE, Google Kubernetes Engine, does is: first, you have to enable something called Workload Identity, and they recommend that all Kubernetes clusters run by GKE enable this if there are workloads that talk to Google Cloud itself, for example to buckets. What it does is it creates an IAM account for every service account that you create.
A
So there's a lot of text around it, but what I just explained is basically what happens. Okay, so once you enable it, I want to quickly show you something. By default it's gce-metadata; otherwise it's gke-metadata. What this means is there is something called a metadata service that runs whenever you're running in the cloud. You can query the metadata service, and the result is given according to who the source is that's querying it.
A
So if I'm querying from node one and I ask for the node name, it'll say node one; if I'm querying from node two and ping the same API for the node name, it'll say node two. So when you enable Workload Identity, the workload metadata gets set to something called gke-metadata.
A
If not, it gets set to gce-metadata. This gke-metadata is important if you want to do workload identity style authentication: gke-metadata is the metadata service that understands the mapping between service accounts in Kubernetes and IAM accounts in Google Cloud. Okay, so that piece of information is important, even though it's slightly tangential.
A
So once it's created, what you can do is, let me see, yeah: you can just take a service account that you created, provide it to a pod, and the pod can query the metadata service.
A
So yeah, you get service accounts, and all the GKE or Google Cloud SDKs already know how to work with this. If you take any of the SDKs for talking to Google storage buckets, they would just be able to talk. If you already map the service account to the IAM account for the bucket, then any SDK that's built by Google will just know how to use this. Yeah, this is what the metadata service looks like.
A
These are the APIs it supports. So if we were to create a pod, and using that pod you were to query something like this, any of these APIs, you would get the right account tokens, the token to actually talk to the backend.
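The query being described can be sketched as below. This is an illustrative sketch, not part of COSI: the host and token path follow the standard GCE/GKE metadata-server conventions, and which token comes back depends on the identity of the caller (the pod's Kubernetes service account, under Workload Identity).

```python
# Sketch: querying the GKE metadata server from inside a pod.
import json
import urllib.request

METADATA_HOST = "metadata.google.internal"
TOKEN_PATH = "/computeMetadata/v1/instance/service-accounts/default/token"

def token_request(host: str = METADATA_HOST) -> urllib.request.Request:
    """Build the request for an OAuth2 access token for whatever
    identity the caller runs as."""
    # The metadata server rejects any request missing this header.
    return urllib.request.Request(
        f"http://{host}{TOKEN_PATH}",
        headers={"Metadata-Flavor": "Google"},
    )

def fetch_token(host: str = METADATA_HOST) -> str:
    """Only works inside GCE/GKE, where the metadata host resolves."""
    with urllib.request.urlopen(token_request(host)) as resp:
        return json.load(resp)["access_token"]
```

Google-built SDKs do this probing internally, which is why they "just work" once the service-account mapping exists.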
A
There is one limitation, I can go over it, but basically the access model is: you have an IAM account, and you enable Workload Identity to map that IAM account to a service account in Kubernetes, and if a pod runs with that service account, it has access to the backing bucket.
C
Hey, sorry, I came in late. I did have a question in terms of provisioning. I think that means when you provision a bucket, you need to provision it using that identity as the owner of the bucket.
D
C
A
Right, no. Bucket lifecycle and identity lifecycles are independent, so a bucket need not be provisioned with an owner. The organization under which the bucket falls, that's the owner of the bucket.
A
Right, so there has to be an admin user, a user with admin-level permissions. It can be a service account or it can be a real user, and they would have admin permissions to update and create IAM roles and IAM identities. They would have to create a service account in Kubernetes, and there is a way Google Cloud provides to map this Kubernetes service account to the service account in Google Cloud, and once that mapping is done, it's somewhere here, yeah.
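The mapping being described has two halves: an IAM policy binding on the Google service account, and an annotation on the Kubernetes ServiceAccount. A minimal sketch follows; the project, namespace, and account names are illustrative assumptions, but the member-string format and annotation key are the standard GKE Workload Identity ones.

```python
# Sketch of the two artifacts GKE Workload Identity needs.
# All concrete names used below are made up for the example.

def iam_member(project_id: str, namespace: str, ksa: str) -> str:
    # The principal that gets roles/iam.workloadIdentityUser granted
    # on the Google service account.
    return f"serviceAccount:{project_id}.svc.id.goog[{namespace}/{ksa}]"

def annotated_ksa(name: str, namespace: str, gsa_email: str) -> dict:
    # Kubernetes ServiceAccount manifest (as a dict) carrying the
    # annotation that tells the GKE metadata server which Google
    # service account to impersonate.
    return {
        "apiVersion": "v1",
        "kind": "ServiceAccount",
        "metadata": {
            "name": name,
            "namespace": namespace,
            "annotations": {"iam.gke.io/gcp-service-account": gsa_email},
        },
    }
```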
A
E
So to be a bit clearer: COSI does not have a role over here. It should.
A
Be
managed
by
admin:
well,
no,
okay,
so
that
is
also
so.
Let
me
go
over
the
lifecycle,
so
bucket
provisioning
will
happen
through
cosy
where
it
will
create
a
bucket
cosy
in
order
to
run
it.
It
needs
some
privileges
to
do
it
and
we
expect
cozy
to
run
as
an
admin
that
can
create
buckets
and
also
you
know,
provision
access
for
those
buckets,
so
provision.
A
Yeah, so, or we create the service account from scratch rather than reuse something existing. I'm not sure; that's what we need to discuss right now.
C
Yeah, I would look at, so actually in CSI we just added an equivalent feature called CSI service account token, where, well, this doesn't happen at provisioning time, but it happens at mount time.
C
But during the mount call we pass in the pod's service account, so potentially for COSI there could be some equivalent kind of thing, at least at the time when we modify or set the access policies.
A
Right, right, that's exactly when we would need it. So is this something that's going to happen at, no, no, NodeStageVolume or, like, NodePublishVolume?
C
G
G
C
E
C
C
Right, for COSI we'll want it earlier, at the time that we set up the access policies, right.
A
Right, so far the model we've been following is that COSI would have a static set of credentials, which would be like having admin privileges across the board, to create buckets and do everything related to buckets, rather than a particular service account or a particular key for that particular operation. So doesn't CSI also have something like that, where when a new volume needs to be provisioned, say from the cloud, the CSI driver's controller would have permissions to talk to the cloud, say, if you're on Amazon, to create EBS drives and also attach them to the nodes that you wish them to be attached to?
A
C
A
E
A
C
C
During, I think, whenever we, sorry, I forgot the latest, but we have an API for access control. Is that right?
F
C
A
C
One thing to clarify: the COSI magic to do the binding, I assumed that would have to be done by the driver, and not, like, a generic COSI component. Okay.
A
Yeah, yeah, I'll explain what Jeffin is saying here. So this is the overall architectural workflow we decided, just what we said now: the service account comes in as an input, or something like this, and COSI...
A
Does the binding of the service account to a cloud entity, using the driver. The service account goes into the pod, and the application is presumed to have the necessary facilities to use this service account based authentication. As far as official cloud SDKs are concerned, all of them now support this style; if you're using a vendor SDK...
A
That is not the case. So if you are moving from on-prem to cloud, let's say you're using MinIO and now you're moving to Google Cloud, I mean, they're incompatible protocols, S3 versus GCS, but let's say Google Cloud suddenly started supporting S3: MinIO's SDK will not know how to talk to Google Cloud's metadata service to leverage the service account style authentication.
A
So that's one limitation to know of. I mean, the whole point of this is portability, and we say we allow portability as long as the protocols are the same, but we are adding another...
A
You know, asterisk to it, a disclaimer saying that we allow portability as long as the protocol is the same and the access mechanisms are understood by the SDK. So there is this reliance on the application SDK's facilities here, and we want to model that in the API. Somehow we want to model this in the API.
A
Where we say: here are the authentication mechanisms this driver provides, or this particular environment provides. And there needs to be some explicit contract to denote that, or we can even, it's up for discussion.
A
The first question is: should it be able to say, hey, I want access key / secret key style authentication, or should it be completely transparent to that workload? So the developer who's creating the workload, should they be able to say, here's the style of authentication I would need for my workload?
A
Anyone, if they have any thoughts, feel free to pitch in. Since it's silent, I'm gonna go to the second question: if it's not the application that says, hey, I want workload identity versus access key / secret key, then maybe this field should be a part of the bucket class or the bucket access class.
A
Right, right, that's what you were saying, right?
E
Yeah, like last week, we don't know the input part, right? So this is a dependency on the user to provide the service account, right? In the case of access key and secret key, the user doesn't need to provide anything; we can automatically mount that information into the pod. But here we are expecting the user to give the service account details, right? Like, it is expected from the user's side. So we are.
A
Yeah, yeah, so one of the arguments last week was, and this is a good argument really: if we want to be portable, and we had this particular requirement on authentication style, we wouldn't really be portable, because you'd have to change the authentication style when going to a different environment.
A
So shouldn't this, and we do the same with PVCs and PVs, where the PVC doesn't say what kind of file system is going to be on the PV. That's the configuration that comes from the storage class.
A
Similarly, the PVC doesn't say whether it's a read-write-once or read-write-many type of volume; again, that comes from the storage class. That's one of the ways a PVC stays portable.
A
C
A
Right
right,
because
you
know
sometimes
you'll
need
to
do
right,
locking
if
you're,
using
nfs
or
something
yeah,
that's
correct,
all
right,
so
that
that
is
confusing
to
me,
because
what,
if
I
were
to
move
between
a
provider
that
loves
rewrite
many
versus
another,
that
doesn't.
C
Yeah, I mean, it is a problem, right? Like, with PVCs and PVs today we do have a bunch of features that are dependent on the volume driver to support, like snapshots, for instance; not all drivers support snapshots. So if someone creates a PVC and then tries to take a snapshot, but the driver doesn't support it, they'll get some error or something, but they don't really know...
C
Until they've read the driver documentation to see what it supports.
A
Interesting, okay. So PV portability is dependent on multiple things, then. It's dependent on, yeah, well, actually just one, which is the read-write-many, or volume mode: it's portable as long as the volume mode is supported by the driver. Right, correct, okay, understood. Okay, interesting! So if we were to put the authentication mechanism... okay, so that argument would also apply for the protocol, then, S3 or GCS, and so far the argument for protocol has been...
A
You know, that it should be in the class object, something that the admin sets based on the driver. I'm trying to remember why we made that decision. Jeff, do you remember?
A
H
Yeah, we did go back and forth, and at one time we thought it was a list of protocols in the class, and the BR, the bucket request, could just name one of them, and then we went away from that idea.
H
The driver that you define in the class implies a protocol, so I don't know, I think it just seemed like an obvious attribute, and maybe that was all the thought it was given. Okay, so in the KEP we had some notes about it.
A
C
A
Yeah, I mean, this is one of the challenges with trying to go off the PVC approach, or trying to retrofit some of the things we did with PVCs onto object storage. The data API is different between different providers; with PVCs, or PVs, it's always POSIX.
A
It's
always
posix
read
write
calls
are
the
same,
regardless
of
who's,
providing
the
file
system
or
block
device
to
you,
and-
and
in
this
case
you
know,
there's
two
things:
data
api
changes
like
s3
versus
gcs
and
also
this
authentication
mechanism
to
talk
to
the
back
end
changes.
A
That being said, so far we've modeled this as if those details are always hidden from the user.
A
The admin is expected to provide the user with the storage class name, and this storage class is assumed to always provide a bucket that the user expects. So if the user expects S3 buckets, then the storage class is supposed to provide some S3-compatible bucket; that might be from any vendor, could be from AWS or any other vendor out there.
H
C
A
So sorry, I mean, are we discussing... hey Jeff, there's a lot of echo. Okay, it's fine now. Anyways, so are we talking about having one driver support multiple protocols?
H
D
C
I guess now it's like: is it a list versus, like, a single field?
H
A
No, I don't want to get distracted. The bigger question is: do we do the same for the authentication model or not? I don't know if we should go back and question the protocol argument yet, unless we have a strong case for choosing not to have it, or for having a list. And the same argument, whatever we decide for authentication method, would apply to protocol as well. I can see these two being, you know, symmetric, I would say.
C
A
C
Sorry, I have to leave early, but I think my main thoughts are that the higher goal we want to achieve is: an application wants to request, like, a specific protocol and authentication mechanism, and the driver needs a way to say whether it supports it or not.
C
D
A
My two cents on this: COSI needs to do some orchestration based on the authentication mechanism you ask for. So if you ask for access keys and secret keys, because you would request that, it would create a Secret object out of it and mount that Secret into the pod when it starts up; but for the service account, it does it differently.
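The access-key path sketched here, a Secret holding the credentials and a pod that mounts it as files, could look roughly like the following. This is an illustrative sketch, not the COSI spec: all names, the image, and the mount path are made-up assumptions.

```python
# Illustrative sketch (not the COSI spec): a Secret with bucket
# credentials, and a Pod that mounts it read-only as files.
import base64

def bucket_secret(name: str, access_key: str, secret_key: str) -> dict:
    def b64(s: str) -> str:
        return base64.b64encode(s.encode()).decode()
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name},
        "data": {"ACCESS_KEY": b64(access_key), "SECRET_KEY": b64(secret_key)},
    }

def pod_with_bucket_secret(pod_name: str, secret_name: str) -> dict:
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": pod_name},
        "spec": {
            "containers": [{
                "name": "app",
                "image": "example/app:latest",  # placeholder image
                "volumeMounts": [{
                    "name": "bucket-creds",
                    "mountPath": "/var/run/secrets/bucket",  # assumed path
                    "readOnly": True,
                }],
            }],
            "volumes": [{
                "name": "bucket-creds",
                "secret": {"secretName": secret_name},
            }],
        },
    }
```

The workload-identity path needs no Secret at all; the pod just runs as the mapped service account and queries the metadata server.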
A
C
A
Oh, okay, to denote the authentication mechanism, you mean.
G
A
Interesting, and you're saying that would just be defined in the bucket request, something that the user requests, right.
A
So, a question of portability here: is that a concern at all with this approach? Not "is portability important", but more like: in this model, if you wanted to move from an environment that supports authentication mechanism A to an environment that doesn't support it, are we now not portable?
A
C
Yeah, but I think you can argue that this is similar to, like, any volume feature we have today, right? Like, you could be using volume snapshots in one environment and move to another environment where snapshots don't work, so I think it's a similar boat. Maybe the important part is, like, if, say, the feature capabilities were equivalent in both environments.
A
Yeah, yeah, I like this actually; it's very simple, and it's similar to how PVCs work with volume modes. All right, so let's revisit what the decision from last week was. Do you remember what we decided?
E
So last week the decision was, like: a driver says it can support a lot of protocols, and when the application is mounted with a bucket, I mean a bucket claim, the driver will give all the information to the pod, or the application. For example, in this case, for identity-based it will be the service account; for the secret and access key it will be saved in a file, like that, and the application can pick any one of them. Really, it's like that.
A
But was there any representation of the authentication mechanism in the class objects, or in the bucket request, or bucket access class, or bucket access request objects?
E
Okay, let me check the notes; I guess I have written something. Okay, yeah: the storage class has the list of authentications, and the driver can pick the authentications from the list. Something like that.
A
Yeah, for the sake of portability, or for the sake of saying that it is portable, we were kind of going down this road where we were saying: oh, we list everything that this application supports, or this driver supports, and the driver is free to choose one. I would think that is even less deterministic than not having the field at all.
E
A
Application, yeah. So the way he's thinking about it, you can see in this comment itself: he says even having the protocol in the bucket class seems overkill. He's thinking about just transparently passing it through, like Michelle was suggesting.
A
So we don't even need to represent it anywhere, either in the class or the user objects; somehow just pass it along. So how would that look? Do you have any thoughts on what that would look like, how the user would say: this is the authentication mechanism that I want?
A
She talked about, okay, so yeah. That is something we need to figure out: what would the bucket access look like when they say "I want access keys or secret keys" versus when they say "I want..."?
A
E
So we still have the bucket access class, right? Protocol is the highest level, at least for me, and authentication is the next level under protocol, right? So as per the current process, the protocol will be staying on the storage class, and can we have something on the bucket access class saying the authentication mechanism will be part of this? Like, this bucket access class supports this authentication, and all the bucket accesses created from that bucket access class will have that authentication, or something like that.
A
I see, yeah, so what you're saying is you want to define on the bucket access class itself that this is the authentication mechanism supported. Is that what you're saying? I mean, on the bucket access class or something like that, yeah. So that is what we discussed last week, right? That we'd have a list of authentication mechanisms written down in the class object, and that's...
E
A
Okay, so yeah, that's what we were talking about just now, and what Michelle seems to think is maybe we don't even need to specify it there.
B
So now we're talking about moving it from the class to the bucket request, but it's already there. So, I mean, yeah.
B
A
What Tim is saying is: if you're going to have it, we should have it in the class. What Michelle is saying is: why even have it in the class, we don't need it. So Michelle's thinking we can pass it as a parameter in the bucket request object.
A
Yeah, so these two are complete opposites of each other, right? Tim wants it in the class, no need to list protocol in the bucket request, so he doesn't want it in the request; he may want a list of protocols in the bucket class. He's okay with putting it in the class, but he sees even that as overkill to start with. So yeah, we need to... There's no wrong answer here.
A
It's just different styles. I would rather go with what Tim is saying right now, because Tim has been more involved; he knows this more deeply. So going according to this, we would have a list of authentication mechanisms in the bucket access class object, if we were to go the route of how Tim is thinking about it.
A
Similar to protocols, we'll have authentication mechanisms, and that would be in the class object, and when someone wants access to a bucket they would refer to the class object, and based on what we spoke about last week, we said yeah... so that part is still confusing to me. So first, before you go there, Shane: does whatever we talked about just now make sense?
B
A
Right, right, exactly. If it's in the class, then portability is easier. My only concern here is that an application also plays a role here, unlike with PVCs and PVs, or with snapshots.
A
The application has a role to play here. If it is an S3 bucket, then the application needs to have an SDK that knows how to talk to S3; for a GCS bucket, the application needs to have that too. So, right.
H
So this is the whole problem: we don't have a POSIX standard for bucket access, and so the SDK is inherently defining the protocol, and the application uses the SDK, and COSI has no idea what SDK an application, a workload, is using. So there's always a chance, no matter what we do, no matter where protocol gets defined or how a driver name is used, that the application is out of sync with the underlying bucket, and we can't...
H
I don't know how we can prevent that.
A
Yeah, I don't know if we can. I mean, we could do one thing where we create three different bucket types; it's not just "bucket", there's no generic bucket anymore. I'm just hypothetically going there, I don't know if we should even go down this route, but one possible solution is to standardize three different types of buckets: S3 buckets, GCS buckets, and Azure buckets. That way there's no single generic bucket; it's just one of these three.
H
A
Let's see, say, Services. Services only behave in one way, right? They just route your request to one of the pods. Let me try and remember: does a Service do TCP-level load balancing? Yeah, it does, because it's iptables. Let me think about a different resource where there could be a possible mismatch.
A
I think we need to look at how PVCs do it, at how the PVC models the volume mode. Because the application needs to know if a volume is read-write-many or read-write-once, the PVC allows you to specify the volume mode. Shouldn't our bucket access request, or bucket access, that's what we're calling it now, model the authentication mechanism, since it's very application-centric?
A
Well, protocol would be in the bucket request, and the authentication mechanism would be in the bucket access object.
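The split under discussion, protocol on the user-facing bucket request, authentication mechanism on the bucket access object, could look roughly like this. Every field name below is an illustrative assumption for discussion, not the final COSI API.

```python
# Hypothetical sketch of the proposed split (not the final COSI API):
# the user asks for a protocol when requesting a bucket, and for an
# authentication mechanism when requesting access to it.

def bucket_request(name: str, protocol: str) -> dict:
    return {
        "apiVersion": "objectstorage.k8s.io/v1alpha1",  # assumed group/version
        "kind": "BucketRequest",
        "metadata": {"name": name},
        "spec": {"protocol": protocol},  # e.g. "s3", "gcs", "azure"
    }

def bucket_access(name: str, request_name: str, auth: str) -> dict:
    return {
        "apiVersion": "objectstorage.k8s.io/v1alpha1",  # assumed group/version
        "kind": "BucketAccess",
        "metadata": {"name": name},
        "spec": {
            "bucketRequestName": request_name,
            # e.g. "accessKey" or "workloadIdentity"
            "authenticationMechanism": auth,
        },
    }
```

Under this shape, the driver (or class object) would still need a way to advertise which mechanisms it supports, so a request for an unsupported one can fail loudly rather than silently.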
H
Well, I'm just saying I agree with your division: the separation of the protocol, if that ends up being defined on the bucket creation side, and the access being on the bucket access side, and the authorization or token mechanism being on the BA/BAR side.
F
A
Yeah, Jeff, I'm hearing an echo... yeah, no, it's gone. So where do we put it, though? The question is: should it be in the class object, or should it be in the user-controlled object? And as I think about it more, I think it should be in the user-controlled object: in the bucket request for protocol, and in the bucket access for the authentication mechanism.
H
A
Yeah, so the class would be provider plus parameters for that provider, except here the application is involved, right? So my reason for not putting it in the class is that there is a role for the application to play here, and that should be modeled somehow, right? Or are we saying it can't be in the class, right?
H
A
Yeah, that's what's making me think maybe we should go down this path of saying that protocol, you know, remains... So what Michelle is saying is very similar to what I just said, but a step further: she's saying don't even model it, don't even have protocol as a separate field, or authentication mechanism as a separate field; just pass it through blindly.
A
Right, that does make some sense to me. Well, now that I think about it, even that might not be such a good idea.
A
Like, because then we'll have vendors creating their own specialized protocols. MinIO won't do this, but let's say MinIO creates its own MinIO protocol, which is basically S3, but we just call it "minio", and then suddenly we're not portable between MinIO and S3. We would never do that.
A
It makes no sense, but I'm just saying we might. I mean, this is what Saad was worried about initially, when we were discussing having an opaque set of fields inside the bucket request, or in the user-controlled resource, and we said that's not a good idea, because we would end up with vendors implementing their own proprietary ones.
H
A
Yeah, I feel like maybe if you write it down, and wrote down your thoughts about it, it'll be clearer.
H
Right, I agree. Xing had mentioned in our last meeting that if we could get something written down, we could post the link to sig-storage-cosi, and then we'd have some ability to digest it better, think, do a little research, and make comments. You know, we hear it and there's a recording, but it's hard to give it a lot of thought, I think.
A
Yeah, all right, so I look forward to your feedback there. I think if we just figure this out, we should be in good shape, and we can convince him to push this along.
A
Yeah, that's it, everyone. Okay.