Description
Kubernetes Storage Special-Interest-Group (SIG) Object Bucket API Review Meeting - 19 November 2020
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
B
So I want to quickly start with the development progress. There were some of you not present on Monday, so I'll quickly recap where we were. During the course of last week and before Monday, we had implemented the revoke-access call and also the delete-bucket call in the sample provisioner.
B
There is a set of functionality around create bucket that led to a pretty important question, which is what I'll discuss today. But before that, I want to quickly sync up with whoever is writing code right now (Srini, Krish, Jeff, etc.) to find out what the progress is. So, Srini, what have you been working on lately?
C
So currently, thanks to Xing, the original spec is merged, so the spec repo is ready and the API repo is ready, as of now. The API is bound to change based on our discussions today, but I have the Prow job that is tested against the central controller, as well as a very basic skeletal e2e test, which gives us the ability to write actual code into it, which runs. Those PRs are going to follow, and the Makefiles are implemented.
C
As for our discussion to have one release-tools directory under spec and then use that for all the repos, so that we maintain one single copy of it: all of that is getting in place. I haven't touched much of the central controller logic yet. Whatever we have there, the basic create calls and delete calls, are in place; other than that, that's about it.
B
Okay, and then we can all focus on the e2e tests a little bit, right? Because we have the skeleton now.
C
Yes. Once this PR is merged, people can write real code into the template provided, yeah.
B
Okay, so on the spec side, we also have another PR, right? Or is everything merged on the spec repo?
B
Okay, so one of the things was, we wanted to make sure that we have one more approver, or a few more approvers, in the spec repo. The reason being, as we are writing code, it helps us to move faster with updating the gRPC spec and iterating on that. Now, obviously, before we make any changes to the spec, we go through the community.
B
First
make
sure
everyone's
in
agreement,
but
once
that
happens
we
want
to
be
able
to.
You
know,
approve
and
move
forward
with
that.
B
So for that purpose, I think Srini has added a few of us (I'm not sure how many) into the spec repo as a PR right now. So, Xing and Saad, whenever you get a chance, please take a look. And Srini, if you can paste the link to that in the chat, that would be useful.
C
I'll do that right now.
A
So, wait: should the specs be changing that much? Right now, the one that has been merged is the one that got approved as part of the KEP, right? Yeah. So...
B
Yeah, that's a good point. It's just that we're pre-alpha right now and things are evolving quickly.
D
I'd say let's hold off on this until it becomes a problem in terms of velocity, and if it becomes a problem, let us know. I think we want to be careful about the changes that go into the spec and make sure we get a number of eyes on it.
D
I realize that developer velocity is important, but for the spec itself we want to be as careful as possible and have slow, iterative changes, rather than rushing to make a bunch of changes and potentially regretting the changes that were made. Would you
B
be okay with that? Absolutely, yeah. It makes a lot of sense, what you're saying. So, during the holidays, if you want to keep iterating, it might prove useful to be able to make changes. Even if it's not about making changes to the spec in terms of adding new fields or removing fields, but maybe just a bug fix or spelling fixes, it would help us if we had that privilege.
E
Where there's an uncontroversial change that you require and it's blocking you, you can make those in branches and then base your other work on that branch. So you can keep going, yeah.
B
But others won't be able to start using it; we're already having people... no?
E
No, they can't? That's the whole point: you create the PR and you push it up, and then other people can use that PR as the base for their work. It doesn't have to be merged for other people to consume it. They might have to play games with their go.mod file to point to a separate directory, but people do that kind of stuff all the time.
B
We'll be making changes to the release tools more often for sure, so yeah, it might make sense to put them in a different repo. Okay, so moving on. This is the change that I primarily wanted to discuss today.
B
So this is how it looks in the spec right now, the protocol field in the various bucket resources. The bucket request has just the protocol name. The bucket class defines a protocol structure under which there's name and version, and then any parameters specific to the protocol are passed in via the parameters field, which is at the top level of the bucket class structure; the parameters field is a map[string]string type. On the bucket side of things, the parameters are copied as-is, and the protocol, in addition to name and version, also has a protocol-specific structure. I've shown an example with s3, with bucket name, region, and endpoint. So we were thinking about this, and one of the challenges that we're going to face,
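To make the shapes being described concrete, here is a minimal sketch of the three resources as discussed on the slide. Field names are reconstructed from the discussion, not copied from the merged spec, so treat them as illustrative only:

```python
# Illustrative sketch of the pre-alpha shapes described above.
# Names are reconstructed from the discussion, not from the merged spec.

# The bucket request carries just the protocol name.
bucket_request = {"protocol": "s3"}

# The bucket class has a protocol struct (name + version) plus a
# top-level opaque parameters map (map[string]string in the Go API).
bucket_class = {
    "protocol": {"name": "s3", "version": "v4"},
    "parameters": {"example-key": "example-value"},
}

# The bucket copies the class parameters as-is, and its protocol adds a
# protocol-specific substructure (here s3: bucket name, region, endpoint).
bucket = {
    "protocol": {
        "name": "s3",
        "version": "v4",
        "s3": {
            "bucketName": "my-bucket",
            "region": "us-east-1",
            "endpoint": "https://s3.example.com",
        },
    },
    "parameters": dict(bucket_class["parameters"]),  # copied verbatim
}
```

The typed `s3` substructure inside the bucket's protocol is the part the discussion below focuses on: it must be kept in sync with whatever the vendor's protocol supports.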
B
if we go for this approach, is that we have to keep the substructure, the s3-specific substructure which is in the bucket, updated. Or, I would say, we'll have to evolve it as we keep adding new features; and a protocol-specific substructure like s3, or Google storage, or Azure, can really only be maintained by the vendor.
B
Yeah, I'm getting to that. So, in order to resolve this maintenance problem, of having to maintain this set of protocol-specific fields and having to evolve it as quickly as the protocol wanted,
B
We
wanted
to
make.
You
know,
turn
that
that
protocol
specific
substructure
into
a
parameters-
field
of
map
string
string
type.
Now
this
allows
us
this-
gives
us
the
flexibility
to
add
any
new.
You
know
config
options
that
are
specific
to
the
protocol,
just
just
at
you
know
with
without
having
to
rely
on
updating
upstream
kubernetes.
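A sketch of the proposed change, under the same caveat as before (key names are illustrative, not from the spec): the typed, vendor-maintained substructure becomes one opaque map, so a new protocol-specific option is just a new key rather than an upstream API change.

```python
# Sketch of the proposal: vendor options live in one opaque map instead
# of a typed, vendor-maintained substructure. Key names are illustrative.
def protocol_block(name: str, version: str, **params: str) -> dict:
    """Build a protocol struct whose vendor options are a single opaque map."""
    return {
        "name": name,
        "version": version,
        # map[string]string in the Go API, so every value is a string
        "parameters": {k: str(v) for k, v in params.items()},
    }

proto = protocol_block(
    "s3", "v4",
    bucketName="my-bucket",
    region="us-east-1",
    endpoint="https://s3.example.com",
    objectLocking="true",  # a new knob: no Kubernetes API change needed
)
```

This is exactly the trade-off debated below: the flexibility comes from the map being opaque, which is also what makes it hard to validate and keep portable.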
B
So
if
s3
had
to
push
in
a
critical
security
fix
and
have
a
protocol
a
parameter
to
enable
that
fix
they
could
they
can
just
start
doing
it
without
having
to
make
changes
to
the
upstream
kubernetes
api.
So.
B
Encryption
configuration
saying
which
kind
of
key
to
use
which
kind
of
algorithm
to
use
I
mean
I
was
giving
that
as
an
example.
It's
just
you
know,
adding
new
features,
let's
say
even
if
it's
it's
just
having
to
maintain
a
vendor-specific
field
in
its
entirety
is
difficult
for
us
and
and
we've
seen
that
with
even
a
little
bit
with
a
csi
where
we're
moving
to
a
more
generic
csi
driver,
rather
than
maintaining
all
this
set
of
different.
B
You
know
vendor
specific
drivers,
so
I'm
I'm
in
my
next
slide
and
I'm
getting
to
the
other
point
which
yeah
just
just
hold
on
until
the
next
slide
and
and
you'll
see
what
I'm
talking
about,
because
as
I
I
am
going
to
abstract
some
fields
out
of
this
into
the
protocol
structure.
But
I
want
to
make
sure
everyone's
able
to
follow.
What's
going
on.
B
Actually, so that's the other thing; it's going step by step. Really, protocol-specific parameters should go under the protocol's parameters field, but there are some parameters that are more provisioner-specific: they are not specifically for the protocol, but they are a set of config fields that are valuable or required by the provisioner.
B
So
so
those
set
of
parameters
can
be
passed
in
through
something
we're
calling
attributes
to
me.
Attributes
seems
to
be
a
much
more
natural
naming
convention
for
something
like
that.
Obviously
all
of
these
names
are,
you
know
for
discussion,
so.
H
Attributes is probably a more meaningful name, but we would probably stick with parameters where it's defined, if we didn't add it under protocol. And on Guy's question: I think it's a good question, Guy, and I think Sid's about to answer it, namely why there are two similar opaque maps.
G
I think I just did. Well, did Guy understand the answer? Yeah.
B
Right, so, not entirely. I want to go to this field. Okay, now I want to get to Ben's question as well. There are some fields that are common across all the providers, regardless of which provider it is.
B
I
really
like
how
ben
put
it
on
monday,
which
is
all
all
that
is
needed
for
for
a
workload
to
a
access,
is
a
http
call
with
an
end
point
and,
and
some
parameter
some
way
to
know
how
to
authenticate
some
header
for
that.
So
so
you
know
to
that
end,
the
most
common
set
of
fields.
B
B
So, Ben, to get to your question: yes, maybe a security fix will not need a change in parameters, but can you see any field needing a change as we go?
E
This is a step in the right direction, but I'm still quibbling about this idea that there will be some sudden realization at Amazon that their protocol is broken and the only way to fix it is to add some new field to the wire-level protocol that every client talks. Because let's say they did that. We're on S3 v4 for their authentication scheme; say it was the case that S3 v4 was fatally flawed, and they found that out, and they fixed it.
E
And
if
they,
if
they
did,
have
an
s3
v5
which
someday
they
almost
certainly
will
like
all
of
the
sdks
also
have
to
update
before,
like
it
becomes
a
usable
thing
right,
just
because
they
release
it
doesn't
mean
that,
like
everyone
gets
it,
you
have
to
recompile
your
binaries
or
download
new
libraries
to
actually
speak
the
new
s3
v5
protocol,
which
means
you're
respitting.
All
of
your
pods
like
it's,
it's
basically
the
next
version
of
the
protocol.
If
you
change
something
at
this
level,.
B
Today we have region, zone, endpoint, and bucket name. Let's say we start with object locking, and we want to designate one bucket to be a locked bucket; that would have to be sent in via parameters. And s3 has a lot of different buttons and knobs like object locking; in order to support those is why we want to have parameters.
D
So let's take a step back here. What would the precedent be that we are setting if we make the protocols field have an opaque parameter list? I think one of the biggest concerns I have is that, effectively, what you're doing is letting each provider decide what the set of parameters are that should be consumed for that protocol.
B
So we don't have to leave it up to the provider; these can be well-defined keys.
B
We get the benefit of being able to move fast through an opaque parameters field, and also define well-defined keys for s3. How would...
B
Right. So if s3 wanted to add a new parameter, it would be just a documentation change (after going through the community, of course), rather than having to go through nine months of the Kubernetes life cycle before being able to be stable.
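The "well-defined keys inside an opaque map" idea can be sketched as follows: the field stays a map[string]string, but a documented key set is validated by the provisioner rather than by the API server's schema, so adding a key is a documentation change. Key names here are illustrative assumptions, not from the spec:

```python
# Sketch: documented keys in an otherwise opaque map[string]string.
# The key names are illustrative, not from the merged spec.
WELL_KNOWN_S3_KEYS = {"region", "endpoint", "bucketName", "objectLocking"}

def split_s3_parameters(params: dict) -> tuple:
    """Separate documented keys from vendor-specific extras.

    The provisioner (not the API server) decides what to do with the
    extras: log them, reject them, or pass them through.
    """
    known = {k: v for k, v in params.items() if k in WELL_KNOWN_S3_KEYS}
    extra = {k: v for k, v in params.items() if k not in WELL_KNOWN_S3_KEYS}
    return known, extra

known, extra = split_s3_parameters(
    {"region": "us-east-1", "objectLocking": "true", "acme.example/tier": "gold"}
)
```

Adding a new well-known key is then a change to `WELL_KNOWN_S3_KEYS` and its documentation, which is the "documentation change" velocity argument made above.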
A
Yeah, so I think you should show how the API would look; I think this slide doesn't show the whole thing. You were actually proposing not to have those three particular fields anymore, like not saying which keys are supported. It's all, like, whatever name you want to be there.
B
Well-defined fields: we've seen that pattern multiple times before. We tried to do the same thing for cloud providers, and we saw what happened there.
E
So what I want to say is, we have this problem today in CSI, with stuff that is outside the scope of the Kubernetes spec. When you request a PVC, I can say, well, I want a 10-gig filesystem volume, and you can get a 10-gig provisioned volume. You don't know if it's going to be ext4, XFS, or NFS.
E
It depends on some minimum bar being met, and then encoding that into the API, so that the API is whatever the minimum standard is; and then saying, you know what, there's going to be stuff that's too specific for us to put into the API, and for that we're going to leave it undefined. And then, if it's undefined, yeah, you've got to do something out of band to deal with it. But it's always like that.
B
No, my question is: you won't be able to scale that way, though, because again you're bringing a manual human step in there. My question is only this: with what Saad is saying, would that parameter still be in the storage class, or are you thinking it's completely out of band? Because the storage class currently supports an opaque set of parameters.
E
And I'm saying that the same is true here. In your previous slide, when the map was called parameters (or, let's say, if we stay on this slide, if I just put it in the attributes), then, as the pod author, I can go read the bucket class and see it has attribute X that I know I care about; therefore I will use it in my pod, and it doesn't require any communication to go through that protocol field.
E
If there is some special feature that I happen to know exists on my s3 implementation, but it isn't generally available on all s3 implementations, then I can consume it, because I know it's in the bucket class and I'll write my pod to just use it. But if the protocol itself evolves (going back to the case where S3 v4 is found to be not good enough and we need an S3 v5), then everyone has to rev something.
E
And
yes,
that's
going
to
be
a
slow
process
to
churn
through
to
add
a
new
protocol
based
on
the
way
we
do
api
reviews,
but
I
don't
see
a
way
out
of
that.
E
Protocol
one
one
could
imagine
it,
I
don't
know
if
you
talked
about
that,
but
like
so
some
sort
of.
B
What's the concern with this approach? Yeah, I feel like this is a good middle ground. To be honest, I also don't like the approach of having well-defined fields rather than just having them typed, but I don't see a way around that. I feel like this is the best middle ground that we can find, given the different concerns.
A
Different, like a variety of s3 types? Yeah. No, not...
F
So I think the use cases that you're mentioning, Sid, are advanced configuration. Even from how I looked at what you said here, I think parameters should have been like the attributes again, and this one would have been something like a config for the client, right? And you're saying here, by having it both in the bucket class and the bucket, that you want to extend it; it's not just that the provider would be able to inject,
F
maybe, environment variables or config files into the pods; you're saying that this is something that is extended to bucket classes of the same provider. So I would say maybe that piece could be handled with the normal parameters, because this is a specific set of parameters for this provider, right? So it can configure it as well, and then only that provider needs to know. Unless there's a unique capability, which is what you are referring to, like: I want a lock interface, or whatever; I want to configure my client, my SDK, my pods to behave differently based on the provider's knowledge about the bucket capabilities. So I think, and I'm not sure if there's a concern here, the question is whether providers can inject things which are not specified by the spec, or whether we have to specify everything they might inject into pods, for this to be...
B
Yeah, that is a concern, actually. So, first question: did everyone follow that?
B
So one of the nice things that he brought up was, say, object locking: it needs to be communicated to the workload that object locking is enabled on that bucket, and...
E
The problem is, if object locking isn't a thing that can be guaranteed to mean the same thing across every s3 implementation, then we can't create a generalized way to do it that is portable. It's either out of band or it's portable; I don't think there's a middle ground between those two.
D
Sorry, I just wanted to throw this in: let's focus on the workload, the application developer who's trying to use this. Why would I, as an application developer, choose the overhead of going through COSI versus going to a protocol directly? And the big added benefit is that promise of portability.
B
I'm on board with what you're saying, so go ahead.
A
On the bucket class: take Google's case, where they support s3 and GCS. At one point in the KEP we said it could be a list of protocols in the bucket class, but we've for a while now said no, it's a single protocol, and in Google's case they would have two bucket classes, one for each of their protocols. I know Ben Yover brought up...
B
Maybe longer, yeah. Okay, so the only concern I want to address, then (I'm fine with what Saad and Ben are saying), the only concern I want to understand: if we go back to this approach and have s3, or every protocol-specific substructure, be put in here, how do we manage the evolution of these structures? Because it's really a vendor-specific thing. How does a new vendor add themselves to this?
D
So I think there are two things here: there's the protocol and there's the provider, and what is likely to have more options is the number of providers, not necessarily the number of protocols.
H
Bob, could I ask you a question related to parameters? Because my understanding for CSI is that in the storage class there are some very well-defined parameter key values that are documented, and that makes parameters feel a little bit weird. It's not totally opaque, because we're expecting certain keys to be there, and there are behavior changes based on the presence or absence of those keys. Yeah.
D
The reason that those well-defined keys exist on the CSI side is because, on the CSI side, we provide an intermediary between Kubernetes and the driver. We provide a sidecar, the external-provisioner, that actually handles calling out to the driver, and so those well-defined parameters are effectively parameters for that sidecar.
D
The driver itself doesn't know anything about how to go and fetch a secret from the Kubernetes API. So it's: external-provisioner sidecar, please go and fetch it; and instead of passing the pointer to the secret, go fetch the secret and pass the secret down in your call. It's a way to modify the behavior of that sidecar.
E
Yeah, but everything else is just opaque and goes straight down. The key there is that, from the Kubernetes perspective, you don't have to use the standard sidecars. You can write your own sidecars, you can do something weird, you can fork the existing sidecars and add new stuff; Kubernetes doesn't care if you do any of those things.
B
So
I
have
I
have
based
on
that.
I
have.
I
have
an
another
proposal,
so
let's
say
we
we
keep
the
s3
or
protocol
specific
structure
within
the
protocol.
Do
you
think
we
still
have
a
use
case
where
opaque
map
is
is
warranted
where
it
could
be
useful.
H
I just think the use case is a generic way for an administrator and a provisioner to partner together, to pass specific attributes for what is desired in that cluster for new buckets.
D
The approach we want to take is "less is more" when you're defining APIs like this. It's always easy to add more; it's always difficult to remove things after you've added them. Similarly, it's always easier to start off with something that is tighter and loosen it over time, in API terms, versus starting with something loose and then trying to tighten it. So my recommendation would be, unless we have a very concrete use case that would not work,
D
Let's
start
with
something
tighter,
let's
start
with
first
class
fields
and
then
you
know
have
have
validation
on
those
and
if
we
discover
that
we're
painting
ourselves
into
a
corner-
and
this
this
is
the
purpose
of
doing
the
prototype
and
the
alpha
and
the
beta
is
to
to
figure
out
what
are
the
pain
points.
We
can
come
back
and
reiterate
on
this.
I
think
I'm
hearing
the
the
biggest
concern
is
around
potential
developer
velocity
in
terms
of
adding
to
the
protocol
in
the
future.
B
Yeah, no, that's true. I'm coming from the perspective that we've already seen this before; we've seen it every single time we've tried to support something that cannot possibly be maintained by the core Kubernetes development team itself.
F
Think
sid,
I
think,
sid.
The
way
I
was
I
was
hearing.
It
was
that
this
thing
is
going
to
be
an
api
for
the
for
the
application
developers
and-
and
the
velocity
in
you
know,
providing
reach
apis
for
them
is
also
confusing.
And
maybe
you
don't
want
to
get
like
too
quick
to
add
more
capabilities
to
the
application
developer
directly
and
but
but
you
know,
using
the
parameters
of
and
bucket
classes
to,
to
set
up
different
policies
per
bucket,
that
that
will
still
be
able
to
work
right.
F
You can bail out from this problem by setting a policy that says this should work this way. But what we're trying to say is that some features (say it's going to be locking, or whatever it has become) are so important in the protocol that they should be...
E
Obviously, s3 is the 800-pound gorilla that we're going to have to address. There's going to be a handful of others that are going to be really important to people who are involved here, but there are going to be some that aren't included that maybe someone will want to include later. And I do wonder how easy it is to add another protocol if we don't do it before beta, for example, or before GA.
D
We'd effectively be changing the interface by which workloads interact with Kubernetes and these resources, so we want to be very careful about that.
B
I
mean
yeah,
so
it
this
is
definitely
going
to
hold
back
any
vendor
from
from
being
able
to
run
on
kubernetes,
and
I
think
that
will
be
a
huge
issue
coming
from.
B
So we don't get to define what... no, we shouldn't. We don't get to define which protocol is supported by the vendor. We can never say: this is the majority, and this is what you have to work with.
D
As it is with, you know, POSIX on the file side; but that is not the reality. So what we have to do is, okay, let's try to limit it as much as possible, so that the workloads have something they can depend on, and then underneath the covers you can plug in as many different vendors as possible.
E
One
more
go
ahead:
it
might
be
that
might
be
helpful.
So
again
on
the
on
the
csi
storage
side,
with
volumes,
we
have
something
called
volume
mode
and
it
used
to
be
just
file
system
and
there
was
an
effort
a
year
or
two
back
to
add
raw
blocks.
E
It's
like
an
entirely
different
mode
and
when
we
did
that,
like
all
kinds
of
things
had
to
change
in
the
kubernetes
api
to
accommodate
raw
block
volumes,
because
now,
instead
of
having
a
mount
point,
you
have
a
device
path
and,
like
your
pod,
definitions
had
to
change
to
cope
with
that
and
so
like.
There
are
two
two
modes
the
volumes
can
have
and
as
far
as
we
know
like,
there
will
never
be
a
third,
but,
but
maybe
so
so.
Okay,
so.
B
So
that's
where
I'm
coming
from!
No,
we
can
see
in
that
case,
kubernetes
is
doing
something,
so
so
the
csi
system
is
responsible
for
doing
something
with
that
mode.
In
our
case,
we're
not
doing
anything
with
with
say,
object.
Login
configuration
we're
just
passing
along
to
the
to
the
a
vendor
or
if
a
new
protocol
comes
in
we're
going
to
do
the
same
thing.
Let's
say
it's
a
whole
new
protocol.
Instead
of.
E
The
difference
crucial
difference
is
that,
like
I
can,
I
can
write
a
workload
that
says
based
on
the
way
I
design
my
workload.
I
need
a
raw
block
volume
and
so
I'm
going
to
write
my
pvcs
to
have
volume
mode
blocks.
My
pod
will
have
a
device
path
I'll
and
then
I
know
that
any
any
implementation
of
kubernetes
I
go
to
in
the
world
that
has
that
that
has
a
storage
class
with
roblox
volumes.
I
will
get
what
I
need
and
my
workload
will
run
and
it
will
be
fine,
so.
B
I think we're going too far with the portability, because we want to make the bucket request portable. Even earlier, we agreed that we cannot possibly make the bucket class portable. The bucket class is something that the admin works with, and the admin is expected to make some changes as they move across providers or infrastructures.
D
Agreed
with
that,
but
but
if
you
think
about
it,
it's
really
comes
down
to
the
bucket
request
is:
is
the
the
is
basically
the
workload
asking
for
object,
storage,
right
and.
D
Part
exactly,
and
so
when
you
are,
writing
a
workload.
What
is
the
interface
that
you're,
interacting
with
you,
have
the
request?
The
request
is
your
bucket
request,
and
that
is
portable
and
abstracted
away
nicely,
and
you
have
a
protocol
in
there
great,
but
there's
also
a
what
do
you
get
back
right?
What
do
you
get
inside
your
workload
surfaced
inside
your
workload,
and
that
is
also
an
api
yeah
and
that
needs
to
be
consistent
and
dependable
as
well.
B
I'm talking about adding a generic protocol so that others can start working with it.
F
I'm
just
saying:
maybe
it's
not
buckets,
I
mean
maybe
like
you
know,
you
could
redefine
buckets,
that's
for
sure,
but
but
what
you're
saying
is
that
I
cannot
take
cozy
right
and
maybe
play
with
it
to
the
fact
that
I
have
on
my
own
setup,
a
new.
You
know
mock
protocol
of
my
own
and
my
own
provider
mock
provider,
and
I
have
everything
hooked
up,
because
I
need
the
cozy
code
to
be
having
this
new
type
right.
D
Good. Sorry, one more thing I was going to add: it is really unfortunate that there are all these different protocols, and one of the goals we have with this group is that, after this COSI API gets established, potentially we could introduce a new standardized protocol.
D
Maybe
it's
derivative
s3,
something
where
we
say.
Okay,
you
know
we
have
a
open
source,
common
standardized
object,
data
interface
and
that
kind
of
added
here
I.
B
Think
I
don't
know
if
that's
a
good
idea,
but
you
know
s3
tried
that
and
then
openstack
to
strike
that
and.
D
Yeah
I
I
I
completely
agree
that
this
is
a
you
know,
pie
in
the
sky,
dreamy
thing,
but
you
know
there
is
the
possibility
for
that
yeah.
I
think
we
do.
B
Many
many
times
yeah.
The
one
thing
I
wanted
to
say
was
sad.
I
do
agree
that
it
makes
you
know
it's
okay,
not
to
optimize
for
that
generic
new
protocol
structure
up
front,
but
but
I
do
believe
that
it
it'll,
you
know,
give
us
a
lot
of
flexibility
or
when
there's
a
lot
of
flexibility
to
start
start
playing
with
cozy
and-
and
you
know
also
start
contributing
back
to
us.
So.
D
I
was
gonna
say
I
completely
understand,
maybe
instead
of
kind
of
trying
to
design
and
see
this
is,
this
is
where
I
get
concerned
is
when
we,
whenever
we
put
a
hole
inside
of
an
api,
we
better
have
a
very
good
reason
for
doing
so,
because
that
becomes
a
leaky
abstraction
and-
and
this
is
exactly
that
and
that's
why
I'm
pushing
back
hard.
But
I
completely
hear
this
kind
of
argument
for
how
do
we?
D
How
do
we
not
become
a
bottleneck
in
terms
of
developer
velocity
from
the
perspective
of
integrating
all
these
different
vendors,
and
I
think
what
we
ended
up
doing
on
the
csi
side
was
saying:
hey
we're
not
going
to
declare
this
thing
ga
until
we
have
x
number
of
different
providers
or
implementers
that
have
successfully
implemented
this
thing
until
then,
we're
going
to
leave
this
api
open
and
flexible
and
continue
to
iterate
on
it
and
potentially
make
breaking
changes
to
accommodate
until
we
get
to
that
point,
and
that
was
very
helpful.
D
So
maybe,
if
you
set
such
a
goal
here
to
say
hey
for
a
given
protocol,
you
must
have
you
know
three,
four
different
vendors
that
have
successfully
implemented
that
protocol.
Then
we
have.
Basically
we
have
evidence
that
that
whatever
we've
defined
it
is
in
a
position
where
it
will
actually
be
useful
in
real
life.
D
Yeah, very much so. I think a lot of vendors wanted, for example, workload information surfaced into the driver on specific calls, and we pushed back hard on that. Ultimately, on the question of who gets to decide that: really, it's you and this community right here. This is why it's very important for all of you to wear the hat of the application developer, and not of any given vendor. The purpose of adding this
D
This
new
functionality
is
to
benefit
application
developers
and
as
long
as
all
of
you
are
kind
of
fighting
for
that
we'll
end
up
in
a
good
place.
We
have
to
realize
that
individual
vendors
will
always
want
something
kind
of
unique
to
themselves,
and
what
we
want
to
do
is
we
want
to
enable
as
much
of
that
as
possible,
but
in
a
portable
manner,
and
if
there
is
cross-cutting
functionality
that
we
see
available,
that
that
should
be
surfaced
across
all
of
these
vendors.
B
Yeah, yeah.
H
I just wanted to say, Saad, I appreciated that last comment of yours, and the focus on keeping the developer in mind as the primary lens for decision-making. I think we stray from that from time to time, and that was a good reminder.
D
Yeah, I agree with Ben. I think this is the whole point of having protocols in the first place: because we are unable to do that. It's a compromise; it's not ideal, but we're saying, well, we have to do this, so let's keep them underneath protocol-specific structs.
B
I
I
have
one
question:
how
do
we
reflect
when
a
particular
implementation
does
not
support
a
particular
feature?
It
has.
E
...to, or it doesn't work, right? We want to set the bar low enough that that's not a hard thing to do, or there's a way to communicate it. If it's something like supports-locking-buckets, and it ends up being a standardized thing, then it is a boolean that says you support it or you don't, and you just set it to false if you don't.
D
Yeah, on the CSI side, what we did was have a bare minimum that you have to support: you have to be able to mount some sort of something, and that's the bare-minimum requirement for CSI. Beyond that, we have capabilities that a driver can advertise, to say "I support X, Y, and Z," and then Kubernetes is able to read, detect, and operate on those, to say: okay, I know that this special functionality is or is not available from this given provider, and it can handle that accordingly.
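The capability pattern just described can be sketched as follows. This is modeled loosely on CSI's capability advertisement; the capability names and the `Driver` shape are illustrative assumptions, not a real COSI API:

```python
# Sketch of capability advertisement: a driver declares optional
# features beyond a required bare minimum, and callers gate behavior
# on what is advertised instead of assuming it. Names are illustrative.
class Driver:
    def __init__(self, name, capabilities):
        self.name = name
        # In CSI these would come from a GetCapabilities-style RPC.
        self.capabilities = set(capabilities)

    def supports(self, capability):
        return capability in self.capabilities

def create_bucket(driver, want_locking):
    # Every driver must support the bare minimum (creating a bucket);
    # optional features like object locking are checked before use.
    if want_locking and not driver.supports("OBJECT_LOCKING"):
        raise RuntimeError(driver.name + " does not support object locking")
    return {"driver": driver.name, "locking": want_locking}

full = Driver("locking.example.com", ["CREATE_BUCKET", "OBJECT_LOCKING"])
basic = Driver("basic.example.com", ["CREATE_BUCKET"])
```

The point of the pattern is that a driver without the optional capability still works for the baseline case, while requests that need the feature fail loudly instead of silently misbehaving.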
F
When, you know, Wasabi comes and says, we have a new protocol; or any kind of vendor says: my buckets-as-a-service is not just s3-compatible, it's a new protocol with new capabilities, and I have my SDKs and all...
E
That
that's
why
I
was
talking
about
what
is
the
process
for
adding
a
whole
new
protocol?
Is
there?
Is
there
a
shortcut
we
can
offer
and
it
sounds
like
we're
sort
of
saying:
no,
there
isn't
and
that's
that's
good.
You
know
if,
if
you
want
to
add
a
brand
new
protocol,
you
go
through
the
same.
You
know,
process
of
updating
the
kubernetes
api
and
going
through
alpha
beta
ga,
and
you
know.
D
Yeah, the idea of the protocols is that we're taking a bunch of different providers and forcing them to go through the same tunnel; we're forcing them to say: hey, you have to abide by this API. This is the common standard API; therefore it has to be the lowest common denominator, effectively. If you have individual vendors that have functionality outside of that protocol that they need to surface...
D
Yes, absolutely. So if we decide that there is a feature that is available from three out of four of the providers for a given protocol, then yes, let's pull it into a common layer and put it behind a capability, or some sort
D
of way of being able to opt into that functionality. A provider doesn't have to provide that functionality in order for us to be backwards-compatible; somebody who doesn't add the newest, greatest features must be able to continue to work as-is. That needs to exist. But...
D
"Let's go update the protocol to add that feature"? No. Instead, what we do is say: okay, can that vendor expose that feature through an opaque parameter on the storage class? If so, great, problem solved.
D
If
two
or
three
vendors
start
you
know
supporting
that
functionality,
then
you
know,
maybe
it
makes
sense
to
actually
think
think
about
evolving
the
the
protocol
to
support
it
as
well.
B
So, we're out of time. I want to bring up this mechanism for people to add a whole new protocol at our next meeting; but this was a good discussion, so it actually helps us move forward. I want to have Rob on the call next week, to try and understand whether the current structure will work for implementing the sidecar. There was some key problem that we were facing that I can't quite recall.
B
Yeah, we should skip the meeting next week. We can continue the Monday after. Xing, we'll record the call and share it with you.
D
So, do you want to keep the Monday meeting this coming week, or not?
D
Okay, so I'll go ahead and remove both the Monday and Thursday calendar invites for next week, and we'll reconvene the week after.
B
Yeah, I think that's important. Okay, all right. Thank you, everyone; this was a good talk. See you all in a week and a half.