Description
Kubernetes Storage Special Interest Group (SIG) Object Bucket API Review Meeting - 17 December 2020
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A
This is a roadmap that we had created before the KEP was merged. This was in, I believe, September, and this is how we estimated things; this is how we expected things to go. We had two goals: one was to get the KEP merged, and the second was to get API approval, or become implementable.
A
We've achieved quite a bit this year: we've got the KEP reviewed and merged, we've started pushing code to the official repos, and we are actually very close to doing that demo that we've been talking about for the past two months or so. I'll go into what's left for that demo.
A
Next, the next big goal other than the demo is to get the API review done. Once that API review successfully goes through, we can start planning for alpha, and I still believe we're on track to hit alpha by version 1.21.
A
So, Saad, while you were gone the last few weeks, we switched to using GitHub as our project management tool. Before, we were just using a template on one of the slides that I was making, and we've defined our first milestone as Demo One.
A
Let me also remove this one for a second. We follow the standard practice of To Do, In Progress, In Review, and Done, thanks to Nicholas, who I hope is on the call, who raised this issue.
A
He mentioned that it was difficult for people who are looking to contribute to the project to get started, and it was something important that he raised, so we quickly talked about it and decided we'll do this. In terms of what's left for Demo One, we have just a few key issues that need to be addressed, and a little bit of documentation changes that are needed.
A
So I'm going to click on the Demo One milestone. We have to process BucketAccessRequest delete for the demo. We did not say we would do delete of a BucketAccessRequest or a BucketAccess for the demo; however, this is low-hanging fruit, and it's also a place where new people can get started if they would like. Similar to that: updates for BucketRequests.
A
Maybe Srini can go ahead and tell me a little bit more about what kind of updates we are expecting here.
B
The update has to handle that: we have to protect the BucketAccess when a BucketAccessRequest is deleted, right. So the code is in progress; I will start on it.
A
Okay. I believe Summers wanted to pick this one up; they assigned themselves to it. So can you coordinate with Summers and make sure you're on the same page? Yeah? Srini, can you just reach out to Summers on Slack or something, so that's squared away?
A
Okay, yeah, because your handle here is different from Slack; that's why. Okay, so another thing is BucketRequest delete, which is similar to BucketAccessRequest delete. Again, it's not needed specifically for Demo One, but it's low-hanging and can be quickly implemented. So, Srini, another question: doesn't the BucketRequest delete actually follow the same pattern?
B
You're right. I mean, BucketAccessRequest and BucketRequest deletes are both performed through finalizers. I realized that, yeah.
B
I'm sorry. When we started working on the finalizers, I just wanted to update the community. We didn't have the whole picture; well, we had to discuss the whole picture. So I created these issues so that we'd start working on the delete aspect of it, and then I realized: if we have the finalizers on the BAR and BR, we just need to process them, and we don't really need separate delete handling. And then we were also planning to have finalizers on other objects, right. So that's the whole reason.
A
Yeah, we were implementing this, and I believe there was an issue that was raised. I think we've addressed this before; it just had to be discussed again. The issue was: how do we process a BucketRequest delete if the corresponding Bucket is still in use, currently being used by a pod? And we already discussed this before: we have a finalizer for the BucketRequest on the Bucket itself.
A
Okay, all right, so that's the plan. Srini, did I get that right? Yeah, absolutely. Okay. And we also discussed finalizers with Rob, with regard to how an admin revokes credentials while a pod is running. Correct, correct. Is Rob here?
B
A
He has a conflict at this time. Okay. So, Rob is an admin at Red Hat, and he has a lot of experience using object storage, provisioning it for the developers and customers of Red Hat, and he brought up a good issue: what if there were a scenario where the admin had to revoke access for a particular bucket without pulling the pod down? For instance, you can have a pod using multiple buckets, and you might want to remove access for just one of them.
A
So we did come up with a working solution for this, but I wish Rob were here to also explain the admin side of things better. First, the simple scenario of revoking access is: a BucketAccessRequest is deleted. Let me go back to the types; that'll be easier.
A
If a BucketAccessRequest is deleted, then we trigger the deletion of the BucketAccess. If a BucketAccess is currently in use by a pod, then it doesn't go away, because there's a finalizer. Once the pods go away, the node agent takes the finalizer out, and then it can be deleted.
A
Okay, so yeah: a BucketAccess gets cleaned up in the simple use case, where a BucketAccessRequest is deleted, and the corresponding BucketAccess gets deleted once the pod has stopped using it. Now, what if you want to revoke access while a BucketAccess is in use, that is, a pod is using it at that point?
A
For that, what we decided long back as well was: if the user deletes a BAR, a BucketAccessRequest, then we wait for the pods to die before the BucketAccess is removed. But if an admin deletes the BucketAccess directly, that is, if the deletion timestamp is set on the BucketAccess but the corresponding BucketAccessRequest has not been deleted, there's no deletion timestamp on it, then we know it's a force removal, and at that point we go ahead and revoke the credentials alone.
A
We don't kill the pod or anything; we just revoke the credentials for that BucketAccess.
E
Sid, I have a question on that. Maybe this is too low-level in the code, but the only trigger would need to be the deletion of the BA, because that results either from a force delete by the admin or, as a natural consequence of the user deleting the BAR, it gets deleted. So do you need special code to handle them as two different cases?
A
Yeah. We want to make the distinction between a forced removal of access and a natural removal of access. I would say: either we wait for the pods to end before deleting the credentials, or we delete the credentials even if a pod is using them right now.
D
So I think one drawback to this approach is the idea that you can't really enforce order in Kubernetes, especially in the creation and deletion of objects. Well, finalizers enforce it; right, without finalizers, I guess, is my point. In this case, imagine that somebody manually created a BucketAccess and BucketAccessRequest in a single YAML file, and then they went and deleted that YAML file: the order of deletion would not necessarily be guaranteed.
G
To tell the system: I don't care what the user is doing, I want this gone. Yeah. But under normal circumstances, if the user deleted a BucketAccessRequest while the pod was still there, the BucketAccessRequest would hang around until the pod cleaned up, and then... Correct. And I like both of those behaviors; that makes sense to me.
A
That's a good point: the only person who could do that is the admin, correct? Because...
G
Anyone with the appropriate RBAC... oh sorry, go ahead, Ben. Anyone with the appropriate RBAC access to delete that object, right, which would typically be an admin or some controller that the admin had granted access to. And I can imagine various controllers coming in and helping people do these things.
A
Yeah. How big of a concern is that? Should we even address it?
A
If that were universally true, that would be good, but I think in this case we do have enough fail-safes, if it's used the way we intend it to be used, which is a loaded statement, I would say. But that holds if the system is used such that RBAC roles for all bucket resources are not given to just about anyone.
A
Data is not lost. You can create a new BucketAccessRequest, a new BucketAccess gets created, and the pod can start using the new BA, but the pod would have to restart for that.
D
I think that seems reasonable, as long as the admin who made the mistake can undo it; they don't get stuck and don't have to go through, you know, a big, complicated set of steps to recover.
A
Yeah, so this is a concern that we brought up, actually. We need to make a distinction between a BucketAccessRequest that has already provisioned a BucketAccess versus a BucketAccessRequest that hasn't provisioned a BucketAccess.
A
It is, sort of. I mean, I think using the word "dangling" is a pretty strong expression of what's going on here, but if the user has access, the user can delete the BucketAccessRequest: it is namespaced, and it's available for the user to manipulate.
A
So here's the other issue: the CSI driver does not get an event while the pod is running.
G
I'm saying, in Kubernetes, whenever anything goes wrong, your first recourse is just to delete the pod, right? Because then, if you have a ReplicaSet, it'll just make a new one, it'll get rescheduled, and all your problems will be solved. Not all of them, but a large class of problems is solved by just deleting your pod and letting the ReplicaSet do its thing, and this is a case where that remedy applies. Who uses ReplicaSets anymore?
A
ReplicaGroup, maybe? I forget; there was something else. Anyway, so what's the conclusion here, how do we want to move forward? I think it sounds like the design that I just described before we got into the discussion about how we deal with revoked credentials: if a credential is revoked, then we expect the user or admin to go ahead and restart the pod.
A
Delete the pod and, you know, make it get recreated somewhere else. When it gets recreated, it will obviously pull the new BucketAccess, if there is a BucketAccess that gets bound to the old BucketAccessRequest. Yeah, that sounds reasonable. Okay, okay; is someone taking notes?
A
So that would address BucketAccessRequest deletes.
A
Okay, so now getting into Kustomize. We were adding a Kustomize template, a Kustomize spec, that would allow us to deploy all of the object storage components in one shot. To do that, you know, we got started with Kustomize, and I just want to track the progress of what's going on here. I believe Tejas is on the call; Tejas is working on a part of it. Yeah, this is... I did this.
A
Okay. So, Nicholas, I know you said you might be able to help out, and you have expertise in this. It'll be good if you can add any issues here, or, since you've already added issues, if you can go ahead; it's here.
A
Yeah, we encourage everyone here to, you know, add issues but also contribute code. It's best when everyone contributes code; that's the best way everyone can participate, and it also helps us move forward quicker.
A
There is one issue that's been waiting for a while that I want to quickly bring up. Is there anything you can do to help us push this forward? Because we're relying on this to add CI for all of the components.
B
I can answer this. I mean, the problem with this PR is that Aaron, spiffxp, is the only person who can approve it. Oh.
A
Yeah... oh.
B
Okay, so, yeah: I will raise PRs for the other two repos, and then you should be good. The rest is all under our control. So...
A
So yeah, this adds CI to all the different components, and, you know, it'll really help. We're trying not to push too many changes in for the other components. We have CI already for the controller; we're trying not to push too many changes in, and we've held back a few, so it'll be good.
A
It'll be good if we can get this through, because some people have actually, on their own, signed up to work during the holidays. I mean, not full-time or anything, just whenever they get a chance; Rob is one of them. So I want to make sure people like that are unblocked in terms of this kind of thing, infrastructure kinds of things.
C
Yeah, sure. All the code has been moved over from the old repository now, I believe, so now it's just a matter of adding some of the other things that we've discussed, like the finalizer. Yeah, I think the finalizer and the support for folders, which shouldn't be too bad. Okay.
A
So the folder... yeah, support for folders shouldn't be bad; I reviewed your PR yesterday. I asked for you to give an update because, you know, I want to go over the finalizer logic on the CSI provisioner as well.
A
Maybe you could also explain the design that we discussed yesterday, or Monday.
C
Yeah, my understanding for the CSI adapter, like the CSI driver side of this, was that we would essentially add the finalizer onto the Bucket.
C
Oh okay, let me update my notes, then. Yeah, we would essentially use the volume ID to add a finalizer onto the BucketAccess on mount, and then remove that finalizer when the pod dies. When the pod dies, or when we unmount, right.
A
So, to summarize what Chris is saying: this goes back to the old BucketAccessRequest delete and BucketRequest delete. We talked about how, in some cases, we wait for pods to die before we delete it; he's talking about how we know a pod has died. For every pod that uses the bucket, for every instance of the pod, we add a finalizer onto the BucketAccess, and the value of the finalizer...
A
The string is going to be the pod ID, or, oh, the volume ID; that is, the unique PVC ID that that instance of the pod uses. And when the pod dies, NodeUnstageVolume is called by the CSI driver, and then we go ahead and remove that pod's finalizer from the BucketAccess. That's how we know whether a pod is using a BucketAccess or not.
A
All right. So, Chris, I noticed in the PR that NodeUnstageVolume was unimplemented at this point, so I wasn't entirely sure: did NodeUnpublishVolume already have this logic, or did I miss it? Maybe not.
A
Okay, okay, understood. Yeah, so those are the parts. I mean, we don't even need this part, technically, for Demo One, but since we are stuck on things like CI and we need to add some documentation, the developers have some spare cycles, so they're moving forward ahead of Demo One.
A
The final focus that I want to bring up, in terms of tasks in the project, is documentation. First of all, I want to thank Summers; they've added a whole bunch of pull requests regarding documentation. There was an issue, I think, that was brought up with one of the tables, and I believe they've also addressed that issue. I need to follow up on that; I haven't yet done that. Okay, that's the one.
A
I think this is a very important thing that we should be working on, that we should spend more time on. It goes back to adding more developers, or including more people in the project.
A
A lot of questions can be easily answered with documentation, and it will also help ensure consistency in our design thinking, where we don't have to revisit how, for instance, finalizers work with respect to this. So if there's anyone available to help with the documentation effort: Summers is obviously helping quite a bit, and others are also encouraged to help get this out of the way. Basically just early, simple documentation that talks about key parts of the design, beyond the README.md.
A
We want to document how bucket creation works and how access creation works, plus minor details about the gRPC spec. And I say "minor details" right now because the gRPC spec will evolve; there will be changes requested, possibly from the API review or as we write more code, so we don't have to go heavy on that front. Similarly, some documentation about the API is needed.
A
I want to make sure that next year, when more developers get interested and want to find out what this is all about, or, you know, non-developers look at this and want to find out what's the state of object storage in Kubernetes, they should be...
K
Able to easily tell what it is. I was thinking a place to put the answers to the questions of how everything works could be the spec.md.
A
The spec... I'm okay with that. I want some input from the CSI people here. So, Saad is maintaining this Container Storage Interface spec; this one just has the specification for the gRPC API. However, it might as well be extended to do both. Do you have any thoughts on this? Can you tell us why only the documentation for the gRPC spec is here, and the rest of the documentation, other than the gRPC spec, is in GitHub?
D
Yeah, so the spec itself is self-documenting about what each individual operation does, and this page that you're looking at is aimed at Kubernetes developers of CSI drivers, right. If you look at the specification alone, it tells you about all the different CSI-specific calls, but it doesn't really tell you how you take a driver, package it, and deploy it on Kubernetes. So this page is like the end-to-end CSI developer documentation, and the way that we looked at it was...
D
You know, we want to have a standalone place where a CSI developer can come and find out everything that they need to know, which is this website. Then we have end-user-facing documentation on the kubernetes.io page, which basically just says: use this CSI volume type if you want to use it, and for details on how to deploy a CSI driver, go to your storage vendor. And then the storage vendors come here to figure out how to create their drivers. Yeah.
D
The individual pages, like the spec.md and the various sidecars, are all basically self-documenting in their own right, each documenting that...
A
Feature, yeah. Right, right, okay, that sounds good. I think we can follow almost a similar model here, maybe, because we've seen how CSI has done it; we can even do some improvements on top of it. I think our main priority right now is, one, onboarding developers, so some sort of documentation. I mean, even an architecture diagram would help, something like that.
A
Now, it's not something they're going to start doing right away, but if we get started on the effort now, or even just talk about getting started on the effort now, we won't be late when, you know, we have all the things in place for vendors to write their own drivers but we don't have the documentation.
A
Just a little bit of forethought and planning can avoid that. I think we can get started on a very basic website right now. It can just be an empty website; it's a GitHub website, so it's really just a repository with markdown files, that goes over an introduction, with a diagram that says what it is, and maybe something like the gRPC spec, and in the future we'll add a line item for writing your own drivers.
A
We do have kubernetes-cosi.github.io; we have that org, we have... yeah. That should be a good place to start, but I do want to talk about naming. We have 15 minutes, we should get into that, and I'm glad Saad is here today for that.
A
Because that's going to define where we put the docs. So, Saad, I've been thinking this over quite a bit, and, you know, the name CSI was a really powerful name when it came out, because before that there was Flex, and Flex didn't take off, for multiple reasons: there are technical reasons and other reasons. But a name like Flex does affect how volumes are perceived, how storage is perceived, how whatever effort we're trying to do is perceived.
A
It's especially difficult when I'm trying to describe it. You know, it's a small company, MinIO, so my manager is the CEO there, and I try to explain it to him.
A
You know, I'm working on COSI today, and he gets confused, and then I have to go back and explain to him: Container Object Storage Interface, and so on and so forth. Same thing when I talk to customers: when I use the word "cosy", it brings about a strong reaction, no doubt, but I wouldn't say it's necessarily something that makes us look the best.
A
Object storage is going to be just as big as CSI, simply because of the scale issue that file and block today cannot solve; maybe they'll solve it in the future. And Kubernetes is at the forefront of any kind of standard, any kind of common interface that we come up with for different vendors; what Kubernetes defines ends up being the new standard. I mean, OCI is a very good example of that: Docker dominated that space, and, you know, the CTO of CoreOS came up with this.
A
I was at the meeting where he brought up this proposal, and OCI took over; everyone has their own OCI runtime now, and when I say everyone, I mean all the major vendors, and now OCI is all that's supported. Kubernetes doesn't pick any sides, and all the vendors moved to follow OCI.
A
I believe the work we do is very important for the future, and we should have an acronym that sounds simple, like CSI or OCI, that doesn't mention Kubernetes. Right, it doesn't have to mention Kubernetes, because, Ben, I think you brought it up, and I think it's a very good point: it is completely possible that people use the gRPC spec without anything to do with Kubernetes, and integrate with the gRPC spec to provision object storage across different vendors.
A
Especially because it doesn't add too much complexity on our part. So my proposal is: we should call this Object Storage Interface. Saad, what are your thoughts on that?
A
It really does, but it doesn't collide with anything in the Kubernetes world, and also, if you're going to get a good acronym like OSI, it's definitely going to collide with something. As long as it's not something current, not something that's already there in the same space, it should be fine.
D
So that wouldn't be great, just because of kind of the cognitive...
A
No qualifiers, yeah. No, it makes us look almost arrogant, so that's not right. I see what you mean, but we can fix that with documentation, don't you think?
G
I just don't share your objections to having the C. I mean, if it's the pronunciation, you could call it "cosi" or something; you could just change it. In fact, when I was a kid there was a museum in my hometown of Columbus, Ohio called C-O-S-I, and everyone called it "cosi", so the first time I saw C-O-S-I here I thought, oh, it's "cosi", and then people started saying "cozy" and I was like, why are you saying it that way?
G
I was used to the other pronunciation, but I've gotten used to this one; I don't know. I think pronunciation is a solvable problem, especially as it becomes better known by whatever name we do choose; people will just take it on board. I mean, CSI was weird to people when it was new, and then once everyone had read 18 blog articles about CSI: oh okay, I know what that is.
D
I agree with Ben here. Branding is difficult, but you build a brand over time, right? If you think about, like, Kubernetes: people probably thought it was a really bad, weird, difficult, terrible name in the beginning.
A
People were like, what, how do you say it again? It made you look twice, but it wasn't another word; it didn't invoke the connotation that another word invokes. "Cozy" sounds like we're not even serious. It's easy, it's cozy.
D
The big difference with CSI was that we wanted it to be compatible beyond Kubernetes, with other orchestration systems as well. Kubernetes wasn't the de facto orchestration system back then, and Mesos wanted to implement it, and others as well. That's why there is no mention of Kubernetes at all in the CSI spec, and it stands alone as a separate repo. For COSI, I don't believe there's any intention to adopt it beyond Kubernetes, so yeah, I don't think that would be problematic, in my head.
D
Right, just for the sake of abstraction. Yeah, that makes sense.
J
Sorry, someone was speaking; go ahead. Within the world of open source: I didn't discuss it with the others in the team yet, but I'm pretty sure the people within the SODA Foundation could be rather interested in using this interface for their applications as well. Nice.
A
We're looking at it as a bigger problem than it is, thinking about naming for other implementations.
A
Again, we are building this for Kubernetes; that's what we're really doing. We're saying it's open for others to use, and we'll even support them to do it, but the name is not going to be something that's holding them back. They'll just call this the spec for Kubernetes OSI.
A
I want to start changing the language around what we do, and I like Kubernetes OSI. For the website, we have the kubernetes-osi GitHub org; that's the first thing I did, registering that on GitHub.
A
And let's say we move forward with that; sorry, how does that change the pronunciation?
H
I like either of those, and part of the reason for my preference is I like drawing our lineage back to CSI, because there's a lot of, you know, inspiration that we're taking from CSI in the design and the architecture, and I think some of the success sheen from CSI will rub off onto COSI, whichever pronunciation. I think that's a very big benefit, as opposed to, like, Service Catalog, which really didn't have that going for it. You know, Service...
G
So is it possible that if we use the name Kubernetes, we'll run into trademark issues with the Linux Foundation, because they own that trademark? No, we...
A
Here, so that is true: we come from the CSI lineage.
H
You know, if you're running in a private data center, you have your block services coming from enterprise storage arrays or whatever, and you have your object services coming from your private...
A
Developers, yes. Like Saad was mentioning, branding is difficult, or it's tricky. I'm also taking into consideration how an enterprise looking to buy storage looks at it. Now, some developer, actually an admin, would be responsible for going and making a case for this, and CSI has a strong case today. At MinIO, when we deploy on Kubernetes, the first question we get asked is: is this a CSI? That's as much as they know, even though they have the buying power.
G
It is a CSI driver, and it does many things for many storage platforms. When we eventually ratify this COSI, whether it's "cosi" or "kosi" or something else, we're just going to add it to Trident. It's just going to be another thing that Trident does, and we're going to tell our customers: look, you get Trident, you get CSI, you get the COSI thing; you know, it's all there. Yeah.
A
See, that's why branding is tricky. So I'll tell you this: there are a lot of reasons for and against, and obviously we can't know what the right answer is until some standard is set. I think Kubernetes OSI, or Kube OSI, or "kosi", is better than "cosi".
D
I think I would caution against it, and the reason is there's already a lot of momentum behind the existing naming: you've got a bunch of GitHub repos with that name, we've already got KubeCon talks with that existing name, and people are starting to recognize that this is the thing. Even up at the CNCF TOC meetings it's come up: oh, COSI is starting, and it's a project, and, you know, it's a thing.
G
Me, I'm still in favor of keeping the "C" in the name, as I've said. Yeah.
A
Sorry, yeah. The main thing is, I can tell you pretty confidently, coming from someone who is making the buying decision, it does become an ambiguity, and it's going to help all the vendors here to have this be slightly more distinct from what it is today. Simply being able to call it Kubernetes OSI is different enough that the branding will pick up and make sense for whoever the buyer is. Like someone suggested, Aaron suggested, Service Catalog didn't work out.
A
I used this example last time: if Google were called search.com, or Yahoo were called portal.com, or AWS were called cloud.com, people wouldn't be as interested, and it's been shown; there was a cloud.com, which is...
D
I think it's, you know: how much is it the name and how much is it the product, right? If you talk about, you know, Service Catalog, how much of it was the name versus what was actually delivered? I tend to lean towards: if we deliver a very solid product, in terms of an interface that is reusable, that's portable, that is an attractive, useful thing, it won't matter what name we put on it.
H
There are plenty of voices on the phone that work regularly with, and talk, you know, a lot with enterprises. You know, I came from Dell EMC and spent 20 years working with enterprises, but I don't necessarily think the failure of Service Catalog was the naming. I think it didn't help that the name wasn't very descriptive, but also...
A
No, I get it; we can still call it COSI, there's nothing wrong with that. I just want to rename it to Kubernetes OSI, because we're really not integrating at the container layer either. Calling it Kubernetes OSI: we could still go with the COSI name, except in the documentation we can write this as just Kubernetes OSI. People...
D
The root org is container-storage-interface; it's not csi.
A
Okay, so we have very little time. I still don't believe the cost is high, Saad; I think the cost is going to be way higher in the future, and I'm speaking from experience with vendors and from speaking to others about this name. I want what's best for us, and, you know, I think a good compromise would be Kubernetes OSI.
A
That's all I have, actually. Now, if the community feels strongly that COSI is the right way, we can go forward with that, obviously, but I would ask everyone to just consider it once again. I do spend a lot of time talking about this, so that's where I'm coming from. That's all I have to say. Yeah, yeah.
A
About Kubernetes, I should say.
A
I think the repository names, you know, are really long, just in terms of using them. Obviously it's usable; obviously you can bookmark it, type it out, it's not too big of a deal. But again, making things simpler, making things easier and smaller, and, you know, easier to consume, does help in a big way in the long run.
D
I think there are two things being brought up here. One is that the names of the existing repos are too long; right, you could just collapse that down to cosi-spec if you want. I don't think we should do that; we didn't do that for CSI and it wasn't an issue. The second is about changing the names. On the first part I would lean against it, and on the second, honestly, renaming, I'd say I don't see the benefit, honestly.
A
Yeah, I think what will help is, you know, speaking more about it. We can bring it up maybe once again next year; this is the end of the year, and we've made excellent progress. Right now, COSI is a great name. We also own the kubernetes-osi GitHub org; sorry, the COSI GitHub org.
A
So again, I personally like the name COSI; I haven't had any problems with it. It's from my experience talking to others that I'm bringing this up, and if others also face this, please bring it up. We only have probably a month before we cannot change this, forever.
D
Let's pick it up at the next meeting, next year.
A
Anytime in the future; sorry.
A
Yeah, I mean, it's just: if it's all developers, I want to hear from others too. That's all I'm saying.
D
Thank you all so much, and I'll be out for the next couple weeks, so I'll see everyone next year. All right, happy new year. Yeah, all right, take care.
A
Thank you. I want to say one last thing: again, we've done a great job this year, we have made excellent progress, and I want to congratulate the team. I want to congratulate all the contributors, all the new contributors, and thank you all for the effort you've put in. Next year we're definitely set, we're on the right track to go ahead and achieve alpha and have vendors using this. So thanks again, happy new year to everyone, merry Christmas, and see you next year.
A
Sorry, I couldn't hear something: the org you talked about? Oh yeah, Kubernetes. I think Saad owns this, kubernetes-cosi, or...
A
We own that one; there's also kubernetes-cosy. Oh yeah, I don't know who owns that one. Oh, I thought you owned it; okay, never mind. I don't own that one for sure. Who is...