Description
Kubernetes Storage Special-Interest-Group (SIG) Object Bucket Standup Meeting - 30 November 2020
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
B
Okay, so welcome back everyone. I hope Thanksgiving holidays went well for all of you and everyone is safe.
B
So today I want to first give an update on where we are in terms of development. I'll give the update myself, and then I want to talk about the things that we left off during the last meeting. So let's start with the development. The goal we were aiming towards for this milestone was to do a demo, a demo where we show the create-bucket feature.

That is, the demo is not just a smoke-and-mirrors demo, or something that only shows that it works; rather, the demo is going to be the result of having built all these features in as close to a production-ready manner as possible. That is, with testing, with automated integration tests, and also manual tests with some actual providers.

During the last week, the Thanksgiving week, there was some progress made, especially from people who are abroad, and also some people who had time here were able to contribute. So I want to catch up on what work was done last week and then we'll go from there.
B
So I'm going to open up this slide. Sajan, I believe you worked on an integration effort to run all the different services together and see if the integration goes well manually, correct?
B
Okay, so what was it like? Did you face any challenges, or not?
B
Okay, so if you run into any issues or if you'd like to talk more about it, you can always reach out to me directly on Slack, or you can reach out on the SIG Storage COSI channel, where all of you pitch in. Yeah.
B
C
Yeah, actually not a whole lot of progress, but some progress. For the API repo, I added the basic controller logic in there; that PR is merged. And then there is a second PR to fix the kubeconfig consistency, keeping the kubeconfig, the environment variable, and the command-line flag consistent. It's a small PR; I pushed it a few minutes ago. The third thing I'm working on is the central controller. I also started working on the e2e framework; it looks good.
C
I haven't submitted a PR because I first need to move the code, so I'm in the process of moving the code. I started writing some unit tests for the basic bucket-create logic. There were some issues; I had a one-on-one session with you to fix some of them, like registering the CRDs manually and so on and so forth, but I still ran into some issues with the fake client. So currently I've made some progress, and a PR is shortly due.
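A minimal sketch of the kind of unit test being described: register the CRD types into a scheme by hand and drive the bucket-request objects through controller-runtime's fake client. The import path and type names are assumptions for illustration, not the actual repository layout, and the builder API assumes a recent controller-runtime:

```go
// Minimal sketch of a bucket-request unit test against the fake client.
// The import path and types below are hypothetical stand-ins for the
// COSI API package, used only for illustration.
package buckets_test

import (
	"context"
	"testing"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"

	cosi "example.com/cosi-api/apis/objectstorage/v1alpha1" // hypothetical path
)

func TestBucketRequestIsVisible(t *testing.T) {
	// The CRD types are not in any default scheme, so they must be
	// registered manually or the fake client will reject them.
	scheme := runtime.NewScheme()
	if err := cosi.AddToScheme(scheme); err != nil {
		t.Fatal(err)
	}

	br := &cosi.BucketRequest{
		ObjectMeta: metav1.ObjectMeta{Name: "my-request", Namespace: "default"},
	}

	// Recent controller-runtime releases expose a builder for the fake client.
	c := fake.NewClientBuilder().WithScheme(scheme).WithObjects(br).Build()

	got := &cosi.BucketRequest{}
	key := types.NamespacedName{Namespace: "default", Name: "my-request"}
	if err := c.Get(context.TODO(), key, got); err != nil {
		t.Fatalf("expected the BucketRequest to be retrievable: %v", err)
	}
}
```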
C
There are a few issues: I added three tests, and there is some issue with the cleanup logic, so if there are any buckets or bucket requests hanging around between these tests, the runs hit problems. I think I'm very close to submitting a PR for that. That will add the basic bucket create along with unit tests. Once that PR is good to go, I will put out PRs for the e2e as well as for the prow jobs to build on.
C
So we have a prow job that is merged. I have submitted a prow job for the controller. For the API prow job, I also made some changes, because that prow job was also doing a post-submit, and it is not really building a container. So we don't need a post-submit; we only need a pre-submit job for that, so for the controller...
B
C
My understanding is pre-submit does basic unit testing and all the regular stuff, like, you know, verify logic and all that; post-submit builds the image, pushes the image, and tests it out.
B
Yeah, okay. No, this is good progress, actually a lot of progress last week. So most of the efforts right now seem to be around integration, right, in terms of what is there.
C
C
B
Yeah, all right. So I believe you've already started with the central controller... that's right, okay. And Sajin, I believe you are working on moving the code for the sample provisioner.
B
Great, okay. So we just need to coordinate with Rob, then, to get the sidecar controller moving. And Sweeney, where are the two PRs that we were waiting on, or have they been merged?
C
I think pretty much, for the spec, yeah, almost all the PRs are merged right now. Basically, I am the bottleneck right now. There is one PR that is on the spec; I don't remember exactly what the status of that is. Let me check.
B
C
There is one PR in the spec: the CRDs require this special annotation because we are using k8s.io, right. So that annotation is needed when we generate. I don't know if there is any way, on the types, to specify something so that the annotation gets generated.
C
I haven't figured that out. If we can't, we can hard-code it into the CRDs we generate and check them in, but if anybody wants to regenerate them, they will have a problem.
B
See, I think kustomize might be better here. It's idiomatic, people use it, and people understand it quickly. I think it's harder to make errors if we do that. Okay, I'll send you an example of just adding custom annotations, and we don't even need the JSON patch, because we can apply this annotation to all the objects. It's fine.
B
Okay, sounds good, yeah, all right, okay. So let's move on to the next thing. Last week where we left off, we were talking about adding parameters to the protocol field, and we had a bunch of discussions around what to call them, how they would be useful, and how they would be utilized. The final conclusion was: because we want to standardize this protocol field, as this is what will be sent down to the pod, the best solution right now is to keep it strict. You know, a parameters field, if we had one, would be an opaque map[string]string field and could easily lead to an API that can be broken by different vendors satisfying the same protocol. So we wanted to keep a strict check on what fields go in here, and we went back to our previous model of how a protocol was designed, where there wouldn't be this opaque map[string]string field.
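A minimal Go sketch of the shape being described, one explicitly typed struct per protocol instead of an opaque map[string]string parameters bag; the field names are illustrative, not the exact spec:

```go
// Minimal sketch of a typed protocol union: one struct per protocol with
// explicit fields, instead of an opaque map[string]string parameters bag.
// Field names are illustrative only.
package v1alpha1

type Protocol struct {
	// Exactly one of the following is expected to be set.
	S3        *S3Protocol        `json:"s3,omitempty"`
	AzureBlob *AzureBlobProtocol `json:"azureBlob,omitempty"`
	GCS       *GCSProtocol       `json:"gcs,omitempty"`
}

type S3Protocol struct {
	Endpoint         string `json:"endpoint"`
	Region           string `json:"region"`
	SignatureVersion string `json:"signatureVersion"` // e.g. "s3v4"
	BucketName       string `json:"bucketName"`
}

type AzureBlobProtocol struct {
	StorageAccount string `json:"storageAccount"`
	ContainerName  string `json:"containerName"`
}

type GCSProtocol struct {
	ServiceAccount string `json:"serviceAccount"`
	BucketName     string `json:"bucketName"`
}
```

Because every field is named and typed, two vendors implementing the same protocol cannot quietly disagree about what the keys mean, which is exactly the breakage an opaque map would allow.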
B
So then the question came up: as it is right now the API looks good, but if we wanted to add new fields to the API, how do we go about it? I think that should be something we discuss and document for any vendor for a particular protocol.
B
That's one. The second thing I want to focus on, maybe not today but in the next meetings, would be: what would it take for a new vendor to add their new protocol into this structure?
B
So the two things that I want to get cleared up are how to add new fields to existing APIs or existing protocols, and how to add entirely new protocols.
B
Here... hey Ben, so I don't know if you remember this, but this is where we left off on the 19th.
J
To define the new struct in that union for protocol, and yeah, there will need to be a review process, and we'll need, you know, a bit... I mean.
J
Yeah, I mean it's going to be tricky, because it's one thing if it's an IETF standard protocol, or something you can point to and say, here's where the community sort of agreed that this is a standard, versus someone's proprietary protocol where they say, yeah, I have a product that implements this. I guess a good sort of minimum bar would be: is there a working product that implements it that people can use? Because if someone...
B
I don't know if that'll even do, because I could build a working product that nobody uses. It could be something like some sort of adoption; that could be one parameter. Or the best way would be if there is some mechanism to allow them to do their own protocols their own way without affecting the API; that's even better. So we will still standardize for S3, GCS, and Azure. But you know...
J
The argument was, there's already a way to use your own object bucket protocol without modifying our API, and it exists right now, which is: you just do it inside the pod and you don't use COSI, right. So the question is: does it make sense to have a backdoor mechanism in COSI, some sort of non-standardized support? And I would...
H
J
Yeah, exactly, it would not. So that we could just say: look, until you're ready to propose a proper protocol to the bucket class and our downward-facing API.
B
See, the other thing is, small shops would want to legitimize their protocol by becoming part of Kubernetes. So we have to somehow set a clear guideline or a clear standard on what we like.
J
Yeah, I think we would want to err on the side of too few rather than too many protocols. True, because the whole point is, as Sid was saying, we don't want to be in a world where there are 17 different object protocols; that doesn't benefit anybody. The goal is to gradually coalesce them into a smaller number.
B
I mean, again, in that sense I don't want to take it too far either, because sometimes allowing the market to play itself out and then, you know, collapse and converge is better. So I don't want to force it. I like the idea of having fewer rather than more, but I don't want to enforce it.
B
I want us to be very strict on what we allow, but I don't want to have it as a guideline or a guiding philosophy for us, and we shouldn't dictate anywhere that we think there shouldn't be too many protocols or anything.
G
But then can't you look at the history? Because the spec today specifies S3, Azure, and the Google object storage protocol. Why those three? Why not Swift? I don't know myself, but if there is some bar that is put up, then the existing protocols need to pass that bar as well, of course, and there should be some rationale as to why those three were chosen.
F
Isn't it just another point of view? Doesn't it have to come from the, you know, workload, like a generic open-source project that uses that protocol? Otherwise, I mean, if you're just a provider that also writes the application, then you don't need any...
F
You know, standards around it that much. But if you really want to provide a new protocol with a new set of features and capabilities to the community, then first let's see that the community is using it, like there are open-source projects that use it, right. Yeah.
J
We have an advantage where you don't have to be integrated with COSI for things to work. So we can try to let the horse go first and have the cart come after it, and say, you know, the horse is there: there are Kubernetes workloads using this object protocol today, and they just aren't using COSI. If that is the case, then, okay... I guess the argument I would make in favor of making it easier to add more things is that it actually doesn't cost us very much, right? It's just one extra struct definition in the bucket class and in the downward-facing API, and maybe some validation logic, which would have to be defined at the time the fields were proposed. And there's no maintenance burden other than just having that code lying around; there are not going to be a lot of bugs filed against this thing.
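As a sketch of the cost being described, adding a protocol to a union like the one above would mostly mean one more struct plus a validation rule that exactly one member is set; this reuses the illustrative types from the earlier sketch and is not the actual spec code:

```go
// Sketch of the extra validation a new protocol entry would bring along:
// check that exactly one member of the union is set. Reuses the
// illustrative types from the earlier sketch.
package v1alpha1

import "errors"

func (p *Protocol) Validate() error {
	count := 0
	if p.S3 != nil {
		count++
	}
	if p.AzureBlob != nil {
		count++
	}
	if p.GCS != nil {
		count++
	}
	if count != 1 {
		return errors.New("exactly one protocol must be specified")
	}
	return nil
}
```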
J
That's what I was saying: we're not holding them back. If they really do have the next best protocol, they should be able to implement it and prove it to the world, or at least do step one of proving it to the world, you know, get some implementation out there, before they have to get something into COSI. I don't think we're really holding them back more than, you know, having to do the basics, I mean, in terms of...
B
The first question they'll ask is: is this COSI compatible? I mean, they do that with CSI.
I
All the time.
J
Well, you know, just about every storage or object storage implementation in the world speaks both their protocol of choice and S3, right, or just S3; that seems to be a thing. So if they want to say, yeah, we're COSI compliant, they can implement the S3 bit of it and the COSI bit of it. We're giving Amazon an unfair advantage, is what I'm saying, so yeah, and I don't want to do that. I'm just saying that.
J
Presumably the workloads are being coded to what we support, so by supporting an existing protocol you automatically have compatibility with existing workloads. Every time you add a new protocol, there are zero workloads that support it, because it's new; so by definition you're implementing something that nobody supports.
B
I think that's a good way to put it. I think if there is some sort of adoption in an enterprise situation... and I think we should define that a little more clearly, so we could say an enterprise is defined as an organization with more than, say, 200 employees or something. I don't know, there's a specific definition for it; we can look it up. And we can say if at least two enterprises have adopted this, verifiably, then yeah.
H
I've got a question too. So far there is nothing better than S3, but just for S3 there are multiple versions. Some data stores, for instance, support versioning and some others don't, and now, are we sure that just what I see here, protocol name S3 plus version, can capture those subtle differences?
J
Yeah, I'm actually really not in favor of the version field in here. I think we should get rid of it, because I think it invites exactly this confusion. There are different versions of the S3 protocol, like S3 v2 and S3 v4, and we should be explicit about which one we're talking about, but we're not talking about subtle iterations that Amazon makes in their SDK and the features they support. We're talking about major revisions to the S3 authentication protocol.
H
Before... but some vendors sometimes strictly support a specific syntax, but they don't support, for instance, versioning, which is a major feature, and when you write an application you have to be aware of this. I mean, the semantics of accessing a bucket are very different whether you use a versioned bucket or an unversioned bucket, yeah. Do you see what I mean?
J
I
H
For instance, the first MinIO versions, if I'm not mistaken, like two, three years ago, did not support versioning. So as an application, you want to know that.
F
So you're not talking about knowing it; you're saying that you want to specify your workload to some...
J
If it's something that can't be worked out in-band with negotiation, and you just have to know it a priori, then yeah, that would be an appropriate thing to stick in the protocol field, alongside all the other S3 fields. And I think we should go over those one by one as we become aware of them, yeah, because we want to take that very seriously.
J
But I don't want to just try to sweep it all under the rug, like, yeah, there's going to be a version number and that's going to solve everyone's problem.
H
Because you can have, you know, legit new vendors in the market that offer an amazing product which still doesn't support, for instance, versioning, but is still very interesting for some other applications.
J
B
Okay, so, you know, a few years ago MinIO didn't support allowing you to specify a version, but it supported s3v4.
B
It was still a standard version of S3; it just didn't allow you to choose between the two while creating a bucket. And even now the client allows you to choose a version only for compatibility reasons with the actual S3; the MinIO server itself only talks the v4 version.
J
Yeah, I'm pretty sure everyone only talks... well, I don't know, v4 seems to have really taken hold, but my question is: what will it look like when v5 comes along? You know, there will be some sort of migration period, and then, yeah, I don't know what we'll do in the bucket class, because we'll have to say, okay, for workloads that only speak v4, here's what you need to know, but if you also support v5, here's the additional detail you need to know, but...
H
What you're saying, basically: if, for instance, an application is expecting an S3-compatible storage which understands S3 Select, for instance, which is, you know, an extension for doing big data, it's up to the application basically to detect whether the storage does actually support it.
J
What I was saying a moment ago was: if it's something that is widely supported, then we would consider adding it; if it's just, you know, one or two vendors that have done something weird, you can have a bucket class parameter that allows you, as the human, to look at the bucket class and say, I see that it supports this, and I will use this bucket class for my pod, knowing that I'm going to get the feature. But there wouldn't be a programmatic way to consume it initially, right. So...
B
And S3 Select, again, is not really part of the bucket workflow. S3 Select, for those of you who don't know, allows you to run filters.
B
J
On the S3 API, right. But what I'm saying is: let's say we left this out, and somebody wrote a workload that depended on it, and it ran fine when they actually ran against Amazon, but then they moved to a different cloud that still had S3, but it wasn't Amazon, it was something else that purported to be S3, and it didn't support it. Would they be totally out of luck if they assumed this functionality was there and it in fact wasn't?
B
No. If we model it, then there would be a problem. If we didn't model it, then, you know, it's up to the workload application writer to ensure that the new provider has it. If you start modeling it and have a field for S3 Select, I mean, who is to say the new vendor, the second vendor, is going to even understand that field or support it? It will only work...
F
An implementation, an optional implementation... but I'm not sure if it's dynamic in the code versus being like a build-time kind of configuration. I'm not sure; we can take a look. But if that's...
J
Yeah, if it's the kind of thing that other implementations of S3 start to support, and it becomes the kind of thing that a workload wants to be able to dynamically determine is there or not, yeah, that would be a good candidate for adding to the S3 protocol. If Amazon's the only company that supports it, then it's like, okay, you should know if you're running on Amazon or not, and whether it's...
B
...going to use this feature or not. Here's the thing: Amazon and MinIO support it and, you know, Ceph doesn't as of now, but MinIO supports it with a few disclaimers; basically, it only supports it for CSV format at this point. So I'm kind of wary of modeling this, because we'll end up in, you know, situations like what I said, trying to deal with...
F
It's part of S3, it's part of the object API; it's an object interface in this way.
B
J
But it matters for workload portability, right. If the workload depends on it, you need to know: can I go to another cloud, and will my workloads still work? It seems relevant. And so, to agree with you, Sid, we would not want to include it unless we had a very precise definition of what it was, yeah.
H
J
And it's so that providers could reliably flag whether they supported it or not, and of course the default is not. So if the provider never heard of it, you say, of course I don't support that, I don't know what that is, and that's fine. So, you know, this is a standard thing that this particular provider doesn't have, and we have a clear way of communicating that to workloads if they care.
J
But the bar for including something like that is: multiple vendors would have to have made interoperable implementations, and workloads would have to, you know, be interested in using it and being portable at the same time, and then we'd say, yeah, that's a good candidate for including as a formal capability bit.
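Purely as a hypothetical illustration of such a "formal capability bit", a feature that cleared that bar could be surfaced as a named, default-false flag rather than an opaque parameter:

```go
// Hypothetical "capability bit": a named flag the provisioner sets only
// if it verifiably supports the feature. Nothing like this exists in the
// spec today; the field is purely illustrative.
package v1alpha1

type S3Capabilities struct {
	// Versioning is true only if the provisioner explicitly declares
	// support for bucket versioning; absent or false means unsupported.
	Versioning bool `json:"versioning,omitempty"`
}
```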
B
I mean, so in this case, and I don't know if this is even relevant for all the other protocols or all the other features that we'll deal with, but, you know, the limiting factor for S3 Select is that you need to have a lot of machines in the back end with a lot of compute that can actually do this filtering, and things like Outposts and Snowball, where they ship you the suitcase with the Amazon cloud in it...
B
We just have to model it here. Basically, there are three ways to do encryption, and the hardest one is simply having some sort of key provider, like a HashiCorp Vault, and the configuration would require just, you know, turning on encryption and then pointing to the keystore, yeah. So yeah, I think it would be easier here, compared to CSI, to support encryption, but we'll get to it, you know, in the future. Yeah, I still want to try.
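A hypothetical sketch of the encryption option described, turning encryption on and pointing at an external key provider such as HashiCorp Vault; none of these fields exist in the spec today, this is only the shape:

```go
// Hypothetical encryption configuration: switch encryption on and point
// at an external key provider (for example a HashiCorp Vault endpoint).
// These fields do not exist in the spec; this is only the shape.
package v1alpha1

type EncryptionConfig struct {
	Enabled bool `json:"enabled"`
	// KeyStoreEndpoint is the address of the external key provider from
	// which data keys would be fetched.
	KeyStoreEndpoint string `json:"keyStoreEndpoint,omitempty"`
}
```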
G
One thing that caught my attention: I think, Sid, you said a while ago something about customers and basically an RFP asking, is this COSI compatible? We need to watch out that such questions do not arise. The reason being, unlike CSI, a customer can ask, you know, is it CSI compatible, yes or no, and then they mean either file or block. In the case of COSI...
G
The question should be: is this COSI provisioning for the S3 protocol compatible? Because if a vendor has an implementation for the Azure protocol, but their backend doesn't support the S3 one, and the customer has an S3 application, then, if the question is, is it COSI compatible, the answer is yes, but will the customer's application work with it? Not at all.
B
I see what you're saying. So, I mean, application-to-bucket protocol, as far as we're concerned, is opaque to us. Whether the application works with whatever bucket it's asked for, again, that's a concern for the admin who's provisioning, you know, the bucket for the workload. However...
J
Good, I was gonna say that there are two pieces of value that we're providing. One is basic automation around bucket creation and deletion and lifecycle management, which doesn't exist today, and that's valuable regardless of what protocol you're using. The other piece of value is standardized protocols, such that if you have a workload that's coded to S3 and it works in one cloud, you can move it to another cloud as long as that cloud has a bucket class that supports the S3 protocol, or Azure or whatever. So the first half of it still matters even if you're inventing a new protocol; the customer might be interested in tweaking their application to speak a new protocol if they know that they're getting the first piece of value, which is bucket lifecycle automation.
B
I think his question is more along the lines of: if I have a provider, say Ceph, and Ceph supports S3 and Swift, you know, the vendor for Ceph might say to their customers, hey look, we are COSI compatible, but their customers might be expecting, I don't know, a Swift implementation. Even though they're COSI compatible, it's Ceph's S3 version that is COSI compatible, not Ceph's Swift version. So...
J
B
Yeah, and the other thing is, you know, customers generally do not understand. Even as of today, most customers, at least the ones we deal with at MinIO, do not understand what CSI is, and they just ask the question. It's probably the purchasing manager, whoever that is, a director or senior director, or someone who really has not been, you know, writing code or dealing with any of these things.
B
G
Yeah, my point is, and this is really marketing more than anything else, if one says I am CSI compatible, then as a consumer of that product there are certain things you can reasonably expect from that product, as in, it can give me a file system volume and then I use it. COSI compatible doesn't give you any guarantees whatsoever.
G
B
No, no, that's not true. So, if you think about it, to come back to what you're asking, there are two different fields here. If you look at the bucket class, right in the middle of the slide that I'm showing, there's a provisioner and there's a protocol, so we model both. Yeah, you could say you're COSI compatible, but that problem exists with CSI too. You could say I'm CSI compatible, but some applications need SSDs or NVMes or higher IOPS or network volumes.
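A minimal sketch of the two fields being pointed at on the slide: the bucket class names both a provisioner (which driver creates buckets) and a protocol (what the workload will speak). It reuses the illustrative Protocol union from the earlier sketch and is not the exact spec:

```go
// Sketch of a bucket class carrying both fields discussed here: which
// provisioner (driver) creates the bucket, and which protocol the bucket
// exposes. Reuses the illustrative Protocol union; not the exact spec.
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

type BucketClass struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// Provisioner identifies the driver that creates buckets of this class.
	Provisioner string `json:"provisioner"`
	// Protocol is the data-path protocol buckets of this class expose.
	Protocol Protocol `json:"protocol"`
}
```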
B
Right, so COSI compatible would be, you know, enough, because we modeled both. And let's say they try to use a protocol that doesn't exist here: the bucket creation would fail and the workload would not be able to start. So, you know, this is not going to lead to a silent failure in the application where they go to production and suddenly figure out that something is wrong. They would be able to figure it out at deploy time itself. So I believe COSI compatible is good enough.
G
So let me take a step back. I know the technical implications and so on; I am just wondering about how we want to put this in the field. Do we want people to request COSI compatibility, or do we want people to request, for example, COSI/S3 compatibility, COSI/Azure Blob compatibility, COSI/Google, whatever?
J
And the third one... I'm saying that, not that you would say it out loud, but what it will come to mean in practice is, when you say COSI compatible, you mean: does it do COSI and does it do S3? And if so, my app is good, because that's the lowest common denominator, and yeah, if you can do better, that's better, but...
B
The decision making happens like this: this provider supports the S3 protocol; now, does it work with COSI? So the protocol itself is the first decision, the first constraint, and then comes COSI. Nobody is going to evaluate, you know, a driver or COSI compatibility first and then see if it supports the protocol; it's the other way around, because the requirement is driven by the application rather than by COSI compatibility.
J
Well, the difference is, with CSI your application doesn't need to know anything other than: am I getting a file system or a raw block? Exactly. And here it's unavoidable that you have to know more than that; it's just the nature of object storage. We're not going to escape this trap; it's just because of how object storage is. We can't magically install a layer that makes this not matter. It just matters.
J
F
I have a related question. In the bucket request, the protocol field is a string, right? It's not an object. True. So what we're saying is that the workload can only request the name, basically, and then later, when we come to, you know, different variants or capabilities like we discussed earlier...
B
F
I think, yeah, I think the one reason for specifying the protocol was that we want to assert it in some way. So... I don't know if it's relevant, but I think that was why.
J
F
Yeah, so I just remember that the reason the protocol was added to the bucket request was to declaratively specify, by the workload, which protocol it expects to get, so that it's hardcoded into the workload definition: I'm expecting S3 versus I'm expecting Blob. And not just to rely on the bucket class, which is not, you know, something that, if I migrate the workload, I'm necessarily going to end up with the same bucket classes.
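A minimal sketch of the point being recalled: the bucket request carries the protocol as a plain string, so the workload definition itself declares "I expect s3" independently of whichever bucket class it lands on in a given cluster; illustrative only:

```go
// Sketch of a bucket request that pins the protocol by name, so the
// workload declares what it is coded against independently of the bucket
// class available in any particular cluster. Illustrative only.
package v1alpha1

type BucketRequestSpec struct {
	// BucketClassName selects the class; it may differ across clusters.
	BucketClassName string `json:"bucketClassName,omitempty"`
	// Protocol is the protocol name the workload expects, e.g. "s3".
	Protocol string `json:"protocol"`
}
```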
G
I have to drop; yeah, it's almost time.
B
Thank you, everyone. I think we should conclude at this point and continue the discussion on Thursday. I think we've started a good discussion. We should explore next time, one, how do we add a new protocol, and yeah, that's the main thing. Are there any other questions? Does anyone have anything else you want to keep on the agenda for next week?
C
B
Yeah, yeah, just protocols, true, true, yeah, because some protocols support things such as encryption and some don't.
J
B
Yeah, yeah, okay, so I'll keep that on the agenda for next week... sorry, not next week, for Thursday. Good talk today, and see you all on Thursday. Thank you.