Description
Container Object Storage Interface (COSI) is a Kubernetes Enhancement Proposal (KEP) which aims to define a common interface for Kubernetes users who want to create and manage object storage for their applications. When do users want to use object storage, and how does COSI improve their experience? What is the status of COSI today, and what is coming on the horizon?
A
Now, good morning, good afternoon, good evening, wherever you're hailing from. Welcome to another episode of Cloud Tech Thursdays here on OpenShift TV. I am Chris Short, executive producer of OpenShift TV. Yes, I say it so fast I can't even say it anymore. I'm joined by a wonderful team of folks here. I would like to hand it off to Amy Marrich to tell us what we're talking about today.
B
Hi everyone, I'm Amy Marrich. I'm the OpenStack community person for Red Hat, and I'm going to introduce some of my other cloud-related community cohorts. We have Josh Berkus, who is the Kubernetes community person for Red Hat, and Mike Perez, who is the community architect for the Ceph community. Mike, do you want to go ahead and introduce our guest?
C
Yeah, so we've got Blaine Gardner here, who is a principal software engineer on the Red Hat OpenShift storage team and an upstream Rook maintainer and containerized storage engineer, just to get that right. I'm excited to talk about Kubernetes CSI in particular, which stands for Container Storage Interface. It reached generally available status in, let's see, v1.13, and has since evolved to support a large number of vendors and storage formats. The Container Storage Interface only supports block- and file-level storage at this point. However, the Kubernetes community is bringing forward an initiative called the Container Object Storage Interface or, as we call it, COSI, to focus on the infrastructure deployment of object storage solutions in Kubernetes environments. So, Blaine Gardner, would you like to take it away?
D
Yeah, I'm excited to talk about COSI. It is very analogous to CSI: it's a common object storage API for Kubernetes. I kind of wanted to start from the point of: what is my investment in this? Why do I care about this? I work with Rook and Ceph.
D
I'm a maintainer upstream for the Rook project, dealing with Ceph. Ceph itself is a distributed, software-defined storage solution. It's been around for nearly a decade, or maybe more now, and provides three types of storage: block and file, which are provided as part of the CSI initiative, and it also provides object storage. Rook is the management plane for Ceph in Kubernetes and allows storage to be mounted into whatever application pods you have.
D
And yeah, object storage is one of our big three features. The benefits of object storage that we see for users are that it handles unstructured data, there's no real tiering system, but it does also provide a lot of rich metadata, which makes for easier analytics.
D
Common uses of object storage are things like all the video and audio that you have in, like, Netflix, or gene sequencing files from genomics research, or, you know, a little more mundane, system backups. Something that we do in the Rook project is we use it to house our Helm chart repository, so it's great for packages and containers as well.
D
There are some projects to make this a little easier which use FUSE — a filesystem in user space — to mount the bucket and then be able to access that.
D
But it is still a pretty manual process. In Rook, I think the process is a little better, but it is specific to just what Rook handles. We have custom resources for creating an object store based on Ceph, and then also for creating users for the object store. When you do that, Rook creates a secret that you can then go and query programmatically to get authentication details that you can feed directly into your user application.
D
What if you could claim object storage without specifying exactly what that is? I just want some object storage, and I can claim it with a standard Kubernetes manifest. That claim will be fulfilled automatically, and then I can access that claim programmatically.
D
COSI is a Kubernetes Enhancement Proposal that's currently in alpha, but the goal and the plan is that it actually becomes a part of Kubernetes, similar to the Container Storage Interface, CSI, today. So it will be built into Kubernetes, and it takes inspiration from past successes in Kubernetes, notably persistent volumes, persistent volume claims, storage classes, and then the CSI interfaces I have mentioned.
D
And yeah, I think the chief benefit is that it's more flexible, it's easier to use and, coming from the Rook side, it's easier to implement than writing something ourselves. Another big benefit I see is supporting multi-cloud use cases, which helps reduce vendor lock-in as well.
D
The technicals for how COSI achieves this is: it provides resources that users and administrators can create to actually get object storage, and from past successes there are analogs here. The BucketClass kind of matches up to what a StorageClass is, a BucketRequest matches up to a PersistentVolumeClaim, and the Bucket matches up to a PersistentVolume. So an administrator can set up a bucket class, a user can request a bucket from that class, and then they'll get a bucket that is created.
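As a rough sketch of the analogy Blaine describes, the COSI alpha resources map onto the familiar StorageClass/PVC/PV model. The manifests below are written as Python dicts purely for illustration; the API group, provisioner name, and field names are assumptions modeled on the alpha KEP, not the final API.

```python
# Admin-side: a BucketClass plays the role a StorageClass plays for volumes.
bucket_class = {
    "apiVersion": "objectstorage.k8s.io/v1alpha1",  # assumed alpha API group
    "kind": "BucketClass",
    "metadata": {"name": "example-class"},
    "provisioner": "rook-ceph.objectstorage.example",  # hypothetical driver name
}

# User-side: a BucketRequest plays the role of a PersistentVolumeClaim.
bucket_request = {
    "apiVersion": "objectstorage.k8s.io/v1alpha1",
    "kind": "BucketRequest",
    "metadata": {"name": "my-backups"},
    "spec": {"bucketClassName": bucket_class["metadata"]["name"]},
}

# The analogy itself, in table form:
analogs = {
    "BucketClass": "StorageClass",          # admin-defined template
    "BucketRequest": "PersistentVolumeClaim",  # the user's claim
    "Bucket": "PersistentVolume",           # the provisioned resource
}
for cosi, classic in analogs.items():
    print(f"{cosi} ~ {classic}")
```

The fulfilled Bucket, like a PV, is created by the system rather than written by hand.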
B
So you're talking about the storage classes and the persistent volumes, and then you just mentioned block storage as well. So are these persistent volumes block storage, or totally unrelated to the object storage?
D
Object storage is its own storage paradigm, where you do have storage you can claim, but I think it's just not been around as long, it's not as well defined, and there's a lot more variability. So I think CSI focuses a little bit more on what's traditionally been available, which is block and file storage.
D
Yeah, CSI also has helper sidecars that are used to help provision and mount storage. There are also helpers for things like expanding a volume. COSI has plans, following that design, which has been pretty successful, to also implement sidecars, and those primarily help vendors. That's not something users really care about, but it is something that I, working on this in Rook, will care about. And then COSI itself also will have a controller — or has a controller — which manages some of the more vendor-neutral behaviors.
D
D
B
D
A
D
Yeah, I mean, I think Ceph being distributed can actually improve the performance of even slow disks if you have the right setup, and that can be true for object storage also.
E
D
Sure, sure, I'll continue on, then. I think I got all the content here on the slide.
D
I do want to talk a little bit about how Rook uses COSI currently. There is what I have often explained as a proof-of-concept predecessor to COSI, which is the lib-bucket-provisioner API, which Rook uses to provide object bucket claims. This allows users to claim object storage which is fulfilled automatically, which we implement in Rook, but it's not specific to Rook.
D
These OBCs can be implemented by other operators as well, but this is a nice starting point. This is the kind of concept for COSI, and it's something we see our users asking questions about, and I think that's a really good sign for COSI. That's why we're engaged in the COSI community, and that's why we're starting some work to integrate with the alpha version and hopefully be one of the first people on this train. So, yeah.
D
Just to finish up, I want to thank some people who have helped me along my COSI journey — Chris, John, Sid and Jeff — who are both upstream and also my co-workers here at Red Hat, and I think they deserve a shout-out. Yeah, I think we can open it up to the floor for Q&A.
C
All right, thanks, Blaine. So we talked specifically about the Container Storage Interface and that being focused on file-level and block-level storage. I think it might be helpful for the audience to take a step back and understand the use cases and how these are actually being used by applications, so that people understand the different types of storage, because you hear "file" and "block" and you might not know the difference, or when to use one or the other. Sometimes it is actually applicable to use both.
C
You know, in different ways. But maybe you want to talk about — coming from OpenStack specifically, you know, we've used a lot of virtual machine kinds of deployments, using block storage to clone images of virtual machines, where those clones would be the root filesystems for those virtual machines, and then, in theory, with it being distributed through Ceph...
C
...your virtual machine is fine. And in terms of those clones, you're saving a lot of data, because you don't have to rewrite those images but instead do writes on top of those images, so it provides storage efficiency. However, maybe with object-level storage we could talk about...
C
...you know, an application using boto or something like that, and how that all hooks into the operator deploying a pool — an object storage pool — through Rook, and then how an application would actually utilize that.
D
Yeah, okay, I think there's a lot there to talk about. I guess first, talking about block and file: I do also tend to think about block storage like virtual machines. You want a virtual machine, you have a block device that you install it on, just like any sort of raw hard drive that you install, you know, Linux on in a server.
D
D
E
D
File storage is a pretty easy go-to when I just want to put some files in and kind of have them already arranged in a structure; things like that fit pretty well there. But there is — because, I guess, we're talking very technically, all the Linux inodes and things — there is system metadata that's required for filesystems to locate things.
D
Yeah, from OpenStack, if I remember correctly, the system that hosts the images that you would use to install a VM on a block device is generally object storage. It's, like, Manila in OpenStack — Glance, yeah, Glance, sorry! Oh, I think Manila was file storage. That's right, yes. It's been a while, but yeah. So OpenStack is a pretty good example of where block, file and object are all used.
E
Since you brought up databases, I can actually give you a good example of a common practice from the database world, which is that for primary storage — for the actual, you know, live data — databases use block storage: either local block storage or, if they're using network storage, network block storage. But if it's available, we actually like to put backups in object storage, because we think of each backup as a single unit. We're looking for maximum availability for it, and object storage works much better for that.
E
Following up on Mike's question, I have a specific question about how this is going to work with...
E
How is it going to work with applications? Right now, if I'm running a database and I'm sending my backups to object storage, and I'm doing it in Python, then I'd use boto if I'm on AWS, and I'd use the Swift library if I'm on OpenStack, and presumably I would use something else for Google Cloud. But I don't actually know that much about Google Cloud storage.
E
D
I haven't used boto, so I don't know exactly what it provides, but I can say that for AWS it provides an S3 interface, and Rook — well, Ceph, really — is the thing that provides an S3-compliant interface.
D
There are sometimes features that Amazon implements a little bit before Ceph might get them, but largely the base is still there. COSI just makes it easier, in a Kubernetes-native way, to just create a manifest that says: I want some object storage; I need, like, 30 gigabytes of it, or I need three terabytes of it, or, you know...
D
Maybe I need a petabyte of it, if it's something really huge. Then that will go and look at the bucket class and see: okay, this ends up being provisioned by Rook, or this ends up being provisioned by Amazon. And then the operator for Rook, or the operator for Amazon, will go and create the storage. That operator will then figure out what the actual credential details are and populate a secret, and then the user can look at the information from what they requested and see.
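From the application's side, the populated secret typically shows up as environment variables or a mounted file. A minimal sketch of consuming it, assuming the operator projects the generated secret and config into the pod using the variable names Rook's OBC implementation uses (an assumption, not part of the COSI spec):

```python
import os

def object_storage_config(env=os.environ):
    """Assemble S3 connection details from operator-populated env vars.

    BUCKET_HOST/BUCKET_PORT/BUCKET_NAME and the AWS_* key names follow
    Rook's object-bucket-claim convention; treat them as assumptions.
    """
    return {
        "endpoint": f"http://{env['BUCKET_HOST']}:{env.get('BUCKET_PORT', '80')}",
        "bucket": env["BUCKET_NAME"],
        "access_key": env["AWS_ACCESS_KEY_ID"],
        "secret_key": env["AWS_SECRET_ACCESS_KEY"],
    }

# Example with the kind of values an operator might populate:
fake_env = {
    "BUCKET_HOST": "rook-ceph-rgw.rook-ceph.svc",
    "BUCKET_PORT": "80",
    "BUCKET_NAME": "ceph-bkt-1234",
    "AWS_ACCESS_KEY_ID": "EXAMPLEKEY",
    "AWS_SECRET_ACCESS_KEY": "examplesecret",
}
cfg = object_storage_config(fake_env)
print(cfg["endpoint"])
```

The application never needs to know whether Rook or Amazon fulfilled the claim; it just reads what was injected.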
D
Yeah, I would say that's true; it's just a matter of whether or not an implementation for that storage exists.
E
For what it's worth, object storage is the example of why the line between what is a filesystem and what is a database is not only extremely thin but non-existent at this point. Yeah, whether it's a filesystem or a database really depends on which project or vendor it came from, right?
A
C
I wanted to add something to what Blaine was mentioning, specifically with Ceph supporting S3's API. In GitHub, under the Ceph GitHub org, you can look for the s3-tests repository, and what's neat about that library is that it is the whole test suite that we are using to ensure Ceph is actually compatible. We actually found that other libraries use it as well, to make sure that they're compatible with S3.
C
So there's been a lot of investment by the Ceph community in terms of making sure that this works. And what's great is that with the Rook development, as Blaine was mentioning, being focused around the S3 API: if you can ensure Ceph works with it, then the S3 API is going to work with it too. So you could, yeah, provision both in Amazon's EC2, if you wanted to do your own object storage inside of there and not use S3.
E
Well, I mean, part of it is, I think, I'm still wondering about the whole client-side language thing, because I've worked with boto extensively, and the problem is that boto is designed to interface with AWS, so it handles all of the authorizations, for example, as...
E
...part of the class. And obviously, when it's accessing COSI, we're not going to want to have it do that — or we are going to want to have it do that, because presumably the authorization is actually being handled when you instantiate the connection? Yes? Yeah, so... wait.
C
Actually, can I speak on this one? Yes.
C
Yeah, I think there might be confusion. With boto, we're thinking of the client — the user that is actually going to be reading and writing the data. What we're actually doing with COSI is more on the control plane side: the operator who's actually making that bucket, that object pool, and then buckets available to users, that then would use it with boto.
C
So boto, then, is connecting and authorizing through Ceph itself — not through monitors, but actually with your access key and secret, as I understand it. And then you, as the operator, are able to give access to those buckets within that pool to a set of users, and then, from the boto interface, you are doing your puts and your gets to those buckets to interface with them from your application.
C
No, because COSI is not in the data path at all. You have your application, and then you have Ceph over here, and with Ceph your application is talking directly to Ceph, reading and writing, because COSI is not in the picture. It's the operator who has to make the servers do something and have the buckets available, so that the application can actually read and write data to them. So: operator.
B
E
Okay, so COSI's role is effectively wholly administrative, yeah — that's what it sounds like. So in some ways making it analogous to CSI is a little bit deceptive, because CSI actually attaches, you know, a block or a file system to the individual containers, which...
B
D
Yeah, I think that is kind of where the analogy breaks down. CSI is also not in the data path, but it does also help to mount block or file storage automatically into containers, and that's not quite as possible with object storage, because object storage applications have to say: here are my authentication credentials — my username, my key — and I need to, you know, authenticate to the S3 interface, whether that is backed by Amazon or whether that's backed by Ceph. That is just sort of what the application knows, and I think a lot of times...
D
...that is, to my understanding, mounted in different ways. It can be an environment variable that says here are my credentials, or it can be, like, a file that the application reads.
C
And there's also — I mean, in CSI, you know, we have volume claims, for example, and we still have sort of that concept, as I understand it, even with the existing Container Storage Interface that we're using today, of making certain claims. What exactly are those resources? I'm pulling up the source code now to remember, but you might know off the top of your head.
D
The — do you mean, like, the...
C
Like — as I understand it, those bucket claims are to actually provision the bucket that the application would use, and you're able to set various additional configs in the YAML, such as, like, the maximum number of objects that you want to allow as a policy; you can also set the max size for that bucket as well. So in a way it's like you provision volumes to containers and then they mount them, or they can use an HTTPS interface, for example, and write out to buckets instead. Either way, from the Ceph standpoint...
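The kind of claim Mike is describing can be sketched concretely. The shape below follows Rook's ObjectBucketClaim (the lib-bucket-provisioner predecessor to COSI mentioned earlier), written as a Python dict rather than YAML purely for illustration; field names are modeled on Rook's documentation, but treat the specifics as illustrative.

```python
# A bucket claim with the quota-style additional configs mentioned above.
object_bucket_claim = {
    "apiVersion": "objectbucket.io/v1alpha1",
    "kind": "ObjectBucketClaim",
    "metadata": {"name": "ceph-bucket"},
    "spec": {
        # Prefix from which a unique bucket name is generated.
        "generateBucketName": "ceph-bkt",
        # Which object store (via its storage class) fulfills the claim.
        "storageClassName": "rook-ceph-bucket",
        "additionalConfig": {
            "maxObjects": "10000",  # per-bucket object-count policy
            "maxSize": "2G",        # per-bucket size cap
        },
    },
}
print(object_bucket_claim["spec"]["additionalConfig"]["maxObjects"])
```

Applying the equivalent YAML with kubectl would cause the operator to create the bucket and emit the access secret described earlier.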
E
...it's just sliced out into different pools, if I'm not going off into left field here. I'm really interested in that second path, because that's the new thing with this, right? Because I'm used to: if I've got an application that's Python and AWS, then I run my thing, I have boto, boto creates a connection — an HTTPS connection — to some Amazon S3 bucket, which is, you know, purely sort of a network service within boto.
E
It sounds like that's going to be enabled, but it also sounds like there's kind of a second path, around making a bucket claim, that's different from that — that's not HTTPS. So what I want to know is how that second path works, because, among other things, I've never found the authenticate-and-authorize-for-every-single-request flow ideal.
D
A
E
I'm trying to figure out what the role of the bucket claim is in that, right? Because if you're just doing pure HTTPS, authorization, et cetera, there's no real role for a claim, right? The volume exists, and if you have the correct credentials then you can access it; or the bucket exists, you have the correct credentials, you can access it. So what role does...
E
D
Sorry — I guess, if I step back a little bit and assume that I'm a user of one of these clusters: the COSI operator for Amazon is running; I can write a manifest that says, okay, I need object storage, and so I'm going to make an object bucket claim resource.
D
That says: I want object storage. And then COSI will see that, and the Amazon operator will actually create that storage and provision it and make sure that a bucket exists and that a user exists for that bucket. So the details around actually creating an object storage bucket, and a user to access that bucket, are done at the Kubernetes level, so that you can write an application that says:
D
Okay, I'm connected to some object storage that uses S3, and so I'll use part of the boto library, maybe, to read an environment variable that says here's my user, and another one that says here's my access token, and then I will just access whatever bucket from there. I will upload my backups, or I will upload whatever files I have.
D
One of the things that COSI also allows is applications that are ephemeral. So if an application wants object storage but doesn't need it for a long time, then COSI also handles the "create this bucket" and then "delete this bucket when the application is done running". It abstracts away the create-this-bucket, make-a-user-for-this-bucket, give-me-access-details steps — that is all handled — and in that way it...
E
Okay. I mean, obviously that's a good thing, because, for example, within the Python world there are a lot of higher-level applications that are written that embed boto, and so having that still work is terrific. I do actually kind of wonder about sort of limiting ourselves to the S3 workflow, just because one of the characteristics of S3 is it's slow, right? I mean, deliberately so, right? It's designed to be slow but highly available. But that's not a characteristic of object storage, necessarily; that's the characteristic of S3, right?
D
I think that's a good question. COSI is also focused on being sort of vendor-agnostic, and I would consider S3 kind of part of a vendor. Like, Ceph provides an S3 interface, but you also could just talk to the RADOS Gateway directly, so there could be a sort of driver in place for that. In the COSI upstream meetings...
D
...you know, we talk about all the big cloud providers. Azure has their own non-S3 interface — I think they call it Blob Storage — and they have their own access methods. So just like there might be an Amazon S3 driver of sorts with COSI, there also would be an Azure one, or one for GCP. I think they might also use an S3-compatible interface, but I could be wrong about that. Yeah.
E
I mean, all this is also disappointing to me because, of course, the OpenStack folks built Swift to be an open standard, but everybody is focused on S3-compatible, yeah. It's just — I'm looking at this, and this is just a really small example, so pardon me for being very narrow here, but all of my personal, hands-on experience with object storage is through my time as a database engineer, right. And so, to give you an example of sort of fast-versus-slow use...
E
...you know, high throughput, but can be slow to access. But one thing that you don't do in the Postgres world, if you're just looking at S3, is — unless your database has a very low quantity of changes — you wouldn't put your replication logs into S3, because it's too slow. Because, like, on a database that's really busy, you might be producing...
E
...you know, 16 replication logs a second, which doesn't work with the sort of S3 workflow where it might take several seconds to upload an individual one. But if you actually had fast object storage available, then suddenly that becomes a viable option, and you could do something like, say, broadcast replication, where one database writer puts those replication logs into object storage and then multiple readers are able to pick up individual copies of those as objects.
D
I think that should already be possible. I guess I have a bit of a follow-up question, which is: when you say S3 is slow, are you talking sort of specifically about Amazon's implementation of S3, or is it the S3 interface itself?
E
Yeah, both. Okay, both, right — both. There's the whole network implementation that is optimized for high throughput, high availability, slow access, right? You've got to make your cloud trade-offs; that's the set of trade-offs they decided to make, and it is super useful for one set of use cases. But then the second thing is: every request I make, every object...
E
...I write goes through the entire authentication and authorization path via the API, and that authentication and authorization takes a significant amount of time when you're comparing it to the amount of time that's required for a local file, right? Right.
D
Yeah, yeah. I think I can provide a little bit of an answer here. So let's say that Ceph is able to provide the S3 interface faster than Amazon is able to provide it. In the same sort of multi-cloud cluster — or even if a Kubernetes cluster were just running in Amazon — you could create a bucket class that is, you know, standard S3, and you could create another bucket class...
D
...that is, you know, called, like, "fast S3". And if I, as a user, am like: okay, I have a database, but it's very active and I want fast S3 backups, then I could create a bucket request for that fast S3 class and get that faster storage. Versus, if I'm like...
D
You would just say in the specification for the application: this is fast S3, and this one's not.
C
No, just — or you just...
A
C
So, as the operator, I'm going to use kubectl to, you know, change the state with CRDs, as I understand it. So, changing the state: here's a bucket that I'm going to go ahead and provision by claiming; here's a set of attributes; here are the users that have access to it; here are the policies, the quotas that should be set on it; go ahead and provision it. And then the Rook operator is going to go ahead and kick things off with the orchestrator API.
C
As I understand it, from that standpoint on Ceph: you have your physical racks — one fast with SSDs, one with spinning disks — and you set it up where they're each their own pools, and you provision those buckets on the solid-state drives, and then from there the access key and the secret that you give to the user...
C
...in theory, you don't even have to change anything in your applications, so it can begin doing reads and writes, and from there it authenticates with the access key and secret key. And, as I mentioned, COSI's not in the picture at all at this point; it's just between the application and Ceph directly.
E
No, no, it's not accessing — it's accessing the object storage. Like, here's the core problem with the S3 API in terms of write speed, right: I'm producing 16 objects a second, and for each one of those objects I need to authorize and authenticate before I can write it. In terms of the speed of writes, that's a lot of time.
C
Okay, well, I can speak from the RADOS Gateway standpoint, though: it's one authentication, and then from there — for a single object, because you're not going to upload, like, an entire gig in one chunk — it breaks it up into chunks and uses multipart uploading, where the authentication is only done that one time, because at that point you have a token and you're...
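The multipart idea Mike describes — authenticate once, split a large payload into parts, upload each part, then complete — can be sketched locally. The splitter below is a stand-in for the client-side chunking step, not the RGW/S3 multipart API itself.

```python
def split_parts(data: bytes, part_size: int):
    """Split a payload into fixed-size parts, as a multipart upload would."""
    return [data[i:i + part_size] for i in range(0, len(data), part_size)]

payload = b"x" * 1000          # pretend this is a large object
parts = split_parts(payload, 256)

# One "authentication" covers the whole upload session...
session_token = "token-issued-once"  # illustrative placeholder

# ...each part is uploaded under that session, and the parts are
# reassembled server-side when the upload is completed.
reassembled = b"".join(parts)
print(len(parts), reassembled == payload)
```

So the per-request auth cost Josh worries about is paid once per upload session, not once per chunk.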
C
You know, I don't think that affects our performance, because it's just how you tell it what to do. Ceph is, you know, in itself different in terms of its implementation from whatever S3 is using underneath. Okay, but...
E
E
D
I think the answer is: yes, the application would have to have a little bit of leaky information in that case, to know that the S3 access credentials it was provided will work via RADOS.
D
Alternatively, there's definitely room for us to explore whether providing a RADOS-like plug-in to COSI is also something that makes sense. So even if it is ultimately backed by the same object store, being able to have two different interfaces to it is something I think would be possible. We haven't really planned that far ahead; I think most people are really interested in S3, so that's been our focus, but yeah.
D
A
D
Yeah, COSI has several different repositories — I think there are some links that should be shared with you all. It has a controller and an API; I think there's another interface as well. Honestly, the best places to go are probably the Kubernetes Slack channel — look for the container object storage interface channel; I think it's just called "cosi"...
D
...you could join, yeah. The Kubernetes community calendar also has meetings on Mondays and Thursdays, sometime around, like, 11 or noon Mountain Time — there's actually a meeting going on right now for COSI, while we're talking — but yeah, on the Kubernetes community calendar there's the information. I think the meeting on Monday is a little strange: it will request a password, which is just, like, seven-seven-seven-seven-seven, to get in. But the community is very, very welcoming, and there's always a lot of discussion about, you know...
D
What do users need from this? What do we need for alpha? How do we envision this kind of evolving in the future?
D
A
Drop these links again for...
D
Yeah, I guess a shameless plug: we are dropping the newest Rook 1.6 release later today — that's the plan, unless something goes terribly wrong, but we have a pretty smooth process at this point. So yeah, that has some changes I made to be able to handle much larger Rook Ceph clusters, as far as, like, number of OSDs, and to help upgrade them a little faster.
D
We're also able to support multiple filesystems, which has been a beta feature but is now a GA feature, and then we have some first-class NFS gateway support also.
B
E
...object storage, right — using it for database use cases, right? Yeah, I mean, I also used it for CDN use cases, but frankly I feel like the CDN use case is a lot more cut-and-dried, yeah. I mean, maybe — I was never actually, you know, in charge of a CDN; if I was actually running the CDN rather than just using it, I might have a different opinion.
E
But, you know, it's the "hey, what applications are not currently making much use of object storage?" question — particularly because our whole conception of object storage is S3 and that sort of workflow, and that is not inherent in object storage. That's just one example of object storage, and it made certain specific trade-offs.
D
Oh, I just wanted to say — for all of you here I'm chatting with, but anyone also watching — if you're like, "wow, this COSI sounds really exciting, I would like to be able to make use of this"...
D
Obviously it's in alpha and things aren't GA yet, but if you're interested in it, I think it's also really helpful for the upstream community to understand what use cases people are excited to use it for, and that also helps us make a better design, because we can understand what the things are that people really care about. So yeah, definitely check in on the Slack channel, or, like, the stand-up meeting, or either one of the COSI meetings.
D
D
A
B
B
E
...developments, and now, when I see that, you know, COSI is going for beta in the Kubernetes PR stream, I'll know what it is that you're talking about.
D
Yeah, I'm really excited for that moment, particularly. I think that will be when we've seen the alpha stages, learned more, and can really make something great, honestly.
A
D
E
It's actually — it's been weird. I mean, I mainly have experience with the Amazon flavor, yeah, and it's been weird to me how slow they have been to enable sort of storage things on their own distro of Kubernetes, because they have all of these storage options.
E
A
E
Our next — we're not going to have Cloud Tech Thursday on the next cycle, two weeks from now, because...
A
E
But if you're into watching this channel, OpenShift TV is going to be hosting a series of office hours the week of KubeCon Europe, most of them after the end of the day for Europeans, and we'll touch on a lot of the same topics that you normally see on this channel. So do look at OpenShift TV for that schedule.
A
Yeah, if you would be so kind as to subscribe to the calendar, I would really appreciate that. That way you know what's coming up next, and you can see if it aligns with your schedule. So, awesome.
B
All the projects — all the projects under the foundation itself, so Kata, Zuul, Airship...
A
All
right,
wonderful,
you
say:
d,
okay,
thank
you,
amy,
all
right,
folks,
we're
gonna
sign
off
for
the
day.
We
appreciate
you
tuning
in
hope.
You
learned
a
lot
and
we
will
see
you
soon.