From YouTube: CNCF Storage Working Group Call - 2018-01-10
Hey Paul, are you out there? Does your audio work?
All right, a little bit low on attendance, but you know we're recording this, so we will definitely send it out to the whole group once we're done. We'll get kicked off here. So Ben's not going to make it, he's got a conflict, so I'll be chairing this morning. On the agenda today we have a couple of things: one is to talk about the Open Service Broker API, so we have Paul Morie from Red Hat.
Thanks, Clint. Hey everybody, my name is Paul Morie. I work for Red Hat on the Open Service Broker API and different things in the Kubernetes space. The most relevant to this conversation is the Kubernetes Service Catalog, and I'm going to give a short talk today about cloud native storage and the Open Service Broker API. As far as our agenda for this little talk: first I'm going to give an overview of the Open Service Broker API.
So the premise of the Open Service Broker API, and its value proposition, is that users and applications need access to, as we usually say, services and resources. But since this is the storage working group, I'm also going to call out that storage is the thing that they need access to. Those of you that have a history of working in large organizations may be familiar with lengthy and sometimes convoluted procurement processes
to get new resources or services provisioned. The value proposition of the Open Service Broker API is that it lets a service provider integrate with multiple platforms with a single API that allows users of those platforms to, in an on-demand fashion, provision new instances of a service, where a service is just some capability. We'll talk a little bit more about that in a moment, but it allows users to provision new instances of things and to bind those instances to their applications, where of course an application might be something at the user-facing application level.
It might be something more infrastructure; it runs the gamut. The Open Service Broker API itself defines an HTTP interface between a platform and entities that provide a set of capabilities or services, which we call service brokers, and a service broker is a component of a service that implements the Open Service Broker API. To put that concretely, the canonical example that we use for discussing this is that a service might be a database as a service.
I'm going to talk a little bit about the operations of this API, and because, in my professional life, I work on the Kubernetes Service Catalog, which is an integration between Kubernetes and the Open Service Broker API, I can do that in the context of the Kubernetes Service Catalog. A brief history lesson before we begin: the Open Service Broker API started out as the Cloud Foundry Service Broker API.
It has gone through a couple of major revisions in its lifetime as the Cloud Foundry Service Broker API, and in 2016 users of Cloud Foundry were coming to the Cloud Foundry folks and saying: we like this idea, we want to be able to use this from other platforms. At that point it was decided that the right future for the API was to become something more open than just the Cloud Foundry community, and so a new working group was created, the API was renamed, and since then we've had a...
We won't remember that in the future, so right now we're on the old Zoom invite, which is in the meeting minutes. Today I'm going to be chairing this because Ben isn't able to make it, but we've got a couple of items on the agenda in the meeting minutes. The first is that we have Paul Morie from Red Hat, and he's going to be discussing the Open Service Broker and how this fits in this cloud native storage world that we've been discussing, and second we've got an item to discuss the Vitess project.
Thanks. Hello everybody, my name is Paul Morie. I work for Red Hat on the Open Service Broker API and different things in the Kubernetes ecosystem, and I'm going to give a short talk today about the intersection of cloud native storage and the Open Service Broker API. Our agenda today includes an overview of the Open Service Broker API, a call-out of touch points between storage and the Open Service Broker API, and then some examples that I am aware of, of cloud-native-storage-type integrations via the Open Service Broker API.
So the value proposition of the Open Service Broker API is that it provides a way for others to create components that know how to provision new instances of capabilities (could someone mute their microphones, please?). The API allows service providers to create components, called service brokers, that know how to provision new instances of resources and how to create new bindings to those resources.
There is also a lot of lengthy and high-quality documentation, for folks that have a background in Cloud Foundry, on the exact integrations between Cloud Foundry and the Open Service Broker API, but we're going to talk Kubernetes today, because that's most familiar to me. As I said, the Kubernetes Service Catalog is an integration between Kubernetes and brokers that implement the Open Service Broker API. It is shaped similarly to Kubernetes, and as a user of the Kubernetes Service Catalog, you use API resources that will feel very familiar,
hopefully, if you have experience in Kubernetes. That allows you to provision new instances and make bindings to them without having to interact with the Open Service Broker API directly. That's something I want to call out as a point of moderate confusion sometimes in our community: the Open Service Broker API is really meant for platforms to integrate with, rather than end users. So let's take a look at... yes, I'll skip this slide here. Let's take a look at the fundamental operations of the Open Service Broker API.
Central to this discussion is provisioning new resources. There's an operation called provision, which creates a new instance of a service or a resource. To consume an instance of a resource or a service in an application, there's an operation called bind that, for services that implement it, allows the service broker to return information like credentials, coordinates, and quality-of-service settings for applications that want to use a service. Provision and bind have symmetric pairs, deprovision and unbind, to undo them.
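As a concrete sketch, the four operations described above map onto HTTP calls roughly like this. This is a minimal illustration assuming the spec's v2-style endpoint layout; the helper function names are invented, not part of any real client library.

```python
# Minimal sketch of the four core Open Service Broker API operations as
# request builders. Paths follow the spec's v2-style layout; the helper
# names and IDs are illustrative only.

def provision_request(instance_id, service_id, plan_id, parameters=None):
    """Build the PUT request that provisions a new service instance."""
    return {
        "method": "PUT",
        "path": f"/v2/service_instances/{instance_id}",
        "body": {
            "service_id": service_id,
            "plan_id": plan_id,
            "parameters": parameters or {},
        },
    }

def bind_request(instance_id, binding_id, service_id, plan_id):
    """Build the PUT request that binds an application to an instance.

    The broker's response carries credentials/coordinates for the service.
    """
    return {
        "method": "PUT",
        "path": f"/v2/service_instances/{instance_id}/service_bindings/{binding_id}",
        "body": {"service_id": service_id, "plan_id": plan_id},
    }

def deprovision_request(instance_id):
    """Symmetric undo of provision."""
    return {"method": "DELETE", "path": f"/v2/service_instances/{instance_id}"}

def unbind_request(instance_id, binding_id):
    """Symmetric undo of bind."""
    return {
        "method": "DELETE",
        "path": f"/v2/service_instances/{instance_id}/service_bindings/{binding_id}",
    }
```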
So, in the context of the Kubernetes Service Catalog, which is very similar in terms of the generalities of the workflow to using this API in Cloud Foundry, the first step is to add a service broker to the catalog. In the Kubernetes Service Catalog, you do this by creating a ClusterServiceBroker resource, and that tells the service catalog that there is a new broker to consume. So what happens after you create this resource?
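For illustration, a ClusterServiceBroker registration might look like the following, shown here as a Python dict mirroring the YAML manifest a user would apply. The broker name and URL are hypothetical.

```python
# Sketch of a ClusterServiceBroker resource, written as a Python dict that
# mirrors the YAML manifest a user would kubectl-apply. The name and URL
# are made up for illustration.

CLUSTER_SERVICE_BROKER = {
    "apiVersion": "servicecatalog.k8s.io/v1beta1",
    "kind": "ClusterServiceBroker",
    "metadata": {"name": "example-broker"},
    # The service catalog controller calls this endpoint's catalog
    # operation to discover what the broker offers.
    "spec": {"url": "https://broker.example.com"},
}

def broker_url(resource):
    """Where the catalog controller will reach the broker."""
    return resource["spec"]["url"]
```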
The controller backing the Service Catalog API is watching the API and says: hey, there's a new ClusterServiceBroker that I want to consume; I'm going to go call that broker's catalog endpoint. We do have some unfortunate naming collisions in this space, so I usually try to be very good about disambiguating them (if there's a question, please holler and we'll disambiguate it), but the catalog controller calls the broker's catalog endpoint and gets back a payload from the broker that says what services it offers.
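A hedged sketch of the kind of payload the catalog endpoint returns: a list of services, each with one or more plans. All names and IDs below are invented for illustration.

```python
# Illustrative catalog payload for a database-as-a-service broker.
# Service/plan names and IDs are made up; real brokers define their own.

EXAMPLE_CATALOG = {
    "services": [
        {
            "name": "example-dbaas",
            "id": "svc-1234",
            "description": "Database as a service (illustrative)",
            "bindable": True,
            "plans": [
                {"name": "small", "id": "plan-1", "description": "1 GB, no HA"},
                {"name": "large", "id": "plan-2", "description": "100 GB, replicated"},
            ],
        }
    ]
}

def bindable_services(catalog):
    """Return the names of services a user could bind applications to."""
    return [s["name"] for s in catalog["services"] if s.get("bindable")]
```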
What happens at that point is that the user creates a new ServiceInstance resource, and that resource has information about the service, and the tier of that service, that the user wants to use. You can pass parameters to service instances to set knobs that that service allows you to set. The catalog controller handles communicating with the broker and calling the provision operation on that broker. The broker does the work of actually provisioning the resource and reports back status to the caller, saying either "I did this" or "I accepted your request and I'm going to do it asynchronously."
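To make the provisioning half concrete, a ServiceInstance might look like this, again as a Python dict mirroring the YAML manifest. The class, plan, and parameter values are illustrative.

```python
# Sketch of a ServiceInstance resource. The class/plan names select which
# service and tier to provision; "parameters" are the service-specific
# knobs passed through to the broker. All values are invented.

SERVICE_INSTANCE = {
    "apiVersion": "servicecatalog.k8s.io/v1beta1",
    "kind": "ServiceInstance",
    "metadata": {"name": "my-db", "namespace": "default"},
    "spec": {
        "clusterServiceClassExternalName": "example-dbaas",
        "clusterServicePlanExternalName": "small",
        "parameters": {"disk_gb": 10},
    },
}

def provision_args(instance):
    """What the catalog controller forwards to the broker's provision call."""
    spec = instance["spec"]
    return (
        spec["clusterServiceClassExternalName"],
        spec["clusterServicePlanExternalName"],
        spec["parameters"],
    )
```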
Now, when a user wants to bind an instance of a service that they've provisioned to their application, they make another resource called a ServiceBinding, and the pattern should be familiar at this point: the user creates a resource; there's a controller that backs the Service Catalog API that detects that a new ServiceBinding resource has been created; it calls the bind operation on the broker for that service instance and passes the parameters. Just like with provision, you can pass parameters to bindings. The broker handles doing the work of creating that binding and passes back the result.
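And the binding half, sketched the same way. Field values are invented; in the Kubernetes Service Catalog, the credentials the broker returns are written into a Secret for the application to consume.

```python
# Sketch of a ServiceBinding resource referencing the instance by name.
# Like provision, bind accepts parameters; the broker's returned
# credentials land in the named Secret. All values are invented.

SERVICE_BINDING = {
    "apiVersion": "servicecatalog.k8s.io/v1beta1",
    "kind": "ServiceBinding",
    "metadata": {"name": "my-db-binding", "namespace": "default"},
    "spec": {
        "instanceRef": {"name": "my-db"},
        "parameters": {"role": "readonly"},
        "secretName": "my-db-credentials",
    },
}

def bound_instance(binding):
    """Which ServiceInstance this binding attaches an application to."""
    return binding["spec"]["instanceRef"]["name"]
```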
So, as far as next steps that are relevant to this audience, there are two that I think are probably going to be most interesting to folks here. One is the concept of API extensions, or generic actions, which are intended to allow broker authors to extend the API dynamically with new actions.
The canonical use case that we have for this is that this API, as you may have noticed, does not have operations for backup and restore. It is very difficult to get folks to agree, in a specification like this, on the details of what certain actions should entail, what types of parameters they should accept, and so on.
So, perhaps six months to a year ago, the idea started getting traction in our community that there should be a way to add new operations, allowing people to prototype new extensions to the API and add capabilities to their services that are possibly unique to their particular service, or perhaps implement some other specification that they can link to. The idea is that you should be able to extend this API with new actions without having to go through the lengthy process of actually making a change to the spec.
The other thing that I think this audience may find interesting is the concept of a binding output schema. Right now there is no first-class way in the API for you, as a consumer of a service, to know exactly which pieces of information you will get when you bind to an instance of a service.
The binding output schema will allow brokers to publish some sort of schema that says: when you bind to an instance of this service, you will get these keys that contain this kind of information; these keys (X, Y, and Z, say) have sensitive information in them, so you should treat them as if they had sensitive information. That can then be exposed to the user, and allows user interfaces to be created that communicate this to the user.
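At the time of this talk the binding output schema was still a proposal, so the shape below is purely illustrative: a JSON-Schema-style description of the keys a binding returns, with a made-up marker for sensitive fields.

```python
# Hypothetical binding output schema. The "x-sensitive" marker is an
# invention for illustration, not part of any finalized spec.

BINDING_OUTPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "host": {"type": "string", "description": "database host"},
        "port": {"type": "integer"},
        "password": {"type": "string", "x-sensitive": True},
    },
}

def sensitive_keys(schema):
    """Keys a UI should mask, per the (hypothetical) x-sensitive marker."""
    return [k for k, v in schema["properties"].items() if v.get("x-sensitive")]
```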
I'm sorry, the pause is because I started giving this talk before we had sorted out the meeting link, and I'm now wondering whether I skipped some content about the history of the API, since the memories of the two versions of this talk that I've given in this hour are kind of mixed together. Can somebody give me a sanity check on whether I discussed the history of this API? Yeah.
That is very possible. The short answer is that the Open Service Broker API is not Kubernetes specific. It started out as a Cloud Foundry API, and in 2016 users of the Cloud Foundry Service Broker API were coming to the Cloud Foundry folks and saying: we like this concept, we want to use it from other platforms. So what I've presented here is just an explanation of the API mechanics in the context of Kubernetes, because that's what's most familiar to me as far as brokers go.
I think that might be something to talk about offline, sure. If you want to send me an email, we can discuss that; for now I'm going to get through the rest of the presentation, and perhaps that might clear up some questions. So, touch points between the Open Service Broker API and storage: there is a notion of volume services in the Open Service Broker API; however, this is a feature that's very Cloud Foundry specific.
Changes to the API, like the bind response schema, may make it easier to implement integrations for storage volumes. But despite that, there are already brokers that provide access to different cloud-native-storage-like capabilities. And when I say cloud native storage (I am not an expert on cloud native storage), what I think of is interoperable storage: something you can find in your cloud or environment of choice that will have parity in some other cloud or environment.
Some projects that I am aware of: there is a broker called the OpenSDS broker that creates Ceph-compatible volumes, so the service it offers is like volume as a service; you can provision a new instance of this and get a Ceph-compatible volume created for you by OpenSDS. There are also a few S3-compatible brokers.
There's a CNS object broker that creates an S3-compatible object store; Erin Boyd on this call is somebody you can get more information about that broker from. There's also a Minio broker, which I am not sure is actively maintained at this point, but it is another broker that creates an S3-compatible object store. And then there is an AWS broker, based on Red Hat's Ansible broker, that creates S3 buckets using the AWS API.
I'll put one out to the chat, but I'll say here: one of the things I'm looking for the entries to have is not just the service name and sort of its schema (you talked about that), but also the level of service that the instance is providing. So I might have, you know, different object storage brokers, for instance, that provide different levels of service for the same, you know, Ceph-compatible volumes, for instance.
So when I do presentations on this subject with a little bit more time, one thing that I glossed over but touched upon (I'm not sure if I touched upon it enough) is that there is a concept of a plan for a service. A service has at least one plan, and a plan is a tier or level of that service. One thing that you can use plans to represent is different levels of quality of service; so you may have, in the case of a database as a service...
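For example, plans for a database-as-a-service could encode quality-of-service tiers like this. The tier names and attributes are invented for illustration.

```python
# Illustrative plans-as-QoS-tiers for one database-as-a-service offering.
# Tier names and attributes are made up.

DBAAS_PLANS = [
    {"name": "dev", "qos": "best-effort", "replicas": 1},
    {"name": "prod", "qos": "guaranteed", "replicas": 3},
]

def plan_for_qos(plans, qos):
    """Pick the plan matching a requested quality-of-service level."""
    for plan in plans:
        if plan["qos"] == qos:
            return plan["name"]
    return None
```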
Awesome, all right, cool. To be respectful of time, so we're able to talk about Vitess, I want to close out questions for now. If you have anything else, please send a question to the storage working group Google group, and there are also calls sometimes around the Open Service Broker API. So thank you, Paul, for presenting.
Fantastic, so let's get into this. On here we have Brian, who will answer questions about Vitess. Vitess is a project that was presented to the TOC, I think back in May or June or July of last year. We just had a presentation on it last week, at the last meeting that we had, so I think we want to open it up to the group to ask any questions or make any comments about it.
So something that I noticed, Ryan, during the presentation: I think Vitess solves an interesting problem in terms of highly scalable MySQL environments, but the thing that I think was missing a little bit was, you know, what does it look like to actually deploy Vitess with Kubernetes? Like, what is the user experience?
You know, how automated is the lifecycle of having that platform sit within that environment? I think at the time the answer was: hey, that's interesting, we should look at it. Do you have any perspective on that, or on how important that may be to this kind of project?
Well, the primary way that it's being used outside of Google... at this point there is a pretty significant community around Vitess. In terms of how automated the lifecycle is, I think it is, as far as I know, fairly automated. There are some things, like restarting, which may require some amount of operator work, but routine things like instances being rescheduled just have to be automated, because that sort of thing happens in Borg all the time.
It was raised, you know, with Borg, so the way I see Vitess is that it's a bridge for applications to cloud native, and that's how it started. You know, YouTube started with MySQL and then it just had explosive growth, and then it was acquired by Google and moved onto Borg, and Vitess is what made that possible.
Thank you, Brian. One question I had about that: I saw, and you can talk about it, a MySQL operator, a MySQL controller, that Oracle is supposedly open sourcing in the next few months. Is the Vitess team working with them, or are there any plans there for that to support Vitess? I don't know.
That's not a knock against the MySQL operator, but there are a few MySQL operators and they're pretty simplistic in my view. You know, Vitess obviously has some more advanced features like sharding, but it's also pretty production hardened and heavily instrumented for cloud native operations, so you can tell what's going on in terms of the monitoring and the logging and whatnot. It has a lot of operator miles on it, and the MySQL operators really can't be compared to that.
Right, but in the case of a MySQL operator, it's going to be a pod that controls the lifecycle of everything. In the case of Vitess, it seemed like it was pretty much a manual deployment as you scale out different nodes, and it doesn't have integration with Kubernetes to act as a controller or an operator. Maybe I'm misunderstanding it, but it seems like a very manual process to do any of the scaling.
All right, if you guys do, you know, feel free to send them to the Google group. I think we'll follow up with an email to discuss what the next steps are, or what we feel we need to get more opinions on paper, and then we'll deliver that feedback to the TOC with Ryan. Okay, thank you. Cool, thank you.
The next thing is to figure out what other projects we want to bring to the storage working group to talk about. I have an agenda item where it's kind of open, so we can either talk about it here, or you guys can fill in projects that you think are going to be relevant to share with the storage working group to discuss, and I think that we're open for anything.
You know, as you guys saw today, the Open Service Broker isn't something that we're looking to bring into CNCF; it's something that we just want to be more rounded and more educated about, so we understand, you know, the landscape and what all the projects are that are out there, and I think it helps us better understand how projects can be relevant and where they fit.
Just a quick comment on the Open Service Broker: I think one of the primary ways I see it being relevant to storage is that it's potentially, in the future, one of the primary ways that higher-level storage systems could be consumed by applications in Kubernetes and other cloud platforms, whether it's object storage or otherwise.
You know, there are some very common types of services consumed by applications running in public or private clouds or container platforms like Kubernetes, and storage systems, I think, dominate: whether it's caches and caching systems, databases, object stores, key-value stores, NoSQL stores, or whatnot. There are also other services, messaging services and whatnot, that you might consume, but storage systems tend to dominate. Yeah.
I totally agree there, and that's why we brought it on today. I think that the user experience in what the Open Service Broker can set up, and what Paul was describing today, is how I think people will want to be able to consume services, and I think that's how we get applications, you know, married to their information and to their data. You know, that's what we think about with this orchestrated storage platform box that, you know, Steven's team had been thinking about.
You know, whether something actually does integration to OSB: because if we want to, you know, have a platform that's highly scalable and automatically managed by something like Kubernetes, and we also want to make sure that there's a user experience which is seamless, then that means that the OSB, or whatever it is in that area, is important for integration. So, yes, Paul, I agree.