From YouTube: CNCF Storage Working Group Meeting - 2018-02-14
D
Well, I mean, it's mostly business as usual. I think the events leading up to the announcement took a lot of work, getting all the source code repos transferred over and all the administration around them, so we spent quite a bit of time on that, but otherwise, you know, pretty much, you know, hopeful.
D
We're thinking more, right now, as part of getting into the CNCF, about how to update our governance and, you know, help enable more folks to join the party. The Red Hat team has kind of stepped up to help with Rook, and there are more folks that have expressed interest in, you know, supporting it commercially or as part of their product portfolio. So yeah, I think overall we're very, very happy with the progress.
B
No, okay. In terms of the TOC, you know, we've got a presence there. I did a presentation to you guys on REX-Ray, and we are planning to submit that to the TOC, or at least have a discussion at the TOC next week. So I'll be presenting it there, and then we'll request at that point that we get an invite for doing a proposal. Does anybody have anything they wanted to chat about regarding REX-Ray, any questions or any concerns?
B
It was, it was actually, so I think the actual GitHub page is a little behind. I don't think it's been updated in terms of the recordings, but these have all been recorded. So I will ask Chris Aniszczyk to go to the archives and make sure the recordings get published on the website, and then you can check the GitHub page and you'll be able to pull that up and listen to it.
B
Okay, so we didn't have Minio on the agenda. They unfortunately are a no-show; I am not able to get ahold of them. Unless we have anything else to chat about, I think we can end this one early, and then we can hopefully do two sessions two weeks from now, on the 28th. So hopefully we'll get Minio to present, and we'll also get OpenEBS to present.
F
Right, so Clint, just expectation-wise, this is a repeat of what we were trying to do last week, right?
B
That's right, a high-level overview.
F
Okay, so let me get the documents; I'll just go over what we were planning to present to you guys. Let's see, have you shared the document, or should I just go ahead and share the screen, the other screen?
B
As these cloud native environments kind of emerge, I asked them to open up a little bit about what their perspective is on, you know, what Minio is going to do in terms of what market it's going to help fill, and then also on what Minio is and, you know, where some of the successes have been. So that's some of the context for you.
F
Yes, all right, perfect. Okay, I'll get started. I'm just going to go over and give you guys a general overview of who Minio is and what we do. It's a general introduction; if we need to go into details and I don't quite know the answers to the questions, we can always get the right people as a follow-up. But I just wanted to talk to you guys about how we designed it, the minimalist principles behind Minio, which markets we're playing in, and so on and so forth.
F
So it will be very high-level and general. You can stop me any time you would like and we can go into details, whichever direction you want to take it. Minio, from the name, as you can see, is based on a minimalist philosophy. In most cases, when we are explaining this to enterprise clients, it's kind of hard for us to explain how a cloud native type of storage solution would play into the future of things, especially in the storage world.
F
It's kind of hard, but now that we are talking to people who are natively cloud native in their mentality and their philosophy, it's so much easier to talk to a crowd like you guys. So essentially, Minio is a very simple, high-performance object storage that's been designed with cloud native architecture and design principles in mind. The second page is just going over, as most of you guys have probably heard and know, the waves of the changes.
F
Minio is 100% S3 compatible, and we take all of the S3 principles and S3 compliance to heart. We always try to make sure that S3 compliance is first and foremost, and then sometimes we even implement things that are a little bit different, or we improve some of the things where S3 didn't really go further, in our view. That's always the case with us: we always make sure that S3 compliance is at the center of all of the things we do at Minio.
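As a rough sketch of what that S3 compatibility means in practice (the endpoint, credentials, and bucket names below are illustrative placeholders, not taken from the talk), a stock S3 client such as boto3 can simply be pointed at a Minio endpoint:

```python
# Minimal sketch: using a generic S3 client against a Minio server by
# overriding the endpoint URL and credentials. All values are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",      # Minio server, not AWS
    aws_access_key_id="MINIO_ACCESS_KEY",
    aws_secret_access_key="MINIO_SECRET_KEY",
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello from minio")
print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())
```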
F
This picture is just showing you guys how the growth of storage is going, especially with the growth in IoT and other things that are generating a lot of data, video especially, and IoT generating a lot of data. The shift is that everybody is moving towards the cloud, whether it's public, private, multi-cloud, hybrid cloud, and all of the buzzwords that you see in the industry. This is just trying to explain that the effort, or the trends, are going in that direction.
F
We've done work with Microsoft Azure, for example; we have integrated with their managed service offerings. You can essentially run the Minio software on Microsoft Azure as a managed application, and the way it works is: if you wrote some code that works against S3 and, for whatever reason, I call this the Walmart syndrome, you don't want to run it on AWS, you can run it against Minio on Azure instead.
F
So that's kind of a neat thing that we've been integrating, and the same thing with the other cloud vendors; you can see similar integrations. And sometimes, just because Minio is very simple and the binary is tiny, people take it for simple environments, just single tenant. Whether you want distributed mode or single-tenant mode, you can just run it very simply.
F
In terms of Microsoft Azure, we did a full-blown customized implementation. I'm not sure if you have looked at the way Microsoft did their managed services offering: they basically made it so that ISVs can do a SaaS type of offering where things are locked down, but you pick your VM, you pick your VM configuration, you pick your load balancer and so on and so forth, and then you create a template. It's a very closed environment.
F
That's just for your software, so that's kind of their newer version; there are a couple of different variations in the Microsoft marketplace, but that's how they do it. So our implementation is a specialized, customized packaging for Azure. It only works on Azure, because they dictate their conditions. The same for Google Cloud: we're in their launcher section. Google has a much lighter version of this managed hosting type of environment; they basically do a bring-your-own-license type of thing.
F
I didn't get to that one yet. We are very flexible, so we definitely do the storage and we leave the orchestration to Kubernetes. In most cases, when we suggest an implementation, it's the one on the right side, for private clouds, where we have the Kubernetes integration. We have the cloud native, lightweight integration, which is basically the YAML file generation and then just launching it.
F
You configure it as you wish: on the Minio website you can just go in, put in your configuration, and generate the YAML file, and then you just use that to launch. We leave the orchestration, and all the work that we don't do well, to the orchestration engine, and Kubernetes is what we go with and what we like to do. Same with Docker Swarm.
F
You just go and create the Swarm file, use that to launch, and modify it. It's about a 20 megabyte binary that we have, and we just lightly integrate it into these systems so that we have the flexibility for different environments. Same with Cloud Foundry and Pivotal, same with Mesosphere. We did all of these, I call it light integration, but it still requires a lot of testing and integration work.
F
Then the users of those systems can do that, and we just do what we do best: the storage, which is durable storage, doing the erasure coding, doing it in a lightweight way, with no separate metadata database, in a very high-performance way. Those are the things where we shine, the things that we do well and know how to do well. So we don't really claim to be doing the work of Kubernetes in terms of launching it, orchestrating it, multi-tenancy, and so on and so forth.
F
We can do it a couple of ways. One is: we are responsible for the durability of the storage, so we do erasure coding, and that's what we do best, going across multiple disks or multiple servers, and we provide the durability of the storage that way. So we are responsible for the durable storage, but on the back end we can use XFS, let's say, and the local file system, or Minio has a gateway mode.
F
We don't modify any of the content, whether it's on Isilon or Microsoft Azure blob. We leave the contents, whether it's a filesystem or not, unmodified. The good thing is that we don't write it in any proprietary form or format into the back end. Therefore, you can access the same Isilon file using the file access protocols, or in Microsoft Azure you can use the blob-native protocols, and you can still access the same file.
F
Minio, you mean, right? Yeah, so on the back end we take the back-end blob or Google GCS storage and distribute across them, and we do our own erasure coding. Think about it like Linux XFS local filesystems: you have six of them, twelve of them, many of them, and Minio writes the erasure coding across all those disks. The same thing in Google Cloud.
F
So the thing is, yes to start with, but I have to be careful, because Google also provides their own S3-compatible API. That's why I'm kind of confused about why you would want to do that. But if you're saying that you're not using Google's S3 and you want to use Minio's S3 backed by Google Cloud Storage, I have to double-check on that, but I'm pretty sure the answer is yes.
F
We have code that utilizes certain performance features within the chipset, especially Skylake with Intel. With JBOF, or any SSD, just a bunch of flash arrays instead of just a bunch of disks, which is what we use nowadays, you can use any of these technologies to have a supercharged object storage. Normally in the marketplace, people look at object storage as tertiary or secondary storage or a backup endpoint. We're trying to explain to the market, or change the thinking around, object storage: SSD-enabled with very fast erasure coding and very fast throughput, and nowadays people are going to multiple 25 gig or hundred gig networks.
F
We believe that people, especially in the cloud native world, and especially with the newer generation of databases, from DB2 and MongoDB to all sorts of other databases that are S3 compliant on the back end, are going to change their behaviors, and in the enterprise it will come soon as well: they are going to use object storage more as snapshot targets when high-performance object storage is provided, whether it's in some private cloud or a public cloud. It's early days for that change, in my opinion.
F
But we see that as a trend in the market, slowly happening in some of the forward-thinking areas. The other ones I mentioned, Isilon and VMware, as well as some recent compatibility stories, you can run it on those as well. The other slides are about, as I mentioned, Minio's popularity in the development community, with the Slack, the number of stars that we have, as well as the pulls; Docker pulls is a nice number to track.
F
Pulls can be inflated, as you can imagine, but still it's a very good number compared to some of the others in the open source storage world. Clearly we are getting some traction, and compared to other projects we're still in a good place, and the trajectory is very, very good in terms of where it's going.
F
In terms of the enterprise, we're just in the early days of our commercial enterprise deployments. We have a couple of POCs in the works, but they are not in production yet, and plus we don't have the approval from those large clients, financial and others; they're the shy ones. I used to work at a financial firm and they never want to talk about it.
F
Sounds good. I don't have any specifics that I can point out at this point. All right, so basically I think I covered some of these features: distributed mode, the erasure coding that we do, bitrot protection. These are all things that most of the commercial classical object storage software vendors are already doing, and we have to do them already; these are baseline in our opinion. So you already know that Minio is S3 compatible, and we provide the erasure coding and bitrot protection.
F
We are also changing to HighwayHash for our bitrot hashing, which is a different algorithm that provides much more performance in the way the bitrot check and the erasure code are done. Bitrot is a concept in the object storage world where the disk, basically the mechanical aspects of the disk, goes bad and changes the parity and the other bits, and we protect against that.
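As a minimal sketch of the verify-on-read idea behind that bitrot protection (not Minio's actual implementation), a checksum stored alongside each block when it is written can be re-checked on every read; SHA-256 stands in below for the HighwayHash algorithm mentioned in the talk:

```python
# Sketch only: store a checksum with each block, re-hash on read, and treat a
# mismatch as bitrot that must be repaired from parity. Hash choice is a stand-in.
import hashlib

def write_block(data: bytes):
    """Return the block together with the checksum to be stored beside it."""
    return data, hashlib.sha256(data).hexdigest()

def read_block(data: bytes, stored_checksum: str) -> bytes:
    """Re-hash on read; a mismatch means the block rotted on disk."""
    if hashlib.sha256(data).hexdigest() != stored_checksum:
        raise IOError("bitrot detected: checksum mismatch, rebuild from parity")
    return data

block, checksum = write_block(b"object shard contents")
read_block(block, checksum)                  # passes
read_block(block[:-1] + b"\x00", checksum)   # raises: simulated bit flip
```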
F
Those are the kinds of things you have to do in order to be a very durable and strong object storage. And distributed mode, which I mentioned: most of the enterprise clients that we work with, and most of the larger deployments, use distributed mode, and that's how we go there. In the latest release we changed some of that restriction, but we were doing N by 2: essentially, you can have 16 disks and up to 8 of them be parity.
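The rough sizing arithmetic behind that "16 drives, up to 8 parity" shape (numbers below are purely illustrative, not a recommendation from the talk) looks like this:

```python
# Illustrative sizing for an erasure-coded drive set: usable capacity, raw
# capacity, efficiency, and how many drive failures can be tolerated.
def erasure_set(total_drives: int, parity_drives: int, drive_tb: float):
    data_drives = total_drives - parity_drives
    return {
        "usable_capacity_tb": data_drives * drive_tb,
        "raw_capacity_tb": total_drives * drive_tb,
        "storage_efficiency": data_drives / total_drives,
        "drive_failures_tolerated": parity_drives,
    }

# 16 drives of 4 TB each with half used for parity: 32 TB usable out of 64 TB
# raw, and any 8 drives can fail without losing data.
print(erasure_set(total_drives=16, parity_drives=8, drive_tb=4.0))
```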
F
People are getting more comfortable with SSDs compared to the spinning rust, as I call it, the HDD disks. People are much more comfortable in their ability to withstand failures, and they're more durable. There are a couple of years of durability difference between HDD and SSD, and there's not really good scientific data on it, but the few studies that have been done show that SSD is proven to be more reliable. So we feel more comfortable with these JBOFs, just a bunch of flash, or SSD disks that are in the servers.
F
We can relax some of the stringent durability requirements that we had in the earlier days of Minio. Other things: we clearly have encryption, object-based encryption. That's also an S3-compliant feature, client side as well as server side. We have full S3 compliance on the encryption side; we're just working on two feature sets that are still in the works.
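As a hedged sketch of the server-side encryption with a customer-provided key (SSE-C) that the S3 API defines (the endpoint, bucket, and key handling below are placeholders, and support depends on the server configuration), the same key must be presented on both write and read:

```python
# Sketch of SSE-C against an S3-compatible endpoint: the client supplies a
# 256-bit key per request; the server encrypts at rest but never stores the key.
import os
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",
    aws_access_key_id="MINIO_ACCESS_KEY",
    aws_secret_access_key="MINIO_SECRET_KEY",
)

customer_key = os.urandom(32)  # key held by the client only

s3.put_object(
    Bucket="secure-bucket",
    Key="report.csv",
    Body=b"sensitive,data\n",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)

# The same key must be supplied again to read the object back.
obj = s3.get_object(
    Bucket="secure-bucket",
    Key="report.csv",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
print(obj["Body"].read())
```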
F
One is about multipart uploads: if you have large objects, you chop them into smaller pieces and upload each with a PUT, and we're working on the encryption of those. We're also working on it for range GETs, where, when you're getting an object, you provide the range of bytes that you want to pull in, which makes certain implementations much better; you can increase parallelism as well as performance. And also lambda compute.
F
We worked on lambda compute in the earlier days, and we can trigger events. It's the same philosophy I mentioned in terms of orchestration: we are the storage system, we do storage well, but we leave the other orchestration and management tasks, in this case monitoring or triggering events, to other systems. So we did the integration for that type of computing: when you have, for example, multiple objects being uploaded into your system, you can trigger events to add metadata to whatever you're uploading.
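A hedged sketch of that event hook, using the S3-style bucket notification API (the queue ARN below is a placeholder for whatever target, webhook, Elasticsearch, Redis, and so on, the server has been configured with):

```python
# Sketch: subscribe a bucket to object-created events so downstream processing
# (indexing, classification, metadata extraction) happens outside the storage path.
import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:9000",
                  aws_access_key_id="MINIO_ACCESS_KEY",
                  aws_secret_access_key="MINIO_SECRET_KEY")

s3.put_bucket_notification_configuration(
    Bucket="uploads",
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": "arn:minio:sqs::1:webhook",  # placeholder target
                "Events": ["s3:ObjectCreated:*"],         # fire on every new object
            }
        ]
    },
)
```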
F
If you're uploading different images, you need to classify them, you need to modify them. Instead of making that part of the storage system, you can trigger lambda events to do the processing afterwards. I covered some of these during the initial parts of the presentation, so I'm going to go very quickly over these. We worked on the private cloud in different segments; on NAS, Isilon is an example.
F
Isilon has their own NAS, it's a filer, but we can run on top of Isilon and leave the original content untouched while serving it as objects, essentially with S3 compliance on the front end. JBOF is the just-a-bunch-of-flash we talked about, and there's the Kubernetes and Cloud Foundry integration as well.
F
Going into the details of how the architecture works, I'm going to go very fast on these slides; please stop me if you're interested in any particular segment or area. This is describing exactly what I mentioned: an application can have access to the back end directly, or through Minio if it needs S3-compliant storage.
F
We really like this technology and we are improving it, and one of the folks at Minio is very, very much specialized in this; he's done a lot of work and contributed back to the community in terms of the way we use it, and he's written a few articles. If anybody is interested, I can provide the details on that. And enabled with 100 gig networking, that gives you fast object storage. This slide is basically trying to explain that with just a bunch of disks, or just a bunch of flash, sorry, using NAND and 3D NAND, you can get very high density and very high performance if you have the right core storage performance, like the things that we've done with the fast hashing with AVX-512.
F
So this is what I mentioned about the lambda functions: the lambda functions that we implemented as hooks into different environments, whether it's Elasticsearch or Redis. When you're uploading, downloading, or modifying certain objects using the Minio storage, you can create something similar to an audit log or batch processing; we also tie the lambda functions into that mix. This is just a slide of how we did the Vandermonde Reed-Solomon erasure coding and the hashing, and, as I said, we're about to change to a different algorithm.
F
Whatever we do, we always make sure that it's backward compatible; with the HighwayHash change, we're going to make sure that everything else is also compatible. This is exactly showing, I think somebody was asking me if we could use persistent disks with Google storage, a depiction of exactly that: whatever it stores to, whether it's XFS or blob, you essentially use them in a very similar way in terms of the underlying erasure code. We call it XL; that's just the code name we use for the way we do the erasure coding.
F
So if you have four disks, it will be two data and two parity, and on the right side is just the config file, the JSON file that we have with all of the details: what algorithm we use, the data shards you see, the parity shards, the block sizes. If you have a large file bigger than the block size, then you split it into multiple parts.
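A deliberately simplified sketch of that shard-plus-parity idea (Minio's actual scheme is Reed-Solomon with multiple parity shards; a single XOR parity shard is used here only to keep the example short) shows how a lost data shard can be rebuilt from the survivors:

```python
# Simplified illustration only: split a block across "drives" plus one XOR
# parity shard, then recover a lost shard from parity and the survivors.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_with_parity(block: bytes, data_shards: int):
    """Split a block into equal-sized data shards plus one XOR parity shard."""
    shard_len = -(-len(block) // data_shards)           # ceiling division
    padded = block.ljust(shard_len * data_shards, b"\0")
    shards = [padded[i * shard_len:(i + 1) * shard_len] for i in range(data_shards)]
    parity = shards[0]
    for s in shards[1:]:
        parity = xor_bytes(parity, s)
    return shards, parity

def rebuild_missing(shards, parity, missing: int) -> bytes:
    """Recover one lost data shard by XOR-ing the parity with the survivors."""
    recovered = parity
    for i, s in enumerate(shards):
        if i != missing:
            recovered = xor_bytes(recovered, s)
    return recovered

shards, parity = split_with_parity(b"one object block striped across drives", 2)
assert rebuild_missing(shards, parity, missing=0) == shards[0]
```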
F
All right, so the rest is basically the integration into Kubernetes and Cloud Foundry. I'll try to focus on Kubernetes. I might say that I'm a newbie on Kubernetes, so I'm not an expert; you guys probably know all the details better than I do. I'm an expert in other areas, but not on Kubernetes, so just the disclosure.
F
Essentially, if you go to the Minio site, we have a really nice, simple feature where you translate between what you need and S3: the access key, the secret key, the mode, which is whether you use the standalone version of Minio or the distributed one. The examples I have shown you are all the distributed version of Minio; most of the enterprises or larger implementations or deployments all use distributed mode. All you do is put in your access key, your secret key, and distributed or standalone.
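As a small sketch of how an application running next to Minio on Kubernetes might consume that access key and secret key (the environment variable names and in-cluster service name below are assumptions for illustration, typically injected into the pod from a Secret):

```python
# Sketch: build a Minio client from credentials injected into the environment.
import os
from minio import Minio  # Minio's Python SDK

client = Minio(
    os.environ.get("MINIO_ENDPOINT", "minio-service:9000"),  # assumed service name
    access_key=os.environ["MINIO_ACCESS_KEY"],
    secret_key=os.environ["MINIO_SECRET_KEY"],
    secure=False,  # plain HTTP inside the cluster in this sketch
)

for bucket in client.list_buckets():
    print(bucket.name)
```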
F
We are very open to that and we can work with that, but I'm just covering the baseline of how we do it and how we've presented it to the users and the enterprise community. What we have in Kubernetes is simple: you use the persistent volume mapping, and we translate between how Minio works and how you would be using your persistent volumes.
F
That was a deeper integration, because Pivotal, similar to Azure, is a very closed ecosystem, as most of you might know, so we have to do certain things to integrate with it. This is just showing their dashboard and how Minio is integrated into Pivotal, and how a developer can go to the marketplace within Pivotal and pick a Minio instance. Then, just like we do with the YAML file in Kubernetes, you put in your access key and secret key; it's just in a closed ecosystem.
F
That's fully integrated, that's just what it is essentially, and this is just a CLI example of that. The last page, this page, is about the high-level architecture of how you could be using it, with a use case on Cloud Foundry. Most of these I covered, so I'm just going to go through them quickly. This is basically what we have on the managed services deployment side, if you're not familiar with Azure.
F
In our case, we recommend two, three, four or more VMs for our managed hosting implementation, and this is just a high-level view of how applications interact within that Azure system, whether it's DNS at the front or how they would be using the virtual machines. We have autoscaling enabled in Azure, so if somebody is doing some processing that requires more power, it just automatically scales: fully hands-off managed services where you need S3-compatible, high-performance storage against your blob storage.
B
You referred to it in the Cloud Foundry setup right now; I think we're talking about doing one for Kubernetes in the future. But what's the user experience like for that, if you have it configured under Cloud Foundry and someone who's a consumer wants to go spin up an app to use Minio storage? What's the experience like?
F
So I don't know if you're familiar; I tried this myself. The user experience in Pivotal is very much within the ecosystem of Pivotal. Essentially, this is the user experience: this is how you go into Pivotal to launch a service, and in this case you pick the Minio object storage service, shown there, and then you configure your instance. Once it launches that instance, it becomes a server endpoint.
F
So you can just do a command-line endpoint configuration from an app or a user CLI, or you can just launch the UI against this instance within Pivotal. It's a bit closed within Pivotal, but if you go to Azure, that's probably a better example from a user-experience perspective. Essentially, once you launch this as an Azure managed service, I don't know if I have a good picture of it, but it becomes a managed service, a service running within Azure, backed by all the things that you would require.
F
You would get what you'd expect from a full-blown cloud: a load balancer in front of a couple of VMs running the Minio software, enabling you to access your blob storage that already exists within Azure, with an S3 front end, just to simplify. The user uses it with basically whatever tools or code they have, a CLI let's say; they just introduce it as an S3 endpoint to the front end. So this managed app launches, and it has an IP.
F
You take the DNS fully qualified domain name, or the IP, and introduce it as a CLI endpoint, or you can use the UI. We made it part of this deep integration; we made it so simple for the user. We have a domain name for them where they put their storage account name. As part of this setup process, we ask them for authorization; we need authorization to access their blob storage, and as part of that authorization we check the storage accounts, whatever storage accounts they already have in Azure.
F
We can enable them to reach that blob storage using a UI. Minio has a very light, simple UI that they can use with their storage account to get full access to their blob storage, upload and download using that, or they can just use the S3 command set and tools, or the S3 code they already have, against this endpoint.
B
There are two distinct consumers here, two different roles that I'm thinking about. One is the provider, and that's the person who's going to go to the Azure console or the Pivotal console and launch a Minio instance to make it available. And then there are your developers and your consumers, who are just expecting to get storage from somewhere. And that's right, I think there's a manual approach to it right now, which is, hey,
B
here's your S3 endpoint, just plug it into your app. That's all fine and great, but I think the service broker side of it is more about reflecting that, hey, a user in this space should be able to, you know, work with a standard API or developer tool, and use that to actually broker, to create buckets, to enable the things that they would want or that are going to help them store their data.
F
Yeah, now I get your question. So we have multiple SDKs that we support, and you can use the Minio SDKs to do that, or the language SDKs, just natively. On top of that, we have something called the Minio Client, or MC for short, and MC has multiple useful commands from an administration or ops perspective. For example, we have an mc make-bucket command, we have MC upload/download, compare, mirror, copy, all the basic Linux-style commands that you can imagine in the new world of S3.
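Since MC itself is a command-line tool, here is a hedged sketch of the same bucket bootstrap done programmatically through the Minio Python SDK the speaker alludes to (endpoint and credentials are placeholders):

```python
# Sketch: the programmatic equivalent of `mc mb` / `mc ls` via the Minio SDK.
from minio import Minio

client = Minio("localhost:9000",
               access_key="MINIO_ACCESS_KEY",
               secret_key="MINIO_SECRET_KEY",
               secure=False)

# Roughly what `mc mb myminio/team-data` does.
if not client.bucket_exists("team-data"):
    client.make_bucket("team-data")

# Roughly what `mc ls myminio` does.
for bucket in client.list_buckets():
    print(bucket.name)
```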
F
So in this case we have that rich toolset, which is very popular in the community, and personally I really like it, because it makes life so much easier, from configuration to movement of data to management of data. That can be used, but there are also the APIs: we do have SDKs. On top of that, if you want to do any kind of automation, like Pivotal needed, say on top of the YAML file we have for Kubernetes, we need to do a deeper integration.
F
Where you want to set rules that are going to create buckets, you can just call MC, or we can do a deeper integration that's more programmatic. Both of them are possible; we just don't know yet what's needed to be able to do that work or integrate it. But it's very simple for us to do that integration, because we are naturally built that way and we can simply integrate. Even without any official integration work, we can use the MC command-line set to do all of the basic configuration and management of objects or buckets. You can just do an mc make-bucket and create all the things you need; you can just do an mc config and add all of those endpoints we were talking about, whether it's Azure or Isilon or Kubernetes.
G
What is the competitive landscape here? I'm not familiar with this area. And the second question is, what are your plans with respect to the CNCF?
F
For the first question: we have kind of a different perspective on the landscape of object storage, but if you take the name object storage and the classical players in that area, in open source we clearly have a lot of traction. If you mix in the commercial implementations of object storage, there are multiple players, from Cleversafe, which is now acquired by IBM, to Cloudian, to Scality, to Ceph in a different way.
F
Most of them have a commercial mixture of file and object storage, but nonetheless present themselves as object storage, and they've been out in the market for many years, ten plus in some cases. In our case, we just focus on the lightweight, cloud native approach: cloud native meaning the architecture and philosophy of how we designed it, and multi-tenancy, where you can just instantiate or spin off an instance of Minio for each tenant, each department, each area of that nature. That lightness, plus the strong, durable storage core, the erasure coding implementation we did.
F
We focus on that and try to present it through these integrations, all the ones we talked about, whether within Kubernetes or other areas where the orchestration is done for storage, in a very simple way. That's what we focus on, rather than comparing apples to oranges, because all of those other companies started in different eras and different segments, so it's kind of hard, or unfair to them or to us, to compare them one on one. But that's the market landscape.
F
If you look at object storage, if you look around and see who does object storage, most of them have software or an appliance, they have boxes in some cases, but that's how they play it, whereas we focus on integration into the modern era, more cloud native types of environments, in a very lightweight fashion, as I described, because it's just in our culture and nature. So that's the answer to your first question.
F
But object storage should, in our belief, with the changes in the market, be another abstraction layer, or an additional framework for integration, for the CNCF, and we would like to help and contribute and listen to you guys on how you see it and help in that direction. We want object storage to be on the radar for the next generation of the CNCF.
B
And if you guys have any other questions, feel free to email the group and we'll make sure we get in contact with the right folks to get an answer. But we're looking forward to you guys participating and helping us, you know, build community around the cloud native ecosystem and, you know, especially around this object space. So pretty exciting stuff this morning. All right.