From YouTube: 2018-10-23 Rook Community Meeting
A: That is definitely not a big priority, that's for sure. Okay, let's talk about the 0.9 stuff here. So, instead of going through this whole list, let's go ahead and talk about some of the higher-priority or bigger items there. Maybe the board will be a better view.
One of the things — I don't know if Yanis is on the call, I doubt he is given the time zones — but he successfully created a pull request for the Cassandra operator, and it took a full pass through review there, and he's incorporating some of that feedback. So it looks like that is in pretty decent shape, I think, from the review that I did, for, you know, being included in 0.9 and having a functional Cassandra operator. So that's exciting.
B: Okay, well, I'll give a summary unless he chimes in. Well, you know, the CSI plugin is coming along, and I think he's got it tested and working. I think there's an integration step here that we were talking about a few days ago — really integrating it with Rook, because right now it's separate.
B: We can deploy it just like we deploy the agent and flex, so, you know, we haven't talked through the details there. But yeah, maybe where it's 1.12 we could just decide on CSI, and where it's less than that, maybe we deploy flex — I'm not sure. I'm not sure we'll work all this out in the 0.9 timeframe. I'd like to have more documentation around CSI at least, so that people can use it more.
E: Essentially, yeah, they're more or less the same thing. I would suggest we move away from the old approach and use the VolumeAttachment API, instead of starting off using the CRDs for attachments. Let's not make it a barrier to migrate for those — I don't know what the migration path is; we could just say greenfield only. Greenfield is relatively easy; for brownfield, it could be a migration process: we need to convert the CRDs into the VolumeAttachment API. We can make that happen.
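(A minimal sketch of the brownfield conversion being discussed — turning a legacy Rook flex-era attachment record into an upstream storage.k8s.io VolumeAttachment. The record's field names and the CSI driver name are hypothetical, not Rook's actual schema.)

```go
package main

import (
	"context"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rookAttachment stands in for Rook's flex-era attachment record;
// the field names here are hypothetical.
type rookAttachment struct {
	PVName   string
	NodeName string
}

// migrateAttachment converts one legacy record into an upstream
// VolumeAttachment — the brownfield conversion step described above.
func migrateAttachment(cs kubernetes.Interface, a rookAttachment) error {
	pv := a.PVName
	va := &storagev1.VolumeAttachment{
		ObjectMeta: metav1.ObjectMeta{Name: "rook-" + a.NodeName + "-" + pv},
		Spec: storagev1.VolumeAttachmentSpec{
			Attacher: "rook-ceph.rbd.csi.ceph.com", // hypothetical driver name
			NodeName: a.NodeName,
			Source: storagev1.VolumeAttachmentSource{
				PersistentVolumeName: &pv,
			},
		},
	}
	_, err := cs.StorageV1().VolumeAttachments().Create(
		context.TODO(), va, metav1.CreateOptions{})
	return err
}
```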
E: Do we need to use the external attacher? It doesn't make life easier, because I believe with the attacher, you know, there are some differences in how the driver talks to the attacher. So if the driver cannot just look at the status, it's going to be a lot of overhead if you're going down that path, so we'd better just do the attach ourselves.
E: There was a design a while back — I use virtually the same approach, and I think it's more intuitive. And there's also a PR about how to use CSI and start everything up with the operator: the driver creates volumes, creates storage classes, and also uses the external snapshot controller for snapshots. Yeah, I think that's what it covers.
C: And just sort of at a high level, to make sure I understand it: when we bring in CSI, flex volume just becomes a legacy thing for, you know, Kubernetes 1.12 and before. Is that the path we're taking — we would actually replace flex volume usage with CSI on newer clusters?
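(A minimal sketch of the version gate being discussed — pick CSI on newer clusters and fall back to flex on older ones. Treating 1.13+ as the CSI cutoff, and the helper name, are assumptions for illustration, not anything decided in the meeting.)

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// driverForCluster picks a volume driver based on the API server version.
func driverForCluster(cs kubernetes.Interface) (string, error) {
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		return "", err
	}
	// Minor can carry a suffix like "13+" on some providers;
	// Sscanf stops at the first non-digit, which is what we want.
	var major, minor int
	if _, err := fmt.Sscanf(v.Major+"."+v.Minor, "%d.%d", &major, &minor); err != nil {
		return "", err
	}
	if major > 1 || (major == 1 && minor >= 13) {
		return "csi", nil
	}
	return "flexvolume", nil // legacy path for 1.12 and before
}
```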
E: It's already implemented by a gentleman called rootfs, and he basically has the CSI driver, as well as the snapshotter, as well as the provisioner. So his work is perfectly reusable — we can just put it within the Rook context. When I say it works, I mean it's functional and working; we haven't run the CSI test suite against it yet. We were going to do this as part of the integration, and so once that happens, we'll have great confidence this is ready for use and certification.
B: I've incorporated all the feedback, and the testing looks good for me on Luminous and Mimic and Nautilus, except a couple of new features that need some work. Oh, there's a new orchestrator module that I was trying to enable, and that needs a little more work for it to be enabled in the images. But overall, what I'm just waiting for is the publishing of the manifests for this Ceph image. You know, they're published for Ceph amd64 and arm, but the manifests are still working through some kinks, and we're working on that.
A: Is there a projected date for Nautilus?
A: Yep, yeah — the goal for 1.0 — no, sorry, 0.9 — here was Mimic support, right? Yeah, great. Alexander, I think a couple of us have taken a look at your pull request for supporting arbitrary PVs. How is that going? And in terms of, you know, your time — did you get to focus on it or drive that to completion?
F: And yeah — the tests right now, at least, are failing, so yeah, I would need to add some tests, or there is probably an issue. Okay, at the moment the CI should be green, but there are no new tests — there's only a small test simply checking that the arguments are correctly set if a username is set.
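(A minimal sketch of the kind of argument check being described. The function under test, the flag name, and the package are all hypothetical stand-ins for whatever the PR actually touches.)

```go
package agent

import (
	"strings"
	"testing"
)

// buildMountArgs stands in for the code under discussion: it assembles
// CLI arguments and should include the user flag when a username is set.
func buildMountArgs(username string) []string {
	args := []string{"--pool=rbd"}
	if username != "" {
		args = append(args, "--user="+username)
	}
	return args
}

// TestUsernameArgIsSet checks that the user argument is correctly set
// when a username is given — the "small test" mentioned above.
func TestUsernameArgIsSet(t *testing.T) {
	joined := strings.Join(buildMountArgs("admin"), " ")
	if !strings.Contains(joined, "--user=admin") {
		t.Errorf("expected user arg in %q", joined)
	}
}
```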
F: But the network is already shut down by then, as far as I know. The issue is simply because of RBD: normally there would be an rbd unmap service that would be executed just before the network goes down, as far as I know. But it seems there isn't really a function executed in kubelet when it is shut down — when, more or less, the system is shut down.
D: I just wanted to say that — from hearing what you say — this is probably a thing that we need to get documented in our documentation: the difference, and stuff like CoreOS, which is specific. Because I think not everyone, especially on a platform-as-a-service, would have this option there.
A: I'm definitely in favor, Alex, of addressing this in any way that we can. If there's some documentation we can add, then that would be great. If there's, you know, something more automated, that would be great too. And I do agree with you that this hurts our, you know, production reliability story, for sure. It's a difficult problem, and I'm not sure exactly what the best solution is, but I support you. And, you know, it's interesting that this is important. You said there's a third issue, too, Alex?
F: Yeah, it would be good if we — I don't know, the dumbest thing I could think of would be to have some functionality in the logger, just before it's getting written to stdout or stderr, which runs a string replace and goes over all the secrets before it's being written.
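(A minimal sketch of that "string replace in the logger" idea — a writer that masks known secrets before anything reaches stdout/stderr. All names are illustrative; note that a secret split across two Write calls would slip through without extra buffering.)

```go
package logredact

import (
	"io"
	"strings"
)

// redactingWriter wraps a log destination and replaces known secrets
// with a placeholder before anything is written.
type redactingWriter struct {
	dst      io.Writer
	replacer *strings.Replacer
}

// NewRedactingWriter builds a writer that masks every string in secrets.
func NewRedactingWriter(dst io.Writer, secrets []string) io.Writer {
	pairs := make([]string, 0, len(secrets)*2)
	for _, s := range secrets {
		if s != "" {
			pairs = append(pairs, s, "*****")
		}
	}
	return &redactingWriter{dst: dst, replacer: strings.NewReplacer(pairs...)}
}

func (w *redactingWriter) Write(p []byte) (int, error) {
	if _, err := w.dst.Write([]byte(w.replacer.Replace(string(p)))); err != nil {
		return 0, err
	}
	// Report the original length so callers see the write as complete.
	return len(p), nil
}
```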
A: Okay, then my suggestion here is: persist it on the issue here. It's good to add that stuff to GitHub directly, so that everyone can see it later on. Cool, all right — so I think that's everything for 0.9. We can move on to the rest of the agenda. So, Brian, are you on the line here to talk about the Alameda dynamic resource allocation? Yeah.
I: All right, so we are ProphetStor; we're an AI company. We work on Project Alameda — it's an open-source, intelligent resource orchestrator for Kubernetes. We use machine learning to predict future pod and node resource usage, so we can redistribute the pods across clusters, or we can elastically set the pod resources — like CPU and memory requests — to best fit the predicted usage. Here's our GitHub and our Slack, and also you can email me if you want more information.
I: The first thing we do is we watch for objects that have our annotations on them, and then we send a signal to the Alameda AI plane. That will use Prometheus, and also we have a data exporter on the worker nodes to collect all the metrics and feed them into our prediction engine. Our prediction engine will create the future resource usage, and then we'll expose the raw prediction data back onto the Kubernetes cluster, as well as some AI recommendations that are our own CRDs that our controller uses. And then, of course, we can just stop here, and people can use the predictions for themselves. But we want to do, like, the first example, so that people can, you know, copy us or use it themselves — and then there's our feedback loop.
I: So once we automate the recommendations with our controllers, we feed the results back, and then we enhance our AI learning. And then, yeah — I want to ask you guys for some feedback about this architecture later, but let me just go on about the CRD objects. So, first, in the community version, the operator would have to create their own objects with our annotations, and then match the labels with each object, so we know which objects to automate. And then for the commercial version, we'll just code it directly into our operator, and they just add the annotations directly on their own objects and we'll look for them.
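(A minimal sketch of the "watch for our annotations, then match labels" flow just described. The annotation key is hypothetical — not Alameda's actual API.)

```go
package main

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// wantsAutomation reports whether an object carries the (hypothetical)
// Alameda opt-in annotation — the "watch for our annotations" step.
func wantsAutomation(obj metav1.Object) bool {
	return obj.GetAnnotations()["alameda.ai/enable"] == "true"
}

// matchesTarget mirrors the "match the labels with each object" step:
// the workload must carry every label the Alameda object asks for.
func matchesTarget(obj metav1.Object, target map[string]string) bool {
	labels := obj.GetLabels()
	for k, v := range target {
		if labels[k] != v {
			return false
		}
	}
	return true
}
```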
I: And then here's an example of the pod resource prediction. You can see — sorry — it'll list the pods here, and then we predict the future CPU utilization. We'll also have a table for memory, and then we give you the recommended request and limit. And then this one is the pod allocation across nodes.
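(A minimal sketch of the shape of that prediction/recommendation data — predicted series per resource, recommended request and limit, and a placement suggestion. All field names are hypothetical.)

```go
package main

// PodPrediction sketches the prediction output described above.
type PodPrediction struct {
	PodName string

	// Predicted future utilization, one time series per resource.
	PredictedCPUMillicores []int64
	PredictedMemoryBytes   []int64

	// Recommended settings derived from the prediction.
	RecommendedCPURequest    string // e.g. "250m"
	RecommendedCPULimit      string // e.g. "500m"
	RecommendedMemoryRequest string // e.g. "256Mi"
	RecommendedMemoryLimit   string // e.g. "512Mi"

	// Suggested node placement across the cluster.
	RecommendedNode string
}
```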
I: We will create our own objects that our operator watches for, but we would also expose the raw prediction data, like as an endpoint or something — it's still up in the air. We're just looking for feedback, or if there's any interest in this kind of thing with Rook or the community.
C: It could even be disruptive — disrupt a running OSD there and update whatever, you know, the CRUSH map, and move it, right? And have that all kind of happen at the CRD level — not necessarily plugins in code. Like, if an annotation shows up that we're watching for in Rook that says there's a prediction on that volume or a node, that could feed into the next scheduling decision, or even a disruption. Yeah.
I: So yeah — that's what I was saying, where we can just stop here at the AI recommendations and the raw predictions and just expose that. But then we want to also just do a little bit more: we want to make an example for the rest of the community, too, so it's easier for them to use us. Well, I wanted to ask about this architecture, right? You see how our controller executes a recommendation directly on the operator-created objects — I'm not sure.
E: I just want to add a point. So basically, there are two things in Rook: there's the operator, which watches the CRDs and creates clusters, and the other thing is that you create a cluster CRD. In my opinion, the best way would be to watch the cluster objects — Rook watches the cluster, and somebody else can also watch the cluster — so maybe use some hook or something: when you see the cluster objects, you can just modify the resource consumption and the placements in the cluster objects before Rook actually creates the cluster.
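(A minimal sketch of that mutation idea — patching resource requests on a Rook cluster object so the operator picks up the new values. The group/version/resource follow later Rook releases, and the spec field path is hypothetical.)

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// bumpOSDMemory patches the OSD memory request on a Rook cluster object
// before (or as) the operator reconciles it.
func bumpOSDMemory(client dynamic.Interface, ns, name, memory string) error {
	gvr := schema.GroupVersionResource{
		Group:    "ceph.rook.io",
		Version:  "v1",
		Resource: "cephclusters",
	}
	obj, err := client.Resource(gvr).Namespace(ns).Get(
		context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	// spec.resources.osd.requests.memory is an illustrative path;
	// the real CRD schema may differ.
	if err := unstructured.SetNestedField(obj.Object, memory,
		"spec", "resources", "osd", "requests", "memory"); err != nil {
		return err
	}
	_, err = client.Resource(gvr).Namespace(ns).Update(
		context.TODO(), obj, metav1.UpdateOptions{})
	return err
}
```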
I: Yeah, we think that's the best approach also. It's just — I guess our team wants to do some more work; I guess they want to make an example for the community. That's it, yeah. We already can just expose all the prediction data and recommendations, and anyone can use it. It's just, I guess, they want to do a little more, yeah. So, yeah, I agree.
I: Yeah, well — our software, so we have the Alameda software and everything already done. It's just that for our first example we wanted to use Rook Ceph, specifically Ceph on Kubernetes. And then we're just hoping to get some feedback — like, if you guys had any interest in this kind of thing, or wanted to give any suggestions.
H: It depends on what the resource changes are that you want to make. If you want to change the size of the OSD caches, for example, that's something that you just tell the OSD to do on its own, and it'll adjust the amount of memory that it's consuming. And if I understand correctly, Rook isn't even setting a memory limit on those pods currently. So probably the Rook operator would watch — oh, you see that prediction change — and then just tell Ceph to adjust, or something like that, to match.
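(A minimal sketch of the "tell the OSD to adjust on its own" step — shelling out to `ceph config set`. Treating `osd_memory_target`, which exists in newer Ceph releases, as the right knob here is an assumption.)

```go
package main

import (
	"fmt"
	"os/exec"
)

// setOSDMemoryTarget asks Ceph to cap OSD memory usage; the OSDs then
// shrink or grow their caches on their own to stay under the target.
func setOSDMemoryTarget(bytes int64) error {
	out, err := exec.Command("ceph", "config", "set", "osd",
		"osd_memory_target", fmt.Sprintf("%d", bytes)).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ceph config set failed: %v: %s", err, out)
	}
	return nil
}
```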
H: I think another slick thing is: if you could get usage information on PVs and feed that back, we could use the new RBD live-migration thing, where we could actually move the RBD image between different performance tiers based on predicted usage. That might be pretty cool — kind of auto-tiering.
A: All right — so, just doing a time check here: we have 22 minutes left, and we have a couple of items at, say, 15 minutes both. So let's see if we can make good time through here. I have a hard stop at 10, and I don't know if you can change ownership of the meeting to another person, so let's try to make good progress here. So — I'm gonna start sharing my screen again — the next topic was migrating the CI, the continuous integration stuff, to the CNCF infrastructure.
A: And I don't think we have any resident expertise on that, either. The only exposure that we had was, you know, that initial sort of request — opening an issue in the repo saying that we have interest in integrating. So I think there would be some learning, some collaboration, to do there about what the steps are.
G: Like, literally no action on it from them means there's got to be some other buttons to push. And so, I guess, in the absence of anybody knowing that, I can just start to dive into their docs and start to reach out and see what I can find — to see who we have to prod in order to get at least some acknowledgement on the process here. Yeah.
A: I think that's a great idea, and I think right now we can probably talk about what the high-level goals are. So — you know, I think — is the plan here that the hosting of the Jenkins solution would be done by the CNCF in their environment, and that we remove that hosting from how it's currently being done?
A: So, you know, deploying Rook in a cluster that has, you know, Prometheus and Fluentd and Jaeger and all sorts of other CNCF projects, to make sure that the ecosystem as a whole is integrated and, you know, not stomping on each other in certain ways. So that's something that's kind of interesting as well. I'd say that's not a primary goal, but it's an interesting goal that we could potentially benefit from by moving to being hosted by the CNCF.
A: So in that case, you know, from a logistical perspective, some of those operations — like upgrading Jenkins or managing the instances in some way — not everyone has access for that, right? I think, you know, it's hosted by Upbound right now, right? So I don't know how we can best manage those types of operations.
A: What we're currently limping around with right now is what we migrated over from our first incarnation of the CI. So I think we just haven't had the resources to try to build something, you know, newer, with stability — excuse me — and, you know, the latest plugins and all that sort of stuff. So I think there's definitely support for wanting to have something new and reliable.
G: I have some experience — there's automation out there that I had worked on in a previous role to automate stand-up of Jenkins. It's an Ansible role that lives out on GitHub. So if the goal is — if it's possible — to just drop in a replacement and move the jobs over, that should be pretty easy, or straightforward. We've also used the playbooks to upgrade existing Jenkins instances, but I don't know about going all the way back to ancient history.
H: And just to broadly set the stage: there's a strong desire to have automated dynamic provisioning of buckets. The question is — the default choice is to use a service broker, just because that's how a bunch of other stuff is being implemented, and there actually is an implementation for RGW that uses a service broker already. So that's one path, but there's concern about the friction that that involves. So, Aaron is here on the call.
A: Yes — I think that had been one of my initial, somewhat large, concerns as well: the friction that comes along with having to run a Service Catalog or Open Service Broker. That requires, you know, its own etcd instance, since it's using an aggregated API, and, you know, having that flow — that dependency and experience — be part of what we currently have in Rook does, as you mentioned, bring in some friction that I personally am concerned about.
C: Just even before we go there — I'm kind of curious. So, defining this quickly: the open service broker lets you provision and essentially manage services and use them from your application, doesn't it? From a layering standpoint, why can't somebody who's using a service broker, like, provision Rook and use an object CRD within Rook — why doesn't Rook act as one of those many services being brokered, I think?
J: I think there's a lot of unknowns. The driving force is: we want to try to make everything be "as a service" as possible, right? And given that customers want to quickly deploy, you know, buckets to store objects in, doing it from the Service Catalog provides a nice user experience, where, you know, you're not having to understand the ins and outs of the storage provider or set anything up.
J: You can just go to the catalog and, you know, quickly provision that, and have, you know, a URL to go ahead and push things to. So that's the driving force for why we do some things over other things. I don't think it has to be that way — I don't think we're required to do it one way or the other. The idea was just to come up with some sort of workflow that's easy for customers and that makes sense from the Service Catalog. Does that make more sense?
C: As if they were on an on-premise cluster. The model I was kind of having in my head is what the experience should be: we should strive to get a similar experience for people who want to use the service broker. Why can't Rook be the moral equivalent of, say, the S3 managed service?
J: I'm hoping that's what we're striving for. I'm hoping that when people decide they're gonna be an OpenShift customer, they're gonna look at this. The idea is that Amazon, Google — all of these have service catalogs, right? I mean, that's why we created the service catalog. But for Red Hat, our value proposition is that we're gonna put every service imaginable in our catalog as open source — you know, if it's an open-source product, then we'll put it in our catalog.
J: Even if it's not, you know, maybe always in our best interest — which is, like, a funny thing that we do, right. But that experience should be like-for-like. And the importance of that is also when we come into hybrid cloud: if I'm on AWS and I decide to transition all my stuff over to OpenShift, it's very similar in the way that I interact with the storage layer, and that stuff should be equivalent — it shouldn't feel any different from the way I'm provisioning.
C: That makes sense, and so from that, I'm trying to understand the layering on this. So it seems to me that if somebody was using the open service broker, then they've already figured out how to run, you know, an etcd cluster to manage it and do all the stuff they need to. Getting to writing the broker piece that makes Rook appear in the open Service Catalog — that makes sense if people want to do that: writing the code to provision, say, buckets or filesystems within the open service broker.
J: I think we just have to tie the two together — yeah. Where, to set things up: when I'm accessing it from the open service broker, the catalog, whatever operator I used to deploy it in there, it's already configured to use, you know, either like a default storage class that you set up, or stuff with, you know, the proper keys and secrets. I mean, it just works — the ideal with that is I click that and it works, and I'm not having to go set anything up.
J: And also, I can share with you — and I have with Sage in the past — authentication was one of those sticky points that we should iron out. Like, the Gluster team created, like, an Ansible playbook that runs — that's obviously not the way we want to go long term; we'd rather have an operator that deploys all this. But that's the vision — I think everyone's on the same page. So.
C: I have thought about a slightly different approach: instead of a broker that goes directly to RGW, you can write a broker that creates the custom resources used by Rook to, you know, generally provision a bucket or object store. So, essentially, layering: if you think of the open service broker as a consumer of Rook, just like a user would be — yeah — the layering is that the open service broker consumes Rook and then does whatever shimming it needs to do for the open service broker side.
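(A minimal sketch of that layering — a broker's provision step that just creates a Rook custom resource and lets the Rook operator do the work. The CR kind, resource name, and spec fields are hypothetical.)

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// provisionBucket is what the broker's provision handler might do:
// create a Rook CR and let the operator reconcile it into a bucket.
func provisionBucket(client dynamic.Interface, ns, name string) error {
	gvr := schema.GroupVersionResource{
		Group:    "ceph.rook.io",
		Version:  "v1",
		Resource: "objectbuckets", // hypothetical resource
	}
	bucket := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "ceph.rook.io/v1",
		"kind":       "ObjectBucket", // hypothetical kind
		"metadata":   map[string]interface{}{"name": name, "namespace": ns},
		"spec": map[string]interface{}{
			"objectStore": "my-store", // which RGW-backed store to use
		},
	}}
	_, err := client.Resource(gvr).Namespace(ns).Create(
		context.TODO(), bucket, metav1.CreateOptions{})
	return err
}
```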
H: Yeah, as long as the life cycles match up. Because when an application gets deployed and they're like, "I need object storage", we don't want to deploy a new Rook cluster, because there's probably already one deployed — you want to consume the existing storage hardware resources that are already set up. But then there's also the setup of the broker itself, and that's specific.
H: Nice, right — it's the equivalent of declaring the volume claim or storage pool or whatever CRD — a Rook-level one — but for object, which in reality is actually deploying this service broker endpoint that knows how to create buckets for you on your behalf and ferry all the credentials back and forth, whatever it is.
H: Okay, so the original concern was that we want the ability to dynamically provision buckets without having to have all the complexity and overhead of a Service Catalog and a broker. Is that still a desired requirement? Or should we, like, try to layer the two, so that the broker is hosted on the CRDs, I think?
C: The way I'm thinking about it is that — I mean, I'm personally not a big fan of the access through a service broker, but that's not — if people want to use that, then they should be able to, right? And if another approach emerges that can do dynamic provisioning of resources or managed services without having to go through the service broker, then great — they can coexist, you know, I think.
C: Part of the discussion on that issue that we had — and I think there's a little, you know — I think we were trying to compare; we talked about approaches of how to consume or implement dynamic provisioning from there. And I think, from a Rook standpoint, we should just, you know — we should not be that opinionated.
H: I guess the higher-level goal, it seems to me — I don't know if this is achievable or not — is that when you're deploying an application, you should be able to say, "I want a bucket." And I don't know if it's the broker mechanism or something else that should be able to say: oh well, you are on AWS, and the storage classes, policies, or whatever are set up for this cluster such that I'm gonna provision you an S3 bucket; or you're on-prem, or you're running Rook or whatever it is, and I'm gonna link you to that. Yeah.
C: That abstraction does not exist in the service broker — you're actually picking a very specific implementation: you're referencing the name of the implementation through the broker, and the provider for it, and you're specifying things that look like concrete implementation values to populate it with, but —
C: Right, exactly. So that was the spirit of the proposal we were looking at, and that's this issue: can we create a CRD that represents an abstract bucket and is not tied to a given implementation, so that, based on context, the right bucket gets provisioned and the application doesn't need to know about it? That was the spirit, and it seems, you know — unless you start doing meta-brokers and all sorts of other stuff — that was hard to achieve with the existing service broker approach.
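(A minimal sketch of such an implementation-agnostic bucket claim, loosely in the shape Kubernetes CRD types usually take. All names are hypothetical — this predates any concrete design.)

```go
package main

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// BucketClaim is a hypothetical CRD: the application asks for "a bucket"
// without naming S3, RGW, or any other implementation.
type BucketClaim struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   BucketClaimSpec   `json:"spec"`
	Status BucketClaimStatus `json:"status,omitempty"`
}

type BucketClaimSpec struct {
	// StorageClassName selects a class whose provisioner decides, per
	// cluster, whether this becomes an S3 bucket, an RGW bucket, etc.
	StorageClassName string `json:"storageClassName"`
	// AccessModes could express read/write intent, mirroring PVCs.
	AccessModes []string `json:"accessModes,omitempty"`
}

type BucketClaimStatus struct {
	// Endpoint and credentials are filled in by whichever
	// provisioner satisfied the claim.
	Endpoint   string `json:"endpoint,omitempty"`
	SecretName string `json:"secretName,omitempty"`
	Phase      string `json:"phase,omitempty"`
}
```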
C: Okay, that was the spirit of that design, and I don't think we've pursued it in the Rook context, but that was the spirit of the design. But, you know, honestly, I think if people are using the service broker today and they don't care about this powerful, you know, workload portability, I don't see why, as a community, we shouldn't support that. That's nice, yeah — we linked it.
J: Look at what we're planning on moving to, what we're capable of, and see what work items would need to be created to get, you know, to parity — and then do a comparison: go out to AWS, deploy some object store on there through the catalog, and see what that experience is. We can just worry about that piece first; the portability is kind of secondary, in my opinion. We can't boil the ocean with the portability right now — we just need to make it easy to consume object through Rook, through the catalog.
C: The first one, I think, we should probably explore is how we implement it. I think layering a Rook broker on top of Rook's CRDs seems like a reasonable start from my perspective, and we can talk about packaging and deployment and where it ends up. And so — I don't know who wants to kind of drive that, but we should probably design for that.
D: Like ceph-volume — and, sort of long term, there might be some changes that would be good in ceph-volume, so the same sort of things can be used for DeepSea and for Ansible and for Rook. But he wanted to have, like, more technical thinking about it, and I didn't know if that would be a good thing to have in next week's Rook meeting, or if something sort of one-off should be scheduled. Yeah.
B: Okay, I think we want to get going sooner — yeah, we can meet, yeah, sooner. That's —