From YouTube: Ceph Developer Summit Quincy: Rook
Description
Full agenda: https://pad.ceph.com/p/cds-quincy
C
Yeah, I think the manager module is a good thing to talk about. Otherwise, I think I just had some general thoughts about planning, but no real specific features, other than learning about what Ceph is doing in Quincy, so we can make sure we expose those features coming up.
C
Good, yeah. I think a month or two ago we had a meeting on this topic with Miguel (you're still here, yep) and others, about what we want to do with the module and the scenarios around it.
B
There's the bridge between the dashboard and Rook, not so much about the CLI, so.
A
I think, yeah, I think it's mostly the dashboard, but I think in some cases the CLI too. Yes, because we would like the Ceph documentation for scenarios, like how do you deploy NFS, or how do you deploy a new RGW zone, if we can make those work, that would be great, and things like replacing a drive. If we can document the CLI commands that you run to do that and have them work with Rook as well as with cephadm idioms, that would be fantastic.
B
Yeah, especially as Rook is going to remove drive group support (already gone, perfect), which means that it's going to be super awkward to implement OSD support in the rook manager module.
B
So
I
I
either
we
we
adopt
the
existing
dashboard
way
of
of
deploying
osds
to
also
support
this
kind
of
look
way
of
doing
things,
or
we
just
leave
it
out
for
the
moment,
and
I
mean
if
we
want
to
want
to
support
the
rook
way
of
doing
then
we
have
to
extend
the
orchestrator
interface
between
between
the
dashboard
and
the
rook
manager
module
to
combinate.
How
rook
does
things.
C
Right, there are primarily two different ways. One way is you can just tell it to go look for all the available devices and consume them, or, also with raw devices, you can tell it a list of device names or basic properties (well, names is really the only thing there). But the preferred way really is with PVCs. So if you have any sort of PVC, whether you're running in a cloud provider like AWS that could give you a gp2 PV, you can tell it:
C
Yeah, go create the OSDs. After you generate the PV, or dynamically provision a PV, I should say, Rook will create the number that you tell it and will try to spread them across the cluster evenly. And I mean, it's basically that you can dynamically provision them, and Rook kills and creates them.
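To make the two modes concrete, here is a minimal Python sketch of how an OSD request might translate into them. The osd_request shape is a made-up stand-in for a drive-group-style spec, and the output keys mirror the Rook CephCluster storage fields (useAllDevices, devices, storageClassDeviceSets) as I understand them; treat it as an illustration, not Rook's actual code.

    # Hypothetical sketch: translate a simplified OSD request into the two Rook styles
    # discussed above. The input shape is invented for illustration; the output keys
    # follow the Rook CephCluster CRD's spec.storage section.
    def rook_storage_spec(osd_request):
        if osd_request.get("pvc_storage_class"):
            # PVC-based mode: Rook creates OSDs on dynamically provisioned volumes
            # and spreads them across the cluster.
            return {
                "storageClassDeviceSets": [{
                    "name": "set0",
                    "count": osd_request.get("count", 3),
                    "volumeClaimTemplates": [{
                        "spec": {
                            "storageClassName": osd_request["pvc_storage_class"],
                            "resources": {"requests": {"storage": osd_request.get("size", "100Gi")}},
                            "accessModes": ["ReadWriteOnce"],
                            "volumeMode": "Block",
                        }
                    }],
                }]
            }
        if osd_request.get("device_names"):
            # Raw-device mode with an explicit list of device names.
            return {"useAllDevices": False,
                    "devices": [{"name": d} for d in osd_request["device_names"]]}
        # Default: consume every available device Rook discovers.
        return {"useAllDevices": True}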
A
But that's a new feature for the orchestrator interface to provide, and then that has to get surfaced all the way up into the dashboard. So it seems to me that right now the dashboard has one way to create OSDs, and it's the drive groups. But I think the other way that we should add is: when you're looking at the device list, the device inventory of the physical devices, you should be able to just say, create an OSD on that device.
A
Like, I see it, it's available, use it, create an OSD. That probably can be done with Rook also, right? Because that's actually not going through the PVC interface, or maybe it is, but that's hidden by the details. But if.
C
Oh yeah, yeah. I guess my summary is: upstream users just find it so much easier to say, go consume the devices on the nodes that I tell you, and use certain filters maybe to look for certain devices. But the upstream users, I'd say, aren't using the PVCs very often unless they're running in a cloud provider.
C
So there's that simplicity of just avoiding the PVs, which I think is attractive to a lot of people, and in that mode, where we're not using PVs, we also have a way to discover what devices are there. The dashboard at one point was going to consume that, I don't know, or the rook module does, I guess, as far as retrieving what devices are there, so you could choose them and then turn around and create OSDs on them.
G
One question about the current support of the orchestrator CLI. Given this disparity around drive groups, if we currently run the apply osd command, what is the expected behavior, and what is the current behavior? Is it working, or is it broken?
G
Yeah, yeah. No, yes, there's been this change, right, of no longer supporting the drive groups. If we run this "orch apply osd" command, what is the outcome of that command? Is it going to break, or is it going to work?
H
I think that it is working for very, very simple cases, okay, when, for example, you are using a host and a device, okay, and this is working. But I haven't tested, for example, what happens with more complex drive groups. I think that is the situation in the whole rook orchestrator at this moment, okay. Maybe we can start to evaluate what the damage is, okay, and see which of these things are working and which of these things are not working, okay, and see how we can start again with providing this functionality.
H
What I think is important is that we need to be focused on keeping the API provided by the orchestrator, only one API, in order to work with bare metal or with the Kubernetes clusters.
H
Okay,
maybe
things
are
going
to
be
a
little
bit
different,
because
we
are
not
in
control
of
the
flash
structure
in
the
case
of
kubernetes
clusters,
but
I
think
that
this
is
the
way,
but
I
think
that
the
most
important
now
is
just
starting
to
work,
doing
evaluation
of
what
is
our
current
state
and
prioritize
the
things
to
do
I
I
would
like
also
to
have
a
session
together:
okay,
the
rook
people
and
and
understated
people
and
the
dashboard
people
in
order
to
show
how
we
are
doing
the
things
now
using,
for
example,
the
dashboard
okay
with
a
parameter
cluster
and
see
what
are
the
possibilities
in
order
to
adapt
this
to
the
acquired
methods
cluster,
because
I
think
that
in
this
moment,
when
we
pass
part
of
the
people
is
aware
of
what
is
happening
in
the
orchestrator
and
in
the
environmental
world.
I
But I think also, I mean, I'm not sure it's even clear in the cephadm case, the non-Rook case, what we are trying to accomplish, you know, with the dashboard, right? I mean, to me it's all about ease of use. I mean, I would even include day one, you know. Basically, we've always gotten, I think, a knock against the complexity of install, right? So, I mean, would it be nice, independent of whether it's Kubernetes based or cephadm based, do we want to have a day one that's really easy to install, first class, for users? Right, I mean, I would love to see that personally in the dashboard, you know, but that's not something that's been prioritized. It's not something we're driving. I'd love to see that, and then go on from there: what are the high level workflows we want to try to accomplish for day two? We put together something a while ago. I think we just viewed it internally; we tried to share it outside, I'm not sure how many people looked at it, but I'd love to have that kind of discussion and figure out what are the workflows we want. We have a limited set of resources in dashboard, and I'd love to see both Kubernetes based and cephadm based deployments probably have the same functionality. But is that what we want? I don't know.
A
There will be a few differences because of, yeah, you know, if Rook is on a cloud provider, for example, then these are all dynamically provisioned devices. But yes, aside from those things, we should try to make it as similar as possible. I agree.
A
Well
I
wonder
if
it
makes
sense
to
just
to
table
the
osds
for
the
moment,
because
that's
that's
like
the
most
complicated
part,
and
it
seems
like
the
the
low
hanging
fruit
here-
is
to
make
sure
that
we
have
parity
in
the
implementation
of
all
the
other
services
in
the
orchestrator
layer
so
that
you
can
deploy
rgw
zones.
You
can
deploy
nfs
servers.
You
can
deploy
the
manager
mds
file
systems,
all
that
stuff
make
sure
all
that
stuff
is
working.
A
So, having thought about this a bit this last week, I'm kind of thinking that we should just bite the bullet and write a teuthology test that deploys Kubernetes with kubeadm, so that we can deploy a cluster through teuthology and have this as part of our normal regression tests that we do with all the orchestrator layers and stuff.
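A hypothetical sketch of what such a teuthology task could look like; this is not an existing qa task, and the remote selection and kubeadm invocations are assumptions meant only to show the overall shape.

    # Hypothetical teuthology task sketch for bootstrapping Kubernetes with kubeadm.
    import contextlib

    @contextlib.contextmanager
    def task(ctx, config):
        config = config or {}
        # Treat the first remote as the control-plane node (illustrative choice).
        control_plane = next(iter(ctx.cluster.remotes.keys()))
        control_plane.run(args=[
            'sudo', 'kubeadm', 'init',
            '--pod-network-cidr', config.get('pod_cidr', '10.244.0.0/16'),
        ])
        try:
            yield
        finally:
            # Tear Kubernetes back down so the machines can be reused by later tasks.
            ctx.cluster.run(args=['sudo', 'kubeadm', 'reset', '-f'])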
B
And otherwise we will never be able to tell whether the rook manager module still works.
J
Yeah, I pasted one of those in the chat. Maybe we can discuss this in more detail in a separate call, but one of the things we want to do for Quincy is to have more automated reformatting or re-provisioning of OSDs.
F
Yeah, I think it's true for all the components too, like the mons as well. I've been thinking about this with some of the issues we had with the compaction of the mon store, so it would be nice to take them offline, do something, and then bring them back up. So yeah, it works for mons as well.
A
The initial driving motivation here is that BlueStore needs this fsck or whatever to run in order to convert the omap to the updated format, and we didn't do that by default with the Pacific upgrade because it is super slow.
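For reference, the kind of offline step being described might look roughly like this. It is only a sketch, assuming the OSD can be stopped via systemd and that ceph-bluestore-tool's quick-fix command is the right conversion entry point on that host.

    # Rough sketch of a per-OSD offline omap conversion; paths and unit names are
    # illustrative assumptions, not from the meeting.
    import subprocess

    def convert_osd_omap(osd_id: int) -> None:
        osd_path = f"/var/lib/ceph/osd/ceph-{osd_id}"
        subprocess.run(["systemctl", "stop", f"ceph-osd@{osd_id}"], check=True)
        try:
            # quick-fix performs the omap upgrade without a full fsck/repair pass.
            subprocess.run(["ceph-bluestore-tool", "quick-fix", "--path", osd_path], check=True)
        finally:
            subprocess.run(["systemctl", "start", f"ceph-osd@{osd_id}"], check=True)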
A
Or converting them to SeaStore later, or maybe, I mean, if we were having this conversation two years ago, it would have been that we want to convert FileStore OSDs to BlueStore. You can't.
A
Because we don't support FileStore with cephadm anyway, so it doesn't really matter. I'm a little nervous about, or unsure about, that case, because in my experience the times when you really want to repave OSDs, it's because of the whole infrastructure around how you divvied up your devices, like.
F
If you want to revamp everything. I think I was thinking about something else, like if you want to encrypt the disk, for example, which is similar to switching from FileStore to BlueStore and everything: then you have to touch the drives, so it becomes an entirely different workflow, yeah.
F
I mean, if we want to add more init containers to the current sequence, then I think it's easy, because what you're talking about essentially comes from an upgrade. So when you upgrade, you go host by host, so you always wait in between, so it's fine. And then I think you can be in a situation where you always run those containers. For example, in Rook we always run the expand-bluefs one, just in case the PVC size changed, so we will just automatically resize BlueFS. We could do the same for resharding once that's done, so that seems relatively easy to achieve.
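A rough sketch of such a maintenance init container, expressed with the Kubernetes Python client. The image tag, the mount path, and passing the sharding definition in as a parameter are illustrative assumptions, not Rook's actual generated pod spec.

    # Hypothetical maintenance init container of the kind described above.
    from kubernetes import client

    def reshard_init_container(osd_id: int, sharding: str) -> client.V1Container:
        # Assumed data path inside the OSD pod; the real layout may differ.
        osd_path = f"/var/lib/ceph/osd/ceph-{osd_id}"
        return client.V1Container(
            name="osd-reshard",
            image="quay.io/ceph/ceph:v16",  # placeholder image tag
            command=["ceph-bluestore-tool", "reshard",
                     "--path", osd_path,
                     "--sharding", sharding],  # sharding string taken from the Ceph defaults/docs
            volume_mounts=[client.V1VolumeMount(name="ceph-osd-data", mount_path=osd_path)],
        )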
A
I'm just trying to think how this would be abstracted into something that the orchestrator would run. I wonder if the way to think about it is as a maintenance task, and you define a series of different maintenance task types. One maintenance task would just be fsck, that'd be sort of the simplest one, but you could have one that's reshard, or one that's whatever, and these would be something that has to be built into the manager module or the orchestrator module.
A
Well, I think initially the primitive would just be something that you would manually trigger. That'd be the very first thing: go do this maintenance task on this OSD, and it would stop the daemon, run the maintenance task, gather success or failure and/or a log, and then start it again. And then there'd be something on top of that that would automate it.
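The primitive just described could be sketched like this; the class and method names are invented for illustration and are not part of the actual orchestrator interface.

    # Hypothetical sketch of the maintenance-task primitive: stop the daemon, run one
    # task, collect the result, always restart.
    from dataclasses import dataclass

    @dataclass
    class MaintenanceResult:
        success: bool
        log: str

    class MaintenanceBackend:
        """Implemented differently by cephadm (ssh/systemd) and Rook (init containers/jobs)."""
        def stop_daemon(self, daemon_id: str) -> None: ...
        def run_task(self, daemon_id: str, task: str) -> MaintenanceResult: ...
        def start_daemon(self, daemon_id: str) -> None: ...

    def run_maintenance(backend: MaintenanceBackend, daemon_id: str, task: str) -> MaintenanceResult:
        backend.stop_daemon(daemon_id)
        try:
            return backend.run_task(daemon_id, task)   # e.g. "fsck" or "reshard"
        finally:
            backend.start_daemon(daemon_id)            # always bring the daemon back up

Anything that iterates over the whole cluster, gating on whether it is safe to stop each daemon, could then be layered on top of this single-daemon primitive.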
F
But if it's a recurring event, it would have to be like a job. If it's an upgrade pass, then it's easy: we add it to the spec, and every time we restart the deployment, we go through it. But if this is something after the upgrade that we would need to run periodically, then it would have to be, like, in Kubernetes it's easy, because we have jobs and we can schedule them on hosts and everything, and we just need to write the logic, yeah.
A
It wouldn't necessarily be like a recurring task; this might be a one-time thing, and it might be that the admin says, okay, go restart all my OSDs, and then it goes one by one, but you do that particular task once, not like you do it every month.
A
And I mean, I think the main question for me, I don't know if Josh is coming back or not, but the main question for me is whether, yeah, having (oh, there you are, you just moved on my screen) having like a small set of predefined tasks with built-in support, tasks that you can do, whether it's fsck or reshard or drop allocation data or whatever it is, or repair.
L
And like the idea of, you know, defining them as separate tasks, of which a user can decide whether they want to do it as a part of the upgrade or later, as like a maintenance task. And initially we need to identify, you know, resharding or format changes and things like that, but I can imagine other ceph-bluestore-tool operations can also become a part of such maintenance operations. You know, they could be generic tasks where, you know, you need more knowledge than, you know.
J
If you have like one wrapper that runs all of these, then users don't need to worry about which ones they need to use or not; it could just get all their OSDs to the latest and best settings, and it could be built on top of these more specific, more customizable tasks.
C
I'm thinking of this a little bit as: we have some settings in the CR that say, do you want to enable this or this or this during the OSD restart, and then, if you want to enable resharding, we just enable that init container and restart the OSD, and when it's all done, we disable it again. Or for more custom tasks, you know, then that's where it's harder with Rook, because you got it.
A
I'm not sure it's a given that this has to be something that's reflected in the CRD, because this could be something that the Ceph manager module drives, right? So, if there's a way for the manager module to tell an OSD, a specific OSD pod, to stop, and then to just, as a one-time thing, run a job on it and then start it again, if it can do that sort of.
A
Is
a
situation
where
we
have
to
re-implement
like?
Like
so
say,
we
have
this
primitive,
where
you
can
these
this
basic
capability
and
that
adm,
even
where
you
can
just
you,
can
stop
a
demon,
run
some
maintenance
operation
and
start
again
immediately.
The
next
thing
you
do
is
on
top
of
that,
build
something
that
looks
similar
to
the
upgrade
where
you're
gating,
based
on
okay,
to
stop
and
doing
this
across
the
whole
cluster
and
iterating
across
a
little
cluster.
A
If
that's
implemented
in
the
manager
like,
we
don't
need
to
re-implement
the
whole
same
thing
in
rook.
If
we
can
just
make
those
low-level
operations,
something
that
you
can
do
your
manager,
rook
right.
F
We can do it simply: we can have init containers that we only activate and deactivate, just like Travis said, and then the manager module can just look up what the pod name is and set an environment variable, which will effectively restart the deployment, go through all the init sequences, and activate the desired maintenance containers, yeah.
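A minimal sketch of that pattern from the manager side, using the Kubernetes Python client. The OSD_MAINTENANCE_TASK variable, the container name and the rook-ceph-osd-<id> deployment naming are assumptions for illustration.

    # Hypothetical: flip an env var on the OSD Deployment so the pod restarts and the
    # maintenance init container runs on the way back up.
    from kubernetes import client, config

    def trigger_osd_maintenance(osd_id: int, task: str, namespace: str = "rook-ceph") -> None:
        config.load_incluster_config()          # running inside the mgr pod
        apps = client.AppsV1Api()
        patch = {"spec": {"template": {"spec": {"containers": [{
            "name": "osd",
            "env": [{"name": "OSD_MAINTENANCE_TASK", "value": task}],
        }]}}}}
        # Changing the pod template causes a rolling restart, so the init sequence
        # (including the activated maintenance container) runs again.
        apps.patch_namespaced_deployment(name=f"rook-ceph-osd-{osd_id}",
                                         namespace=namespace, body=patch)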
J
Okay, that sounds like a pretty good way to go. Are there any concerns about doing this same kind of thing for cephadm?
F
Okay, so I guess once you have a blueprint or a PR for cephadm, just highlight us and we can go ahead and simply add the relevant containers. But then someone has to write the manager logic to find out, okay, which OSD id maps to which pod, and then go ahead and kubectl the right command, or just call the API, whatever, and then this will trigger the maintenance on the OSD.
F
But then, at this point, the manager will have to drive it, going OSD by OSD, because this is really out of Rook's control at this point. Okay, we just provide the facilitators, the init containers, and then you execute them. I think it's a good pattern, because if you go into the toolbox, or if you go with the CLI, you just have the manager do, like, "orch daemon repair", and that works.
F
I don't know, but I just read that, because for me it doesn't really make much sense, and I just know, or at least I remember, how hard it was to implement it in containers. It looks like somehow it was successful for cephadm, but I just don't really want to go back to that experience. But if we are looking at bringing parity between the manager interfaces, then why would one have iSCSI and the other not, I guess, is the question.
A
Yeah, I mean, things like NFS and iSCSI only make sense if you are providing storage to something external to Kubernetes, at least in my brain; maybe I'm missing something. And in that case, I'm still fuzzy about how that actually works: if Kubernetes has this whole virtual networking layer and you're providing a service IP, is that IP accessible to things outside the cluster?
A
A route, yeah. So Rook just creates the service, and then, depending on whether you're using routes, or there's a few different things, load balancers, the cloud providers have different ways to connect the service to the outside world. Okay.
A
Maybe NFS would be the, I think making sure RGW and NFS work would make the most sense. Sorry, then, maybe this is another stupid Kubernetes question, but in the case of a service, like if you have an RGW object store CR and you say I want three RGW daemons, and there's a service associated with that: does that service have one IP associated with it, or is there a service for every instance of the daemon?
A
This is, sorry, this is backing up a little bit, to what we were talking about before, with that sort of having parity between what cephadm is doing and what Kubernetes is doing. But maybe the route is actually the moral equivalent of a virtual IP and haproxy on the cephadm side.
A
Yeah, I mean, I wasn't trying to go there. What I'm saying is that, right now, when you deploy through the orchestrator API, if you deploy an RGW spec, on cephadm we're going to deploy n RGWs, and on Kubernetes we're going to deploy n RGWs plus the service abstraction. But the service abstraction doesn't actually let an external user access RGW with load balancing or anything like that; you have to have the route in order for that to happen, no?
F
No,
you
you
once
you
get
the
service,
then
it
just
like
terminology
wise.
The
service.
Ib
is
the
whip
of
keeper
id.
If
you
will
okay,
because.
F
At
this
point,
and
then
the
route
is
just
something
that
it's
like
exposing
like
opening
the
port
it
like
gives
you
like,
a
public
id.
If
you
want
or.
A
Right, but is it, I think that might be the same, I think that's what I'm saying: the external world can't access it until you deploy the route, basically, and you specify that external IP, yeah.
A
Virtual network, yeah. So I'm wondering if that is the equivalent, on the cephadm side, of: we deploy the RGW service and we optionally specify that I want this virtual IP, and that means that somebody outside can access it. Outside the cluster, like, there's no virtual networking, so inside versus outside is a bit fuzzy, but.
F
I
think
the
reason
why
you
have
a
vip
is
because
you
want
to
bring
h8
to
a
hedge,
a
proxy
right.
Yes,
so
you
kind
of
need
it.
You
kind
of
need
that
one,
even
if
this
means
exposing
to
the
subnet.
I
guess
where
the
vp
is
on
running
on.
F
Honestly,
I
I
don't
know
it
really
depends
up
being
on
the
on
the
on
the
cni
and
all
the
plugins
yeah.
So
I.
E
Well, you don't go through haproxy at all, unless there's a separate service for that, and you know.
A
Yeah, okay. Well, I guess what I'm getting at is that it could be, it seems like we have two options.
A
So we get that single IP that you can hit with HA, and that triggers the whole HA framework, and on the Rook side, if you don't have that, then it deploys the way it does now, where you just have a service and you can access the service internal to Kubernetes but not externally. And if you do specify that property, then you get a route, and then you get all the external access, and from a high level, 10,000 foot view, they mean the same thing, right?
B
As long as we don't aim for feature parity with ceph-ansible, right, because ansible provides a lot more in-depth configuration options than just the IP.
A
Right, well, so the second part of that is that it's optional in both cases. I was getting all wrapped up in this because in the Rook case it was not optional; the service was always deployed. But I didn't realize that there was this extra route piece that is optional for the external part, and that more logically maps onto the HA piece.
A
So I guess the punch line, then, is: if that's the case, then we could have the HA piece of this be a separate logical service at the orchestrator layer, similar to where I was going before, so that you deploy the RGW service or the NFS server through the orchestrator.
A
Later
you
do
support
supply,
rgw
blah
and
that
just
deploys
the
demons
and
then
separately
you
do
step
forward
to
apply
or
route
or
whatever
you
want
to
call
it
at
the
orchestrator
layer
and
that
maps
to
the
kubernetes
routing
stuff
and
the
the
sephadm
keep
the
live
d
aj
proxy.
Why
one?
Which.
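A hypothetical sketch of what such a separate HA/route service spec might carry; none of these field names come from the actual orchestrator specs, they just illustrate splitting the exposure piece from the RGW/NFS deployment itself.

    # Hypothetical "HA / route" spec referencing an already-deployed backend service.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class HAServiceSpec:
        backend_service: str            # e.g. "rgw.zone-a", deployed separately
        virtual_ip: str                 # VIP for keepalived on cephadm; external IP for a route on Kubernetes
        frontend_port: int = 443
        placement_hosts: List[str] = field(default_factory=list)

    # On cephadm this might translate into keepalived + haproxy daemons; on Rook/Kubernetes
    # into a LoadBalancer Service or an OpenShift Route pointing at the RGW pods.
    spec = HAServiceSpec(backend_service="rgw.zone-a", virtual_ip="10.0.0.50",
                         placement_hosts=["host1", "host2"])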
F
Why wouldn't you just go with the HA approach straight away, instead of having to deploy a RADOS Gateway and then enable it, if you can get it for free and then add more haproxies?
B
My problem is that I haven't made up my mind about RGW HA. I just haven't had the cycles to look into that topic in depth, and I think both options are okay. Having it as a separate entity makes it more cumbersome to deploy as a user; having it within the RGW service makes the RGW specification a lot more complicated, as we would have to have a lot of settings specific to it.
B
Having it as a separate entity makes sense, as it is more.
A
Yeah, I mean, I think the same set of issues come up that I was tripping over before. Like, do you always combine, is it always a combination of keepalived and haproxy bundled together as a unit, because then you can deploy those anywhere and they can point to a service? Or do you want to make it possible to do, like, active/passive with just a virtual IP, or.
B
Splitting it out into different parts that are more independently usable, that's easy with Kubernetes, because Kubernetes has deployments and services and pods and stuff like that.
B
And virtual IPs, it's a bit more cumbersome in cephadm, because we don't have those abstraction layers already provided; we have to build them on our own, and we always have to find the balance between keeping it simple, keeping it.
B
Especially
crafted
for
us
fadm
on
the
unh
one
hand
and,
on
the
other
hand,
making
it
more
modular
and,
to
some
degree,
re-empty
re-implementing,
self-adm,
re-implementing,
kubernetes
and
cdm,
and
we
always
have
to
find
a
balance
at
some
point.
B
We
implemented
those
specification
files
because
the
cli
was
just
not
not
great,
and
I
think
we
have
to
do
that
here
again:
kind
of
providing
different
layers
of
of
that
thing,
one
one
user
interface,
that's
maybe
lgwha
and
then
also
providing
access
to
the
lower
level
objects
for
keeper,
fd
or
hip
rookie
independently.
F
I honestly have always been against having haproxy as part of cephadm, just because haproxy already exists on its own as a role for ceph-ansible, just like keepalived does, and embedding it into the product always felt a little bit wrong. And I guess the problem is that once you start doing this, those two have so many configuration options, or so many ways they could be deployed.
F
Yeah, and you can just keep the stance today that you only deploy Ceph components, and if you want to bring HA to RGW, then you need to have a load balancer somewhere.
A
Yeah, I guess it feels to me like having that component, that module, be an optional thing that we can deploy or not deploy gives us a bit of an out, because we can either deploy it in a reasonably opinionated way, or you deploy your haproxy yourself if you need to, and these are the hooks so you can figure out what the discoverable endpoints are that your haproxy points to.
C
I've added one more item in there. I don't know that it's a big topic to discuss, but bucket provisioning with COSI, the Container Object Storage Interface, I think it's called. That's coming soon; Kubernetes 1.21, I think, is the first release with it, and Rook will basically be providing a storage class that lets you easily provision buckets, right.
C
I
don't
know
if
there's
anything
that
would
make
sense
as
far
as
the
dashboard
or
maybe
it's
already
being
done
or
you
know,
bucket
provisioning
is
an
area
that
that
community
is
interested
in,
but
is
there
anything
that
needs
to
be
done
at
a
different
level
as
well
for
like
for
stuff
adm?
I
don't
know.
O
Yeah, I guess my thought is, yeah, I hope we find the time to prioritize integrating with that in this coming year, because even though it's going to be in alpha, I think that's something to get started with earlier.
O
It's
gonna,
be
the
official
kubernetes
way
to
do
it.
But
that
said,
I
I
don't
see
a
direct
analog
of
that
to
seth
adm,
because
it's
like
a
kubernetes
like
my
application,
wants
object,
storage,
I'm
gonna,
request
it
and
then
there's
a
sort
of.
O
Yeah,
so
I
guess
my
thoughts
are
mostly
like
I.
I
think
this
is
a
priority
we
should
have
for
rook
in
the
coming
in
the
coming
year.
I
don't
I
don't
see
it
being
something
that
we
need
to
think
too
hard
about
for
quincy,
although
I
think
there
are
certainly
things
we
could
talk
about,
especially
as
it
comes
to
like,
like
brownfield
buckets
and,
and
things
like
that,
like
how.
O
So there is maybe, actually, now that I start talking, something we could talk about: whether there's meta information we can put on buckets and bucket users to identify how they were created, or by what mechanism they were created. That way, in Rook, you know, we can tag something like, oh, we created this for this COSI bucket.
O
This
is
the
same
request
coming
in.
We
know
we
can
just
continue
on,
but
if
it
was
created
with
a
different
mechanism,
then
maybe
we
want
to
flag
an
error
and
say
this
bucket
pre-exists.
I
don't.
O
I guess my thought is the metadata would really just be enough information to say the source of something. It would be a string that's, I don't know, 64 characters long, maybe 128 characters long.
O
Yeah, it would effectively just be something to indicate, to cross-reference that it is from a request from COSI, or from the object bucket claims that we currently have, just to say, okay, the object bucket claim is this name in this namespace, which I think is like 128 characters of meta information.
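A rough sketch of such a provenance marker, written with boto3 against an RGW S3 endpoint and assuming bucket tagging is available there; the tag key, its value format, and the idea that bucket tagging is the right place for it are all illustrative, not a settled design.

    # Hypothetical: record which claim (OBC or COSI) a bucket was provisioned for.
    import boto3

    def mark_bucket_source(endpoint: str, access_key: str, secret_key: str,
                           bucket: str, claim_namespace: str, claim_name: str) -> None:
        s3 = boto3.client("s3", endpoint_url=endpoint,
                          aws_access_key_id=access_key, aws_secret_access_key=secret_key)
        # e.g. "obc/default/my-claim"; comfortably under the 128-character budget discussed.
        s3.put_bucket_tagging(
            Bucket=bucket,
            Tagging={"TagSet": [{"Key": "ceph.rook.io/provisioned-by",
                                 "Value": f"obc/{claim_namespace}/{claim_name}"}]},
        )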
N
Yeah, we'd do some investigation there. We have some place to put extra metadata, but, you know, maybe it's not a good idea to have a generic field there for metadata; that, you know, could be abused.
P
The use case sounds a little strange, provisioning buckets on a brownfield cluster that has other important stuff on it, presumably with the same users also. And if it was a different set of users, then you can't recreate a bucket that was created by a different user, so that might be a way to avoid the conflicts.
A
And I think the big question in my mind, which is sort of related, is whether COSI has a type of brownfield interface, where you're really not claiming a new bucket but claiming access to an existing bucket.
O
COSI definitely does, and the current, sort of, proof of concept that we have with object bucket claims also has a brownfield case. So yeah, I mean, it's also something that any implementation in Rook would need to know: whether it is a greenfield or brownfield case, in order to know whether checking is something that we actually should do or not, and.
O
Yeah, I guess this idea doesn't have full fruition. I just know that in Rook, even just outside of object storage, there are times when it would be helpful to understand what the source of something is, whether it's a configuration, or, I guess mostly configurations is what I'm thinking about, like what initially created this thing or modified this thing.
K
And the OBC implementation, lib-bucket-provisioner, did in fact use bucket policy and edit it, and then it elaborates it over time to adapt. Okay.
O
I know, yeah, I think some of this discussion is probably good for offline, but I'm really glad to have gotten the conversation started. It might be worthwhile for me to jump in on the next call as well, since it sounds like that's RGW focused, to potentially talk about it a little more.
O
If you haven't already, and I saw Jeff had reached out on the Kubernetes Slack, and yeah, I told Jeff he could ping me with any questions that I can ask during the COSI upstream meetings, and I've been trying to follow those, though the past few weeks have been a little crazy for me. But yeah, I mean, I'm really happy to have Jeff's help doing that, because, you know, I also just have so many things, well.
O
Yeah, I also appreciate all of the knowledge that all of you are bringing about RGW and object storage, because that's not something I have a great depth of knowledge in. But as far as integrating it with Rook, I have also spent some time looking at the lib-bucket-provisioner and how Rook interacts with that, and have rewritten some things recently. So.