From YouTube: CDS Reef: RBD
Description
The Ceph Developer Summit for Reef is a series of planning meetings around the next release and some community planning.
Schedule: https://ceph.io/en/news/blog/2022/ceph-developer-summit-reef/
A: Okay, I suppose we can start. Welcome, everyone, to the Ceph Developer Summit session on RBD. This is just going to be, you know, a planning thing; I'll attempt to enumerate the things that we expect some work to happen on during the Reef cycle.
A: The most important bucket here is rbd-mirror related issues and work items.
A: Even though our rbd-mirror daemon has been around for quite a while (it dates back to 2016, I think), it turns out that even the journal-based mirroring, which is as old as rbd-mirror, is not particularly stable, and there are things to improve.
A: And this goes in particular for the snapshot-based mirroring, which was added in the Octopus cycle, I think, but hasn't seen much use until now.
A: ...the source cluster and the target cluster agree on the state; it's just that the local rbd-mirror daemon gets stuck for whatever reason.
A
These
are
the
the
top
priority
issues
right
now,
since,
even
though
some
of
them
may
appear
to
be
innocuous
and
connect
can
be
worked
around
by
restarting
the
ivory
mirror
demon,
they
actually
stop
their
application
and
at
least
for
that
particular
image,
but
may
also
have
wider
implications
due
to
the
fact
that
the
replayers,
the
inventory
players
within
the
rbd
mirror
demon.
There
is
one
instantiated
for
each
image:
they're,
not
as
independent
as.
A: This is important not just for these workarounds in the short term, but also, you know, just because in Kubernetes environments daemons can be terminated for a number of reasons.
A: The other major bug is in the force image promote command: it has a force flag, and that force flag unfortunately comes with a bunch of preconditions that are just not workable in the real world. The way it works, it depends on whether the source cluster is reachable or not, and if it's not reachable, in most cases it just refuses to work. So, as someone put it, there's not much force to force promote currently, and this again necessitates various workarounds, in particular restarting or stopping the rbd-mirror daemon before issuing force promote, which is not that big of a deal if there's a disaster...
A: ...because those are supposed to be rare, but it's really sloppy and could be addressed. Again, related to that is the fact that the rbd-mirror daemon does not currently give up if the source cluster becomes unavailable.
A: The first one is snapshot scheduler improvements. The snapshot scheduler as it is today is pretty dumb: if you have, you know, 100 images and a certain scheduling interval, then all those images get snapshotted at the same time, instead of those snapshots being spread across the scheduling interval.
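The spreading described here can be sketched in a few lines. This is an illustrative Python sketch only; the hash-based offset and the function name are assumptions, not the actual snapshot scheduler code.

```python
import hashlib

def snapshot_offset(image_name: str, interval_s: int) -> int:
    """Deterministically spread images across the scheduling interval.

    Instead of firing every snapshot at t = 0 of each interval, each image
    gets a stable offset derived from a hash of its name, so 100 images on a
    1-hour schedule trigger throughout the hour rather than all at once.
    """
    digest = hashlib.sha256(image_name.encode()).digest()
    return int.from_bytes(digest[:8], "big") % interval_s

# Each image keeps the same offset run after run, but different images land
# at different points within the interval.
offsets = [snapshot_offset(f"image-{i}", 3600) for i in range(100)]
```

A hash-based offset (rather than a random one) keeps each image's snapshot time stable across scheduler restarts.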
A: ...target cluster. There is an issue, or rather a bunch of wishes, related to logging.
A: The logging is somewhat challenging to configure, particularly so in Kubernetes environments, just due to how some daemons get wrapped into containers and how those containers are deployed.
A: The standard ways of applying log-related options on the command line are more or less not available anymore, because that would require too much intervention and too much expertise, to be frank. There is the centralized configuration database on the monitors.
A: It's been there since the Nautilus release, but the rbd-mirror daemon has not been adapted to make good use of it, and this leads to a bunch of user experience pitfalls. If you ask someone to bump the log levels for the rbd-mirror daemon, it needs to be done on both the source cluster and the remote cluster, and the set of options that need to be enabled for that is really not obvious.
A: So the task here is, first of all, to make everything go to the same log file, because, on top of the issues that I've just enumerated, unless you're extra careful when setting up those daemons and configuring those log-related options, the logs would go to different log files in different directories, and reconciling that, or even capturing all of them, is unnecessarily hard.
A: We need to make all of them go to the same log file, but at the same time make those log streams more identifiable, and just greppable and generally easier to follow if you're just scrolling through the log file.
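The kind of greppable per-image stream being asked for can be illustrated like this. The prefix format below is hypothetical, not the actual librbd log layout.

```python
def format_log(pool: str, namespace: str, image: str, thread_id: int, msg: str) -> str:
    """Prefix each message with the image spec, so one replayer's stream can
    be followed with a single grep instead of by correlating thread IDs."""
    spec = f"{pool}/{namespace + '/' if namespace else ''}{image}"
    return f"[{spec}] tid={thread_id:#x} {msg}"

# All messages for this image are now findable with: grep '\[rbd/vm-disk-1\]'
line = format_log("rbd", "", "vm-disk-1", 0x7f3a, "replay: entry 42 applied")
```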
A: This issue of making log streams more identifiable is not unique to rbd-mirror; I believe there was a similar problem with PG logging in RADOS, and you may have seen, you know, the...
A: There's probably too much information crammed into that prefix; on the rbd-mirror side we have the reverse.
A: There's not much there, except for the ID of the thread that issued the log message, and that makes it very hard to, you know, piece these log messages together in the rbd-mirror context, because, unlike everything else in the RBD ecosystem, it actually instantiates multiple librbd instances; it instantiates one of those for each image that is being replicated, and, you know, if you think about it...
A
It's
you
know.
Repping,
for
that
is
just
a
nightmare,
and
the
other
productization
item
here
is
consistent
image
level
metrics.
A: We already have some, but they're not necessarily consistent with each other, and they're also not exposed as well as they should be: currently it's just a JSON dump, which one can get via a particular rbd mirror image command. We need to make sure all of that is exposed through the admin sockets that individual replayers instantiate, and make sure that the format in which that is done is easily consumable by the new Prometheus exporter that is being worked on. I have a link to the exporter PR; what it basically does is scrape the admin sockets on the node and translate those metrics into something that Prometheus can scrape from an HTTP endpoint.
A
There
is
a
general
issue
with
the
perf
calendars
as
they
are
exposed
by
admin
socket.
You
know
through
admin
sockets
today,
and
that
has
to
do
with
the
fact
that
the
the
matrix
names
are
not
stable,
in
particular
for
rbd.
A
We
embed
the
full
name
and
the
image
name
in
the
in
the
metric
name
and
then
goes
against
like
that.
That's
not
how
permeate
is
that's
not
what
producers
expect,
because
all
of
the
aggregation
facilities
that
it
has
depend
on
the
metric
name
being
stable.
So
if
it's,
you
know,
let's
say
the
number
of
iops,
then
it
should
be.
The
name
should
be.
A
You
know
the
number
of
iops
and
the
the
image
name
and
the
pool
name
and
all
the
associated
metadata
needs
to
be
communicated
in
the
labels.
The
previous
exporter
that
is
under
development
is,
as
a
first
step,
is
probably
going
to
do
that
internally,
so
he
is
going
to
parse
these.
A
Inconvenient
metric
names
extract
stuff,
like
google
name
and
image
name
and
other
label-ish
fields
from
that,
but
longer
term
we
should.
A
We
should
do
that
at
the
counter
level,
there's
a
ticket
that
I
forgot
to
link,
but
there's
a
ticket
for
that.
This
is
obviously
not
specific
to
rbd.
So
there's
this
is
going
to
be
done.
I
think,
as
part
of
this
very
known,
pregnancy,
exporter,
initiative
and
once
that
is
done,
we're
going
to
need
to
switch
the
metrics
that
we
already
have
and
the
metrics
that
we
are
going
to
add
to
that
to
that
new
format.
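The stable-name-plus-labels shape that Prometheus expects can be illustrated like this. The metric and label names below are made up for illustration; they are not the actual counter schema.

```python
def to_prometheus(pool, image, counter, value):
    """Render a per-image counter with a stable metric name, carrying the
    pool/image identity in labels, which is what Prometheus aggregation
    (sum by (pool), rate(), etc.) expects."""
    return f'rbd_mirror_image_{counter}{{pool="{pool}",image="{image}"}} {value}'

# Unstable (roughly today's shape): the name itself varies per image, e.g.
#   "rbd_mirror_image_rbd/vm-disk-1_write_ops", which defeats aggregation.
# Stable (what the exporter should emit):
sample = to_prometheus("rbd", "vm-disk-1", "write_ops", 1234)
# rbd_mirror_image_write_ops{pool="rbd",image="vm-disk-1"} 1234
```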
A: And, well, there's not much to talk about here: we need to see how rbd-mirror behaves with more than a couple hundred images. Some of that work is already underway and we already have some results (we actually hit a bug in RADOS while doing that), but I'm pretty sure there's a lot more to find. In particular, on the rbd-mirror side right now, these efforts are focused on a single rbd-mirror daemon, so we're not talking about multiple rbd-mirror daemons.
A
There
is
some
support
for
that
already,
but
the
the
the
policy
code
that
is
responsible
for
distributing
images
between
between
multiple
library
demons
may
have
may
have
some
some
issues
which
we
don't
know
about
yet
so
that
would
be.
That
will
probably
be
the
next
step
or
the
next
step
in
this
field.
Testing
effort
use,
you
know,
stand
up,
multiple
arbitrary
demons
and
see
you
know,
exercise
the
the
policy
code
and
see
how
that
behaves.
A: So that may or may not land, or may not be worked on, but it would be nice to, because the bulk of the work has been done, and it would be nice to finish it in time for the Reef release. And the other thing, which, you know, depending on how you look at it, should probably receive higher priority, is some sort of checksumming for mirror snapshots: after the snapshots are replicated and applied on the target cluster, there needs to be a mechanism that ensures that everything went fine and that the replicated image matches the original one.
A: Yeah, sorry, I forgot about that one, because in my mind it's more of a feature, whereas the things I've listed out here are... well, except for the consistency groups thing, which I remembered because I happened to be looking at it recently.
A: You know, for some of these there are gaping holes in the existing code. But thanks for that; I'll add it to the pad and to the Trello board, which I'm going to amend based on the outcome of this meeting.
A: Okay, moving on to the next kind of big-ticket item, or group of items: the NVMe over Fabrics gateway for RBD going forward.
A: This is supposed to replace the iSCSI gateway; the NVMe over Fabrics protocol is considered to be superior, and this is where we're going to be investing our efforts going forward.
A: It is not on par with iSCSI yet (it's a work in progress), but the general direction is that the NVMe over Fabrics gateway is going to take over from the iSCSI gateway, hopefully. A single gateway in a single gateway group is, you know, more or less working; it's working well enough for us to have embarked on benchmarking it, and there is an ongoing performance investigation, because we've identified some irregularities.
A: Basically, we see performance degradation in some cases which is, you know, not there when the NVMe-oF gateway is not in play, and these issues appear to be down to SPDK; at least that's the current working theory. SPDK is the project that we use for the NVMe over Fabrics target: just as the iSCSI gateway uses tcmu-runner, the NVMe over Fabrics gateway uses SPDK for this purpose (well, the NVMe-oF target app inside SPDK, to be precise). So there's an ongoing investigation there; there is nothing at the architectural level that would explain these issues.
A: And on the gateway implementation front, the gateway configuration persistence PR is almost ready; there's sort of a final set of comments there that needs to be addressed, and with that, restarts and maybe upgrades will be possible. This also lays the groundwork for having more than one gateway in a gateway group, and more than one gateway group, going forward.
A: So this is the prerequisite for having active/active gateways. I'm not sure if Jonas or others... okay, I don't see anyone who is involved in that effort here, so I'm going to leave it at that, but this is another major focus for us for the Reef cycle.
A: The first one is support for thin-provisioned encrypted clones, and this refers to being able to have multiple clones in the chain. So this would be, you know, a clone of a clone of a standalone image, for example, and, you know, having those clones be encrypted with different passphrases and with different encryption formats.
A: So this PR is going to change that; it is currently under review and should land soon. The next one is support for an NBD stream in RBD live migration.
A: This is a little hard to explain if you're not familiar with the existing live migration feature, but basically it boils down to the fact that currently we have support for live migrating from a qcow2 image (well, not just qcow2; I think qcow v1 images are also supported), but the issue is that that support is partial.
A
Are
supported,
and
just
so
that
limits
it's
usefulness,
because
some
of
these
features
are
actually
starting
to
be
enabled
by
default
or,
if
not
by
default.
Then
you
know
in
in
in
in
in
wide
use,
and
this
means
that
we
can't
we
can't
line
migrate
from
from
those
images.
I'm
gonna
skew
out
to
files.
If
you
will.
A
And
just
at
the
implementation
level,
the
issue
is
that
we
basically
re-implement
the
the
cue
card
to
article
or
format
within
library,
and
we,
you
know,
in
order
to
make
it
feasible.
We
had
to
take
those
shortcuts
and
just
cut
those
cut.
Those
features
out
in
order
to
support
those
advanced
features
or
one
way
of
doing
that
is
basically
reuse.
A: A benefit that this brings is the fact that it can actually preserve sparseness. Now, this is currently not possible with the existing qcow2 streams, so there's a PR for that, and it's next up in the review queue after the thin-provisioned encrypted clones.
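Preserving sparseness boils down to never writing chunks that are holes on the source. Here is a toy illustration; a zero check stands in for the real allocation metadata that an NBD stream's block-status information would provide.

```python
def copy_sparse(src: dict) -> dict:
    """Copy an {offset: bytes} chunk map, skipping all-zero chunks, so holes
    in the source stay holes in the destination instead of becoming
    explicitly written zeros."""
    return {off: data for off, data in src.items()
            if data.count(0) != len(data)}  # keep only non-hole chunks

CHUNK = 4096
src = {0: b"\x00" * CHUNK,                  # a hole: must not be written
       CHUNK: b"data".ljust(CHUNK, b"\x00")}  # allocated data: must be copied
dst = copy_sparse(src)
```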
A: Further enhancements to the persistent writeback cache: there is a plan there to be able to handle the space allocation in the cache plugin itself, instead of relying on the pmem library to do that. This opens up some optimization opportunities, but unfortunately it is also going to change the on-disk format of the pmem mode, and that is the reason that this PR was stalled and moved out of Quincy into Reef.
A: So it is there, and we're going to merge it at the beginning of the Reef cycle, just to make sure that the on-disk format change has enough time to bake and for any issues to be sorted out.
A: And the last thing on my list is QA suite improvements. The thing that I have in mind, that is in particular on the list of things that need to be done: we have a few unstable jobs that either fail or time out, and it's been this way for a while now, and I think most of us have just learned to ignore these failures, but one of these jobs actually masked a test regression.
A
So
luckily
it
wasn't,
it
wasn't
an
actual.
You
know
louis
barbie,
or
you
know,
I
believe,
mirror
regression.
It
was
a
test
regression
where
a
certain
certain
test
case
just
just
stopped
running
due
to
an
unrelated
due
to
an
unrelated
change
completely
outside
of
rbd.
A: Due to this, I would like us to commit to getting rid of this long-standing instability in the test suite in the Reef cycle. I mean, obviously the environmental failures are always going to be there, and, you know, it's always going to be a bit of whack-a-mole with those, but any long-term issues, like the ones I have in mind here, should really be prioritized on this list, because this is a slippery slope that can lead to actual, you know, product regressions.
A: I think that's all I have. Anything else that should be on this list, other than the persistent writeback cache and the multi-peer support that I'm going to...
D: Going back to the rbd-mirror stuff a little bit: one thing we talked about last week was trying to, like, stagger a snapshot schedule and a mirroring schedule, so that there's a consistent performance impact over time. Maybe that's kind of already covered in some of these ideas here around scale testing and so on, but I just wanted to bring that up.
A: Yeah, it is implicit in the snapshot scheduler improvements item; I explicitly called out the spreading of the snapshots across the scheduling interval when I was going over it.
B: Probably not significant, but we were discussing improving the state machine documentation around RBD mirroring, like just better documenting some of the pieces for developers. And maybe the cephadm move, the movement of the rbd suite to using cephadm, is still in the pipeline; I think there's one piece that's remaining, multi-cluster support, but yeah, that needs some work.
A: ...fairly easy to address, and it also would be a good task for someone just getting familiar with the code base, since it's a good way to kind of, you know, have a look at it at a high level and, you know, put together those kind of high-level state diagrams, and it would definitely help anyone who is, you know, opening a particular file for the first time. Feel free to add that to the beginner bucket in the pad.
A: As for the cephadm migration for the QA suite, I believe there is a PR for that. I don't think the person who started it is involved with Ceph anymore, but yeah, going forward it might also be, you know, a good kind of starter thing for someone to pick up to get involved with teuthology and the RBD suite in general.
A
We
do
have.
We
do
have
one
job
that
that
has
been
migrated
to
chef
adm.
That
is,
that
is
the
iscsi
job
that
but
yeah
we
need
to
I'm
not
sure
like.
I
don't
want
to
abandon
package-based
installs
entirely,
just
yet
simply
because
the
like
containerized
like
if,
if
the
ice
kaze,
if
migrating
the
ice
cutter
job
to
this
fadm
is
any
indication.
A
A
B: I think it's only multi-cluster support that's, like, not covered yet; that needs more effort. Apart from it, other suites work fine. But, as you said, I might not be aware of what podman issue might hinder it; it might be similar to the rados suite, yeah.
B: We can get some insights if, like, people hit issues regarding podman in the rados suite as well, as they frequently run cephadm-based tests.
D
There,
I
guess
I'd
say
that
I
agree
in
general.
We
don't
want
to
get
rid
of
package-based
tests
entirely.
D
It
would
help
be
helpful
to
revive
this
pr
and
and
migrate
more
the
testing
to
slip
adm,
and
we
could
keep
a
parallel
speed
protected
for
packages
as
well.
If
you
wanted,
I
agree,
we
don't
we
don't
want
to
get
rid
of
that
coverage
as
well.
A
I
guess
it's
just
my
my
sort
of
take
on
this
has
been.
You
know
that
rbd
does
not
do
anything
special
on
the
server
side
and,
like
we
said
for
adm
coverage,
we
like
we
essentially,
we
should
get
it
for
free
from
the
latest
suite
and
other
suites
that
have
already
taken
the
plunge.
A: So, as far as actual development, actually marrying RBD with cephadm, I would be more interested in having those things integrated with cephadm, in particular the immutable object cache, and maybe, you know, taking a look, just in general, at how...
A: Because at some point we talked about the notion of a client-side cephadm, for, you know, things like: if you have a container registry, then you can just pull those containers, instead of having to fish for packages to install on the client node.
A
Honestly,
I
would
be
more
interested
in
developments
of
that
major
as
far
as
fadm
is
concerned,
but
if
someone
picks
that
vr
up
and
migrates
the
the
non-ibd
mirror
jobs
right,
because
I
think
that's
what
deepika
refers
to
by
saying
that
you
know
the
the
multi-cluster
support
is
missing.
Another
way
of
saying
it
is
that
none
of
the
arbitrary
jobs
would
work.
A: That's fairly low priority, though, because we already have cephadm coverage in other suites, unlike the thing that I listed about breaking down those jobs that are causing trouble and causing us to repeatedly waive them for both stable and major releases.
D: I think the one thing you mentioned there, about moving the client-side pieces to be running in containers as well, will be quite helpful in the medium to longer term, with the work in teuthology to be able to run based on your local container image and avoid a package build at all. I think that's still a ways off before it's ready for use (that's getting started now), but I think that'll be quite helpful in the future.
A: But there is this gwcli utility that you're supposed to use to configure your LUNs and, just in general, interact with the iSCSI daemon, and that utility lives inside of the container. Whereas previously the user could just fire up a terminal, type gwcli and, you know, have at it, they now need to list the containers that are running on the node and identify the iSCSI container...
A: ...log into that container, like exec into it, and only then, you know, can they access the gwcli utility. So, from the iSCSI user experience perspective, the containerization actually degraded it, and these are, you know, the improvements that I would like to see as far as containerization is concerned, as opposed to just migrating the suite over.
A
Yeah
way,
just
like
you
can
do
like
you,
can
invoke
a
sephiroth
shell
and
that
pops
you
into
a
shell
where
a
sub
command
is
available.
It
would
be
good
to
do
something
like
that
for
for,
for
iscsi
and
and
going
forward
for
the
envy
meal
of
fabric's
gateway,
because.
A
You
know
this:
this
is
a
a
utility
that
is
part
of
a
demon
container
today,
but
it
really
should
be.
It
really
should
be
part
of
the
just
a
client-side
set
of
utilities
that
the
user
can
can
access.
A
After
you
know,
logging
into
a
shell
or
something
of
that
sort
it
shouldn't
take,
you
know
it
shouldn't,
be,
you
know,
go
with
containers,
go
grab
for
iscsi
identify
this
particular
container
exactly
into
it.
A: It shouldn't take that. And this isn't specific... like, I kept referring to cephadm, but this may not be just within cephadm; this is a bit wider than that, right? So, for example, in order for this to work, the gwcli utility needs to be made available in one of the Ceph containers...
A
As
opposed
to
you
know,
being
bundled
in
the
in
the
ice
casing
container,
which
also
runs
a
demon,
so
I'm
guessing,
I'm
not
sure
if
anyone
is
actually
going
to
do
any
work
on
that
in
the
recycle.
But
I
guess
it
doesn't
hurt
to
add
containerization
improvements
into
the
list
of
things
and
in
particular,
into
the
trailer
board
so
that
it
can.
It
gets
tracked
and
doesn't
get
forgotten
about.
A: Okay, well, thanks for your input, and thanks for attending the RBD planning session. Hopefully we'll get a good chunk of this actually done for Reef, and I'll see you in the channel. Thanks.