From YouTube: Ceph Orchestrator Meeting 2021-03-30
A: All right, yep, there are a few things on the agenda. Please add anything else that you guys want to talk about.

A: Yeah, so I guess the first item is just a couple of cephadm things. I noticed that I had to change list-networks in order to make the subnet stuff work properly. Did that merge? I think that might have already merged, actually, but it seems like it...

A: ...overlaps with the gather-facts command, and it feels like these should just be merged. But I'm not sure if anybody knows what the history of this was, why they were separate, or if there's any reason not to just pull them together.

A: It's providing similar information. So list-networks originally would tell you all the subnets and then which IP addresses: it was a dictionary of subnet to a list of IPs that are on the host. It was used by the code for the monitor to pick the mon IPs, and I extended it; the networks command also got used for the virtual IP service for keepalived, so I had to make a change to it for that.

A: I changed it so that it also includes the interface name, like eth0 or whatever it is, in there. And when I was looking at the code, I realized that gather-facts is gathering similar information, but not quite: it lists the interfaces, and then it lists one IPv4 address and one IPv6 address. So it's kind of squished in a way that throws away a bunch of information. It's similar, but it's not actually what I needed either. It just seems weird to have two of these; it seems like we should just collapse those two. Yeah.
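For illustration, the rough shape of the two overlapping commands being compared here; the JSON shapes are sketched from the discussion, not copied from cephadm, so field names may differ:

```sh
# cephadm's host-introspection commands discussed above (output shapes are illustrative)
cephadm list-networks
#   originally:        { "10.1.0.0/24": ["10.1.0.11", "10.1.0.12"] }
#   after the change:  subnet -> interface -> addresses
#                      { "10.1.0.0/24": { "eth0": ["10.1.0.11", "10.1.0.12"] } }

cephadm gather-facts
#   reports each interface with a single IPv4 and a single IPv6 address,
#   which is the "squished" overlap that loses the per-subnet detail
```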
B: Yeah, I agree with that. But what are your thoughts: to put the list-networks functionality inside gather-facts?
A: Okay. And then the other one was the haproxy VIP stuff.

A: So I think the main thing that I ran into when I stopped working on it was... oh well, I guess first, I had a conversation with Patrick and Varsha about CephFS and NFS, and they had a question about NFS, because when they were killing the NFS daemon it would restart, or maybe not restart, but sometimes it might get scheduled on another machine, and that would change its IP. And then they're like: how do we make this work?

A: And so I mentioned that the virtual IP thing would be perfect for this, because you could just move the IP around and it would follow the NFS gateway, and they were excited about that. And they need that for downstream also, so I think that's probably the next use case after RGW to focus on and make sure it works.

A: The main gap in the pull request that I have right now is that if you have, say, an RGW service that has a bunch of daemons on a bunch of nodes, there's an haproxy service that can go anywhere, because it's actually redirecting traffic, but the virtual IP service has to be on the same nodes as whatever service it's fronting, and the keepalived configuration usually is probing for a particular port on localhost.

A: So somehow we need to make sure that the virtual IP service is on the same nodes as whatever it is that you're fronting, whether it's NFS, or haproxy, or RGW directly; it needs to map back to that service. And you could do that explicitly now by labeling nodes and then scheduling both of them with the same placement spec.

A: But I was thinking that in sort of the out-of-the-box case, you might just have eight nodes and you don't care where anything runs. You don't care where the virtual IP is. You just want it to work.

A: You don't want to have to label nodes or anything like that, and so I was going to extend the placement spec to have a property called match-service-hosts plus a different service name, and then you could say that the virtual IP service would just be placed on wherever that other service is, whether it's the NFS gateway or whatever it is. And then when it goes and does the apply, it'll just look at the hosts and match it up.
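A rough sketch of the two placement approaches being contrasted; the label-based spec reflects how placement specs already work, while the service type name and the match-service-hosts property are hypothetical placeholders for the idea described above:

```yaml
# explicit approach: give both specs the same label-based placement
service_type: keepalived          # hypothetical service type name, for illustration only
service_id: rgw.foo
placement:
  label: rgw-hosts                # same label carried by the rgw spec's placement
---
# proposed convenience: mirror another service's hosts automatically
service_type: keepalived
service_id: rgw.foo
placement:
  match_service_hosts: rgw.foo    # hypothetical property, not an existing spec field
```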
B: Another approach, maybe, is to hide the virtual IP service inside the other services; for example, to have another attribute in the service just to say "I want a virtual IP for this service", and if you set this attribute, then as part of the normal daemons of this service we deploy a keepalived daemon on the same host, providing a virtual IP for the daemon.

B: I think that is going to be easier to configure for the final user. And probably we are going to avoid... well, it's going to be a little bit more complicated, because we are going to have exactly the same kind of problems that we have with HA for RGW, because you are deploying a couple of different daemons in the service. But in any case, I think it would be nice to keep the configuration simple.
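A minimal sketch of what B is suggesting, assuming the virtual IP becomes an attribute of the fronted service itself; field names are illustrative, and while this resembles what later shipped as the ingress spec, nothing here should be read as the agreed design:

```yaml
service_type: rgw
service_id: foo
placement:
  count: 3
spec:
  virtual_ip: 10.0.0.100/24   # hypothetical attribute: "give this service a VIP"
  # setting it would implicitly co-deploy a keepalived daemon on the same hosts
```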
A: Yep, yep, yep, I kind of agree, which is the other reason why I sort of paused: I wanted to get Sebastian's input on this and see what he thinks.

A: Yeah, I don't know, I'm kind of ambivalent, because I think for haproxy it makes a lot of sense to have it separable, because it can manage a bunch of different stuff, and you might have to control where haproxy runs independent of the other service that you're proxying, and so on.

A: Yeah, yeah, that might make sense, and we can probably figure out a way to avoid any code duplication to make that work. It'll just be a little bit...

A: ...weird. Okay, well, I'll hold on for that, because if we do that, then we don't need that placement-spec matching thing. Yeah, yeah, okay. There's something here: CDS Quincy, a document...
A: Yes, okay, so we want to create a roadmap for the dashboard for Quincy, and we want to schedule a session during CDS to do that.

A: Yeah, I don't think we have... I see you put it on the dashboard meeting agenda.

E: But can I just add something? I mean, from the point of view of the dashboard: you know, we have the dashboard, and some things will go through the orchestrator and some things will go directly to components, right? So how do we get all of those component requirements together? I mean, who is responsible for that, and how do we drive that? For instance, as we break down some of the things (recently we've been making changes), does that force us now to go to components versus going directly to the orchestrator?

E: You know, in some of the multi-site things we just changed... I mean, if we go directly to the components, how do we make sure we get the representation? And we have a bunch of components, right? So you see where I'm going.

E: Exactly, yeah, yeah. Because in general, let me ask a silly question from the point of view of the components: you know, all the component discussions upstream... I can't go to all of them, right? But in general, do they talk about the CLI needs, which is one aspect of management? If they do, then the dashboard should be a part of every discussion as well, right? In a way.

A: Yeah, I think the problem is that by default the CLI is part of the discussion implicitly, because that's what the developers use and that's what they test with and everything, and I think the developers aren't using the dashboard.
E: Okay, we need people to show the value, and, you know, maybe, yeah.

E: But so, let me ask the silly question: if we put it in each of the component sections, then we would need someone from the dashboard team at every single one, right? So we'll have to distribute our team across them, make sure we get full coverage, have those discussions, and then we could synthesize it or bring it back.

A: I mean, yeah, I think it'll be worth it. If you just think about the information that we want to convey: if we want to have people on the RGW team, for example, familiar with what the dashboard does, I think just spending 15 minutes doing a demo, a walkthrough of all the stuff that it currently does, and then following that up with a discussion of what it doesn't do and what should come next...

E: Right. And then from the point of view of, you know, what is planned for Quincy, which is different: functionality that needs management as well. Because I remember last year, in one of the planning sessions, I went to a couple just because they were in my time zone and they were available, and there was no discussion of management. There was some CLI discussion, and so I went away last year thinking: damn, we're missing here.

E: And so when I asked questions about the management side, the conversation would just sort of stop; they'd say "we don't know". So it might be nice to drive that. Okay, Alfonso, can we do that? Will we be able to scale?
E: One of us can maybe drive the dashboard part for a little bit too. What do you think?

D: Yeah, if we are collecting some conclusions in the session, maybe we can.

A: Yeah, yeah, sure, I mean, yeah. I think if you want to have a second, specific dashboard session, just add it to the edge of the pad wherever you think makes sense. But it sounds to me like what's probably going to happen, and I think what's probably going to be most effective, is if we include the dashboard close to the top of all these other components' sessions.

A: If we do the dashboard summary and the discussion around it there, then hopefully that'll seed the rest of the discussion for the session, to make sure people are thinking about the dashboard implications of new features. Then that will get everybody else thinking about it, and then the actual dashboard session at the end is probably going to be mostly the dashboard team figuring out how it's all going to come together, I guess.

E: Yeah, if we do that, and we have the discipline for that for every single meeting, I think that would work great, and then we synthesize all those parts. And we have a very limited team, unfortunately, so we're going to have to prioritize, you know, the workflows we want to concentrate on and the management things we want to concentrate on, which are more important for the downstream. So we'll work on those first, you know. So, hey.
A: There is a user survey; it's almost wrapped up, and there were questions about the dashboard in there, but I haven't looked at them. So, okay, that's... I mean.

E: Do you have a link for that data? We could take a look as well; I'd just be curious. Is that accessible?

A: I mean, that is a good possibility, though it might be that we want the telemetry to indicate how often the dashboard was logged into and which pages were being used, or something. That might be interesting information.

E: Do you try to make every single session, Sage, or do you miss a few as well? I imagine it's tough, I mean.

A: I mean, I think last time around we didn't really do it this way, where we had a whole bunch of separate meetings; there weren't that many, so it hasn't been a problem in the past. But I'm gonna try to make all of these.

E: Yeah, but if we have to prep and everything... you're out Monday, and it starts Tuesday, so we'll have to, you know... Maybe what we can do is concentrate; maybe, Alfonso, you could concentrate on the ones on Tuesday and make sure we're aware of what we need to show from the point of view of the dashboard, make sure we're ready, and then maybe, when Ernesto comes back on Monday, he could pick up the following day or something, if we could work it that way. So.

D: Apologies; well, now it's just about the... well, if somehow the orchestrator team can create some document like this about it (it doesn't have to be called anything in particular; it can be Ceph orchestrator Quincy priorities), then we, the other components, can check what's going on in the orchestrator and bring up our concerns, topics, or comments.
A: All right, that's good. Okay, what else? RGW block in bootstrap Ganesha config?
J: So here the first PR is the one in which the type part is removed from the nfs/volumes interface. Basically, that is one, and another is a related PR in which the RGW block was added to the bootstrap Ganesha config, which is basically a bare-minimum config that starts the NFS daemon. But now, since the RGW block is added there, and on the Rook side it's not added...

J: So one of the links actually points to the config on the Rook side as well as on the cephadm side. So there's a difference now, and I'm not sure why the RGW config block is required, because to my knowledge it's not required for starting up a Ganesha daemon. But it is added, and now it's causing a difference, and in the future that can cause issues maintaining both configs, on the Rook side as well as on the cephadm side.

J: And if it is required, if we can add it at a higher level, like through the NFS plugin or volumes plugin, then that would be great, because we have the cluster type in the interface.

J: And also, the dashboard was initially very closely coupled with Rook, and the dashboard supports RGW exports as well as CephFS exports, so based on that, the other config, which doesn't have the RGW config block, also works for the RGW exports.

F: Yeah, yeah, so one of the things... I think there's two parts to this, though. We need a keyring with the caps to the OSDs, and that's necessary to access RGW, and that's what this block in cephadm is for. I'm not sure just creating an export is good enough for this. And one of the beauties of doing it this way is that we can deploy a single NFS service that supports both CephFS and RGW within a single instance.

F: If we continue down the way that the volumes module is currently implemented, we'd have to stand up a cluster for CephFS first and a cluster for RGW separately, and then I think the dashboard would need to be aware of that, which would be too complicated... well, less than ideal.
A: So I'm a little bit fuzzy here, but just to make sure I'm understanding: this config that's being generated, this is the one that's pushed out as a file into the container, and it's just a bare minimum, and then the config blocks for the actual exports are in RADOS and they get sucked up by the daemon, so that those can be dynamically created and removed. Is that right?

F: Yeah, yeah, that's right. So the basic idea is: we have the bootstrap config, so that the orchestrator will create the keyrings and do the minimal bootstrap config just to get Ganesha up, and then it has a watch URL that points at a RADOS object, and inside that watch URL is where either the volumes module or the dashboard will place the export blocks and stuff like that.
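For reference, a minimal sketch of the kind of bootstrap config being described, assuming invented pool, namespace, and user names (the real template lives in cephadm and Rook, and the RGW block at the end is the one whose presence is under discussion):

```
NFS_CORE_PARAM {
        Enable_NLM = false;
        Enable_RQUOTA = false;
        Protocols = 4;
}

RADOS_URLS {
        UserId = "nfs.mycluster.a";     # hypothetical keyring user created by the orchestrator
        watch_url = "rados://nfs-ganesha/mycluster/conf-nfs.mycluster";
}

# exports are not baked into this file: they are pulled in from RADOS via the
# %url include below, so the volumes module or the dashboard can add and remove them
%url    rados://nfs-ganesha/mycluster/conf-nfs.mycluster

# the RGW block added on the cephadm side but not (yet) on the Rook side
RGW {
        name = "client.nfs.mycluster.a";
}
```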
K: So, Michael, the idea here is, I guess we've got to think big picture for both deployment backends. So you're thinking Rook should eventually also have this RGW block with a special user, just the same as cephadm, no longer use the admin key, and either the dashboard or the volumes plugin should be creating the actual export blocks for RGW.

K: I think that kind of makes sense. You know, I've been beating this drum for a while, and I realize it's not quite easy to get where we want to be, but the deployment backend, either cephadm or Rook, should be as stupid as possible.

A: I wonder if we just need to... I mean, does it matter if... well, I guess the first question is: is the cephadm behavior here the direction that we want to go in, are we good from a cephadm perspective? And then I guess the second thing is: should we get those people in a room and figure out what needs to be done on the Rook side to make these two things converge?

K: Well, I think, first we've got to decide whether what cephadm is doing is the right approach, and I guess so. You know, if we can lift it into the orchestrator layer, that would be fine. Varsha, did you have more thoughts on this?

J: That is fine. The other option would be to add it to the interface, the current cluster create interface, like an RGW flag or something, so that it will add the config with this particular thing, a customized config block, to whatever is set up by cephadm or Rook.
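For orientation, roughly what the interface under discussion looks like once the type argument is dropped; exact argument order differs between releases, and the --rgw flag on the last line is only Varsha's hypothetical suggestion, not an existing option:

```sh
# cluster create without the removed <type> argument (Pacific-era syntax, roughly)
ceph nfs cluster create mynfs "2 host1,host2"

# CephFS export against that cluster
ceph nfs export create cephfs myfs mynfs /cephfs

# hypothetical variant: ask the backend to include the customized RGW config block
ceph nfs cluster create mynfs "2 host1,host2" --rgw
```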
F: That's been my viewpoint, because as soon as you create the export and mount it, that's when RGW actually gets loaded within the Ganesha daemon.

K: My instinct is to keep the NFS clusters as specific as possible, since they're so cheap to spin up. I don't really like the idea of an NFS cluster serving both RGW and CephFS; I don't really feel it's necessary to support that level of flexibility.

K: I think, with the cephadm backports for Octopus, NFS with the dashboard is already broken, and there have been various discussions about aligning it with the volumes plugin, which Varsha is actually in the process of ripping out into a separate NFS plugin.

K: That's orthogonal, so that, you know, the dashboard is not doing its own special thing; it's using the NFS plugin. But there are various barriers, technical barriers, to getting that done, and also just a lack of resources to dedicate to it. But as far as I know, the current state is that the dashboard is already broken: it doesn't work with cephadm's NFS deployments.

F: I think that's partially correct; Kiefer did a bunch of work to make that work. There's a couple of awkward things, like there's an NFSv3 checkbox during the export, but that turns out to be a no-op because we support NFSv4 only. So you can make that selection, but you don't get a v3 protocol.

F: It's not configured that way, but it's harmless. It's more of a cosmetic issue than anything.

K: I guess, you know, if RGW is getting a...
A: Please continue, but... I mean, just backing up a sec to the bigger question of whether it makes sense to have separate NFS clusters for RGW and CephFS, in multiple instances: it almost seems like that might be orthogonal, because I guess the thing that I'm most concerned about is just simplicity. If somebody says "I want NFS", you're like, "great, here's your NFS server", and then... because, I mean, the way NFSv4 works anyway...

A: ...you already have multiple exports underneath, and whether they're under one server or multiple servers doesn't matter that much, except that the IP is going to be different. So having the user just see an NFS server running, and then they're like, "all right, I'm going to add an RGW export, I'm going to add a CephFS export", and another export, or whatever... it just seems like...

K: It's not quite that simple, Sage, because the NFS clusters coordinate their recovery. So we do...

K: You know, I think the grand vision was that we'd be able to set up clusters based off of tenants, right, yeah, and so we were trying to keep them separate so that recovery could be more rapid. So right now each NFS server has to go into grace whenever you have a failover.
K: Yeah, it's cheap to deploy more NFS servers; at least that's true with Rook. With cephadm it's harder, because they can only listen on one port.

F: I think the recovery, though, is based on Ganesha grace, so that's based on the NFS daemon. It's nothing to do with the export, really.

K: Not quite, because CephFS has to participate with the NFS daemon during recovery. One thing it has to do is recover the session, the prior session of the NFS server; that way it doesn't lose caps. And to be complete, I don't think that's actually quite working correctly: one thing it does is just kill off the prior session. I'd have to check with Jeff how that currently works.

K: It might be recommended to use v3 with that, just for backwards-compatibility purposes, but I don't really think that's required.

A: They're gonna have lots of different instances, like the Manila use case with OpenStack, and we need to be able to support those well; and that tenant is not gonna care about an RGW export, certainly not anytime soon. And then there's the other case where you just have a medium-sized Ceph cluster and you just want NFS, and you don't want to have to think about anything, and I think in that case it's going to be...

A: It would be weird... not that weird, but it might be kind of confusing for the user if there's one IP that you're going to use if you're going to mount CephFS, and a different IP that you're going to use if you're going to mount RGW. And then the idea of having just the one NFS server serve both sounds kind of appealing, in that case.

A: Well, or, I mean, I think they should be able to choose if there are multiple, but they shouldn't have to create a separate one if they don't care; there's just the NFS cluster, just one. But it should be easy to not have to worry about it, I guess.
F: We're thinking about adding it, and I still kind of agree with that simplicity case, though; I kind of like that better, personally.

A: I mean, at the end of it, I'm not really sure how people are using the NFS gateway for RGW, but the way that I think it was intended to be used is for import and export. I'm imagining that for most of those users, having one service that is used for all the RGW in the cluster is probably fine.

B: I think that the functionality is there, okay, maybe not using the orchestrator or the dashboard, but you can configure the system to do a lot of things. So maybe we only need to decide to provide a simple use case in the orchestrator and in the dashboard, or maybe more complex use cases in the orchestrator and a simpler one in the dashboard. And, well, probably, I don't know if we are going to go for the most usable use case.

B: Okay, but probably it's just about providing a kind of service. If we want to cover all the different needs in the orchestrator and in the dashboard, and above all in the dashboard, probably we are going to have something very complex to use. So maybe start with something simple and go in the direction that's going to provide more usability. But I think it would be good to start with something simple in the dashboard and something a little bit more complex in the orchestrator layer.
B: And if you want a very complex use case with different kinds of possibilities, you always have the command line directly, to interact with the different configuration of the Ceph cluster and the Ceph daemons.

F: So in my mind, I think we definitely need the multi-port thing, we need the haproxy thing that you're doing, Sage, and then we need the work that Varsha is doing to split out the volumes module into an independent module, and then to lift this config up into the orchestrator.

F: I think those four things we definitely want, and then we could easily come back later and add a flag for the backing store and just change the config. It would be pretty trivial to add if we discover we need that.

F: Well, I think we still need this keyring for the Ganesha grace handling at daemon startup; that's how we ended up with most of this in the bootstrap config to begin with. And in the case of RGW, there isn't really a cephx keyring; it's an access key and a secret key, more along the lines of S3.

A: Right, so that's the only... well, oh, and the RADOS URL, sorry. And this RADOS URL is the one that's used for the URL that includes the rest of the configs, but that's the one user that needs to be global; and then this RGW one, a copy of that could basically go in each export block, for each RGW export, so that each one has a separate user.
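To illustrate the per-export user idea, a sketch of an RGW-backed export block of the sort that would live in RADOS, with its own user and keys rather than a shared admin identity (all names and keys are placeholders):

```
EXPORT {
        Export_ID = 100;
        Path = "/mybucket";
        Pseudo = "/mybucket";
        Access_Type = RW;
        Protocols = 4;
        FSAL {
                Name = "RGW";
                User_Id = "nfs.mycluster.mybucket";       # separate RGW user per export
                Access_Key_Id = "<access-key>";
                Secret_Access_Key = "<secret-key>";
        }
}
```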
A: The volumes plugin already does that, right? So... I don't know if the idea is that the volumes plugin also understands RGW exports or whatever, but whatever it is that's managing the RGW exports, wherever that lives, that should be responsible for setting up keys, because that's common for both Rook and cephadm, right? Especially since they're both deploying the same Ganesha config thing, hopefully identical, yeah.

A: Well, I mean, right, I think the way forward would be to create a manager module that manages RGW stuff, add the ability to manage this export there, and then change the dashboard to consume that, just the way it consumes the volumes plugin. That way we're basically just moving the code from the dashboard to this other thing, this new module, and adding access to that same interface from the CLI.

J: That actually works in line with what I'm doing currently; I'm moving the entire NFS-related stuff out of the volumes plugin, so I can simplify it further so that the dashboard can also consume it. But a substantial amount of work needs to be done on the dashboard part to get that integrated, I would say.
A: It's kind of a janky architecture, but it unifies the CLI and allows the manager to control everything, and then in the future we might change it on the back end so that, instead of actually having that code shell out to radosgw-admin, we make that radosgw-admin code run in the manager itself through librgw or something. But that can be sort of an asynchronous future task.

A: But if that were the case, if that is the direction that we go, then it might make sense that maybe this lives in that module instead of in the orchestrator. Or maybe not. I mean, this is really kind of specific to managing NFS servers and just creating keys, so it's not actually super related to RGW itself. So maybe it does live in the orchestrator even then, I guess. That would be my... yeah, my gut says just put it in the orchestrator.

J: One thing I'm worried about; let me just make sure we are on the same page. So the RGW export interface will take care of the keyring which is required, right? So the NFS cluster interface will not have to worry about all that stuff; it will just deploy the daemons, and this RGW block would be removed from the bootstrap config, right? Yeah.
L: Well, it was Jason, all right. This week we were going to discover who Jason Prime was.

L: Well, we can review where we are, or where we all think we are, which is in this system of etherpads here, yeah.

L: Right, so where we are is: basically we have agreed, I think, on what we want, which is actually no small thing. It was surprisingly easy to come to consensus on what we wanted, probably because it's pretty obvious. The key thing there is, basically, we want an NVMe over Fabrics to Ceph gateway.

L: I, as the Intel representative here, am interested in this from the point of view of enabling ADN to be used here, but it's pretty simple, and it makes sense to construct this gateway so that that's optional. There are a number of reasons at various scales: at extreme scale there are some challenges for ADN; at moderate scale it can work and we can use this gateway that way; and at other scales you may, for various reasons, want dedicated gateways.

L: So we have actually previously discussed the topic of scale, and there is some interest... although we're trying to come up with a solution that works at any scale, even the extreme, you might even call it absurd, scale of thousands of nodes (everything is hard at that scale; we want to make sure it's not impossible there), there is quite a bit of interest in sort of medium-sized clusters that may have 32-ish nodes, something like that.
L
It's
an
explicit
goal
of
us
of
ours
to
enable
the
bare
metal
post
use
case,
which
is
which
requires
a
smart
nic
in
the
host
that
includes
some
nvme
fabrics,
initiator
capabilities
potentially
including
the
a
dnn
host
redirector.
So
you
can
have
a
one
hop
solution
to
a
to
a
set
back
end
of
medium
size.
L
I
don't
think
it's
in
our
scope
of
work
to
explicitly
construct
this.
The
you
know
a
host
smart
nic
and
everything
that
that
requires
mostly
because
there's
no
common
platform.
We
can
all
use
right,
especially
if
you
work
for
intel
so.
L
So
we're
concentrating
on
on
the
gateway
part
and
leaving
the
host
part
as
an
exercise
for
for
some
of
you,
there
were
some
I
always
get
confused
about
which
of
you
ibm
folks,
with
ibm.com
in
your
email
address.
Work
for
the
research
part,
I'm
guessing
that
if
you're
in
zurich,
you're,
ibm
research
and
if
you're,
not
in
zurich,
you're,
probably
ibm
cloud
or
something
else.
M
A
N
Yeah,
as
someone
worked
in
ibm
israel
in
israel,
you
just
look
like
it's.
You
are
from
israel,
you
cannot
see
if
you
are
from
research
or
for
maybe
but.
M
L
Okay,
so
some
of
the
people
that
work
for
ibm,
research
and
some
of
the
red
hat
people
have
been
under
various
ndas
and
have
been
in
other
meetings.
Where
we
talk
about
things
that
we
will
not
talk
about
here,
you
know
going
either
directions
one
of
the
things
I
wanted
to
point
out.
So
we
talk
about
smart
knicks.
L
There's
you
know
we're
here
talking
about
a
sort
of
generic,
smart,
nick
or
one
that
everybody
can
buy,
which,
unfortunately,
for
me
is
from
our
competitor,
but
that's
the
way
it
is
so
we
can
talk
about
that,
but
anything
else,
more
specific,
we'll
be
conspicuously
absent
here.
You
guys
have
all
been
in
well.
Most
of
you
have
been
in
this
meeting
before,
but
we're
sort
of
catching
up,
the
other
people.
L
That's
why
there
may
be
strange
blank
places
here,
it's
because
of
that
unannounced
products,
things
like
that
so,
but
but
it
seems
to
me
that
this
hasn't
really
slowed
us
down,
because
it's
you
know
it's
all.
The
building
blocks
are
pretty
much
the
same.
So
so
we
can
march
towards
that.
Why
I
mentioned
the
ibm
cloud
guys
was
they
had
some
interesting?
I've
heard
some
interesting
comments.
I
have
some
notes,
but
I
never.
I
never
managed
to
note
who
it
is.
L
That's
speaking
because
you
know
we
all
sound
kind
of
like
robots
on
this
impressed,
audio
call
so
about
scale
and
and
other
things
like
that.
So
but
you're
right
we're
all.
You
know
we
have
pretty
much
the
same
goals
here
so
so,
where
we're
at
progress
wise
is,
we've
decided
to
just
go
off
and
build
something,
and
if
you
look
at
the
which
one
is
it
here,
so
jason
constructed
a
set
of
companion
ether
pads
here
and
there's
the
one,
the
nvmf
management
layer.
L
No,
that's
not
really.
Well.
He
created
a
git
repo.
It
has
sort
of
the
prototype,
basically
refactored
from
the
iscsi
gateway,
all
that
python
stuff.
He
he
started
constructing
a
skeleton
gateway
that
would
work
for
mv
member
fabrics.
I
have
actually
not
looked
at
this
code.
In
specifically,
I
had
an
ar,
which
I
had
not
yet
completed,
to
add
a
version
of
svdk
to
that
repost
so
that
we'd
have
you
know
the
the
actual
target
process
and
that
hasn't
happened,
because
the
release
version
of
spdk
isn't
the
right
choice.
L
Unfortunately,
we
need
some
kind
of
frankenstein
branch
here
it
has
to
have
has
to
be
based
on
something
stable.
It
has
to
have
the
fixes
to
the
rbdb
dev.
Oh,
incidentally,
there's
a
list
here
in
the
rbd
nvmf
target
ether
pad
of
things
that
various
people
that
have
done
experiments
with
sbdk
to
talk
to
rbd
at
scale
have
run
into,
and
chief
among
them
was
this.
L
This
huge
scaling
limit,
which
we
discovered
was
because,
when
an
rbdb
dev
is
instantiated,
it
actually
creates
a
rados
context
per
reactor
thread
per
per
image
and
opens
the
image
per
reactor
thread,
which
was
not
the
right
answer.
There's
a
an
rfc
patch.
Do
we
actually
yeah?
I
think
if
you
look
in
the
meeting
ether
pad
a
couple
weeks
back,
I
pointed
at
an
rfc
yeah.
L
This
was
on
the
16th,
so
for
anyone
interested
you
can
look
at
the
status
of
that
spdk
patch,
which
addresses
the
the
most
egregious
issues
there-
and
maybe
I
haven't
reported
this
yet,
but
there
has
been
a.
I
had
a
conversation
with
with
the
spdk
folks
about
making
the
rados
context
a
singleton
like
it
needs
to
be
and
they're
in
agreement
and
that's
accepted
and
will
be
done
by
someone.
I'm
not
sure
who
I
think
I
know,
but
I
can't
speak
for
him
so.
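To make the bdev discussion concrete, a sketch of the SPDK JSON-RPC calls involved; exact option spellings vary across SPDK versions, and the register-cluster call is shown only to illustrate the shared, singleton RADOS context idea rather than the specific patch discussed here:

```sh
# historical behaviour: each RBD bdev opens its own RADOS context (per reactor thread)
scripts/rpc.py bdev_rbd_create rbd image1 4096
scripts/rpc.py bdev_rbd_create rbd image2 4096

# direction being argued for: one shared RADOS/cluster context that all rbd bdevs reuse
# (later SPDK releases expose this as bdev_rbd_register_cluster; illustrative only)
scripts/rpc.py bdev_rbd_register_cluster -b ceph_cluster
scripts/rpc.py bdev_rbd_create rbd image1 4096 -c ceph_cluster
```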
L
So
the
spdk
we
use
here
in
this
git
repo
has
to
include
that.
So
do
I
I
have
to
do
that.
I
have
to
basically
make
us
a
custom
branch
that
which
is
going
to
be
kind
of
a
nightmare.
Really
we
have
we're
going
to
have
patches.
We
have
to
keep
rebased
in
there
and
you
know
by
nightmare
I
mean
there'll
be
status
that
has
to
be
reported
and
it'll
be
rebased
every
once
in
a
while
and
whoever's
using.
L
It
will
need
to
know
that,
and
so
so
that's
the
sort
of
planning
thing
we'll
have
to
we'll
have
to
undertake
here
when
someone's
ready
to
actually
try
this.
Oh
speaking
of
that
we
got
someone's
was
it?
Was
it
jonas
that
offered
to
try
out
the
fixed
rbd,
bdev
or
speculation.
A
M
I,
but
I
didn't
have
time
yet,
but
I
have
something
for
later.
If
you
want
to
finish
first
for.
L
Oh
right,
right
yeah,
I
got
some
feedback
from
the
spdk
guys
on
that
too.
So
so,
just
wrapping
that
up
then
the
the
patch
you
can
see
the
earl
to
there
only
addresses
the
opening
the
image
multiple
times
on
each
reactor
thread.
So
it'll
help
some,
but
you'll
still
have
one
whole
rados
context
per
image.
So
I'm
not
sure
it's
going
to
actually
increase
the
number
of
images
you
can
open
tremendously.
L
But
but
we're
talking
about
this
because
the
spdk
team
has
asked
for
feedback
on
whether
this
helps
at
least
goes
the
right
direction.
They,
you
know,
don't
a
real
self
cluster
is
not
part
of
the
ci
environment
for
for
spdk,
so
they'd
you
know
have
to
go
construct
something
specific
to
do
that
and
the
message
is
that
while
they
can
get
this
fairly
simple
patch
done,
that's
not
a
science
project.
They
can
do
quickly,
so
it
would
help
if
someone
you
know
that
could
be
me
too.
L
But
myself
cluster
can't
be
powered
on
in
my
lab
right
now,
because
I've
only
got
two
kilowatts
per
rack,
which
is
dumb,
but
that's
where
I'm
at
so,
let's
see
wrapping
back
up.
That
was
a
detour
through
the
rbdb
dev,
but
our
our
overall
plan
is
to
is
to
put
something
together
so
that
we
can
at
least
talk
about
use
that,
as
this
place,
to
stand
to
say:
okay,
here's
what
we
should
be
building
right.
L
So
so
in
this
management
layer,
there's
got
to
be
we
sort
of
agreed
there
would
be,
although
I
don't,
I
don't
even
know
where
to
look
for
it
yet
a
mechanism
to
specify
exactly
where
gateways
go
mechanism
to
associate,
obviously
to
to
define
which
images
get
exposed
to
which
hosts
and
ideally
a
way
to
associate
each
host
with
a
specific
gateway
or
set
of
them
so
that
you
can
decide.
You
know
where,
in
your
data
center,
this
io
should
flow.
L
What
the
state
of
that
is,
I
can't
say,
but
that's
going
so
that's,
probably
a
messy
and
complete
enough
summary
of
where
we
are
for
those
of
you
just
joining
us
but
feel
free
to
ask
any
questions
or
join
us
if
you
wanted
to
take
over
from
there.
A: Can I ask a couple of questions about where we think we're going? I'm just trying to get a high-level view of... so there's, I mean, there's the work that needs to happen on the ceph NVMe-oF repo, which is like the API endpoint to configure all the stuff and set it up.

A: There's some work on the SPDK side to move to a single interface, but I'm wondering what we think is going to need to happen on the librbd side, because I know that there is this big transition that just happened...

A: ...moving to a reactor model for all the I/O, but it's using boost::asio, and one of the things that Jason mentioned was that it would be great if we could swap out the boost::asio reactor for the Seastar reactor, so that it could run sort of more tightly integrated with the rest of SPDK and also reuse the Seastar implementation of the messenger that we're using for Crimson, and so on.

A: I just don't have a sense of whether that's something that can be done or not, having not worked with boost::asio or Seastar yet. Is that something that we're thinking about?
L: I think that's something we thought about thinking about after we got a prototype working, so we could see... yeah, that all makes sense. It came up, at least in general terms. I personally have little knowledge about what's going on in librbd in that sense; the last time I looked in there closely was the RWL stuff. So, okay, so ultimately that would make sense.

L: I mean, where I think we're going is: there is a requirements page, by the way, "RBD NVMe-oF requirements"; there's a link to it from the main one, probably in one of the early meetings. So that's sort of our statement of what we all collectively want, and you...

L: In general terms, it's for there to be a package that definitely works with a released version of Ceph and gets you this capability, so people can just, you know, use this, yeah. Obviously, if librbd evolved, the RBD bdev for SPDK would need to evolve with it.

L: There seems to have been, in recent meetings, mention of using the kernel NVMe over Fabrics target, which is not something that interests me greatly; it's not really aligned with our specific goals for this. You know, it may make architectural sense for some people.

L: Now, that splits the way the iSCSI gateway does. On the other hand, in my experience with that stuff, daemons that steer the kernel on hosts are always kind of a nightmare, because it's like a shared resource and things go wrong. But that plays into what somebody's about to bring up here, which is multiple subsystems versus LUN masking in one subsystem: that SPDK cannot support multiple subsystems per target and the kernel can. So, does that answer your question about where we're going?

L: In a nutshell, you know, in the sort of object naming scheme in NVMe over Fabrics, which, unless I have the spec in front of me, I'm liable to misquote, so...
M: Let me share my screen, because, yeah, the nomenclature is always a bit confusing. Can you see my screen?

M: Right, okay, so this is just basically from the spec; I took this from some other slides I did for an NVMe spec summary, just for all the naming stuff. And essentially, right, you have a controller; a controller has namespaces underneath it, and the namespaces are the actual data, right?

M: So this is actually what you see in /dev: nvme0n1 would be namespace 1, right, n2 would be namespace 2, and so on, and all of these are part of a subsystem. And, by the way, it's just a fun thing in Linux: the naming scheme is actually such that the 0 in nvme0 is controller 0 and not subsystem 0.

M: So there's actually a second naming scheme they have for multipath, which includes all three. So then you have a controller ID, you have a subsystem ID, and you have a namespace ID.

M: That means we also want it to be able to handle... well, to do this with multipath in NVMe, and how this works is essentially that two controllers share a namespace, but for the namespace to be shared it needs to be in the same subsystem, and the controllers obviously can be different targets.

M: Okay, then one thing that also comes up when talking about this, and about resources, is that you need an admin queue pair and an I/O queue pair to talk to a controller, and what this means in terms of NVMe over Fabrics is that you actually need, per definition from the standard (I just copied them out here), a single connection for each queue you create.
M: Right, right, so, yeah, I mean, maybe you can view it that way, but what it means is you need resources for it, right? Yes, yes, for that. And this is kind of how the mapping works, right: an RDMA queue pair maps to an I/O or admin queue, and a TCP connection to an admin or I/O queue pair. And what this means is... this is basically what we want to build, right? So you have some Ceph nodes here in red.

M: Also, that's not perfectly correct, but I think it's okay for this picture. And then you have a shared namespace, and the namespace basically directly reflects an RBD image, right. And what this means is, if you have an initiator on this side, you need at least four connections just to have multipathing to a single RBD image, right, because you have one, two, three, four queues. And why I mention this is because that is important...

M: ...if we talk about the subsystem-versus-namespace model in terms of the resources you need, right. And essentially this is what I wrote down, very simple, probably totally incomplete, but what you want is probably: you want an initiator that cryptographically authenticates its identity, right, so you probably want to have both IPsec on the fabric and some NVMe in-band authentication, and obviously the initiator should only be able to access and see those volumes which it has access to, right. And one thing, right, right...

M: So, and this is how... so I just pictured the two models that we discussed before. So this would be the subsystem model, which means, essentially, you do the isolation at the subsystem level. You have multiple subsystems; I only drew one Ceph node in here, but, I mean, you get the picture with multiples and so on. So essentially you connect to this SPDK target...

M: ...you authenticate yourself, and then you can access subsystem O, for instance, in this case, and you can list all the namespaces that are in there, done, right. And then another initiator can connect, too, authenticate itself, get access to subsystem P, and it sees its namespace, right. So that's the subsystem model, as I said before. So the connections, the queues, are basically on the basis of a subsystem: you need an admin and an I/O queue pair per subsystem.
M: So in this picture, essentially, you only have, let's say, one subsystem (you of course can still have multiple, but for simplicity, let's say one subsystem), and then, for lack of a better term, you create kind of virtual subsystems, so let's call them dynamic subsystems, whatever you like to call them, which basically lets you connect to the subsystem, but when you authenticate yourself with the subsystem and some client ID, you only see the namespaces you are allowed to see, right. So you cannot enumerate all the namespaces in this subsystem.

M: Right, exactly, so from the initiator's point of view it looks like it only has this one namespace in it, so you don't see a difference in connecting to any other subsystem, right. So, yeah, and then I wrote down a few points that just came to my mind while creating these. So, obviously, what we've already talked about: the subsystem model means you need a lot of connections, at least four...

M: ...if you do multipathing, per subsystem, so that could become a scaling issue depending on how many volumes you want to support and whether you want to isolate every one of them in a subsystem. And basically, then, you do the authentication on a subsystem level, whereas in the namespace model you do the authentication on... basically you need one additional ID: you have a client ID and a subsystem ID, or something like that.

M: Yes, that is correct: in SPDK there's a host... you can filter by a host, or an allowed-host list, right. So it takes the host NQN and checks whether it's in the allowed-host list. That you can do. Obviously it's not cryptographic; it's just a simple check. I mean, even if you do IPsec, it's not super secure, so...
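To ground the subsystem model in something concrete, here is roughly how it maps onto SPDK's existing NVMe-oF RPCs, including the plain allowed-host check just mentioned; the NQNs, addresses, and bdev names are invented, and option spellings vary by SPDK version:

```sh
scripts/rpc.py nvmf_create_transport -t TCP

# one subsystem per tenant or volume group (hosts are not allowed by default)
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK0001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Ceph_rbd_image1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 192.168.0.10 -s 4420

# the non-cryptographic "masking" described above: a simple allowed-host NQN list
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2014-08.org.nvmexpress:uuid:11111111-2222-3333-4444-555555555555
```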
M: IBM... I'm not sure. I mean, we have access; we got access to the working groups. Okay, you mean the security working group?

L: ...which is open only to its members, and you can't disclose what's going on in there until they release a spec, or one of the charter members does a bit of a press release about it, which has happened. But that works to our advantage, because whatever you can find you can talk about in public, what the other companies said, but we can't. So if you want to know exactly what's going on with authentication, you currently really need to get access to that TWG.

M: Right, since the current specification talks about authentication, but it's basically just a placeholder; it doesn't have anything specific in there. It's really, really weak, right? I think that will change, yeah. It's basically not done, yeah, yeah. That's essentially it. So, a few points I made here: okay, yes, you basically can share the connections, right, that's great.

M: You basically inherit the properties of the controller you connect to, so maximum transfer size, arbitration, and these kinds of things, but that's also something you could basically get around; allowing you to connect to different sets of controllers would be possible, for instance. And one thing I think you have to keep in mind here is: let's say you have a virtualized environment, you have a VM here and a VM here.

M: You have a VM here and you move it to here, right; you would need to basically remove the namespace here and create a new namespace in this virtual subsystem here, right. So, if you use SmartNICs, you probably want your SmartNIC to support something like the namespace-attribute-changed asynchronous event notification, to handle these kinds of events, right.

L: Yeah, the SPDK stuff already handles that: if you add a namespace to a target, the initiator gets that notification, and a bdev for the namespace just appears right in the client, and the other way around.
L: Okay, so your focus is on... I've got to say, multipathing in NVMe over Fabrics is called ANA, for asymmetric namespace access. I can't remember exactly why they chose it.

L: And the sort of goal here for the group... well, this is more coming from the IBM side; Intel is not specifically interested in the HA case. This is, you know, just another multi-node HA setup; from us old storage veterans' point of view it should exist, but, you know, somebody will do it. I've got to point out that ADN, the thing that actually brought me here, does not require HA. So in an ADN situation...

L: ...with the connections, you'll only have one connection from each host to each gateway, but you'll want to be connecting to a lot of gateways. So that was not addressed in any of your slides, not surprisingly, because you're focused on the HA thing. If I had been prepared for this, I could have had a companion slide here that shows that other situation. If you look at the...

L: So in that situation, we've got a gateway in, ideally, every OSD node, when there's some reasonable number of them, and we don't have to do this multipathing, because all of those are separate subsystems with separate NQNs. That's another thing to observe here: the subsystem actually has an NQN, and if you split this over hosts, then all the properties have to be... so you have to synchronize, you know, the actual NSIDs, the state of everything; it has to be...

L: ...you know, tightly coupled, just like any other HA sort of heartbeat situation, pairwise. In the ADN case it's all much more loosely coupled, but you have a lot more connections. And of course you need an initiator that understands the hints.
M: Right, but I mean, from most of what's out there, from the experience that we had, you probably can't do line rate on a SmartNIC's ARM side, right; you need something more powerful. Yeah, maybe you can do this with an FPGA SmartNIC, but certainly not with, for instance, a Mellanox BlueField or something like that. Not at line speed, at least.

L: So that's our goal: you know, we want all that packed into the host SmartNIC, and then HA becomes orthogonal to this. There are reasons that you might want all your paths to be... maybe you can't take the latency hit for a failed path that now has to... whatever. So, just pointing out that distinction. So this makes (for Sage's benefit here, about what we are doing and why) the solution space have another dimension.

M: Just maybe one thing to add is that, in the beginning, I guess you're fine with not having active-active; we can do active-passive, and then maybe the problem is a bit easier to start with.

L: All right, so we're just sort of punting that for now. So one view of that is... well, one of the things you might need that for is reservations, and, well, you can implement reservations at the NVMe over Fabrics level and not actually hold the lock on the image, if that actually works for the feature set you're using. So, anyway, we have two dimensions; that's why the gateway requirements and the management layer requirements might seem vague.

L: It's because they're trying to include both of these use cases, in all the situations. You might say: I've got an HA pair of gateways for these images for these hosts, and I've got a fleet of ADN gateways that these other hosts all see, and they don't use HA, but that should be manageable.
A: Can you make an attachment... not on the etherpad, no. If you just put them on, like, Google Slides or something and make it public, you can put the URL on.

L: Yeah, so some of the... obviously some SPDK things it makes sense for, you know, Intel to be doing, like: let's fix the RBD bdev; let's figure out, you know, what is the best path towards some kind of LUN masking. And I'm glad Jonas had these slides, because that was the thing I was feeling like I needed to explain: the connection count is a problem, and it really hits ADN. If we had...

L: ...you know, basically, if you build your hardware for some reasonable number of QPs, like 128, it may fall over dead if you wanted to have 4,000 of them; some of them just can't get there. So you really gotta pay attention to that.

L: If you scroll back through the action items, there has been a request, or an offer, or some combination, to get an SPDK person from Intel to attend this meeting occasionally, like maybe next week, and tell us about what obstacles we're likely to hit trying to do LUN masking, you know, the namespace-set type thing versus multiple subsystems. And I have provisional buy-in to do that; we just need to commit to a date. And to that end, I was going to encourage us all to review the list of SPDK issues in the RBD NVMe-oF target etherpad, which I think has already been...
L: ...so this one basically started out as an explanation of what the SPDK target would look like; I just walked through how to configure it in the PoC we did. But it also has, at the top, a list of issues or questions about SPDK. So if we could all review that list and make sure everything you're wondering about SPDK, every question we think we have outstanding that isn't answered, is on that list. That, for me, is in light blue under "needed enhancements".

L: Then I can invite Jim or Ben or both to come and speak to that next week, or, if they're not available, you know, I'll let you know which day they can actually make it. Is that good enough? Will that address your...

A: I think so. I mean, the big question in my mind is how the Seastar reactor and the boost::asio reactor relate, and whether code written for the boost::asio reactor can be adapted or reused wholesale.

L: Yeah, that's something that might surprise them. I don't know who the right person to ask about that is.

A: Yeah, yeah, no, no, but I just want to make sure... I mean, if I have to look into the future, I imagine what's going to happen is we're going to get all these pieces glued together, and we're going to get a gateway that we can stand up, and we're going to be disappointed by the performance, and the way forward...?

C: Right now I'm doing the maintainer stuff that Jason was doing, for a while at least, and we'll be learning things like that. But Jason did mention to me trying to plug the Crimson messenger in front of RBD; we didn't talk about the technical bits of that at all, though, and I have no idea about your asio question.

L: But as far as the prototype NVMe-oF gateway code that he put in the git repo goes, that's a thing that needs to move forward too. Who can pick that up?

L: So that's the state, then. Yep, and I guess we'll see next week, or over time. Oh yeah, way over, sorry. Yeah, cool.