From YouTube: Ceph Orchestrator Meeting 2021-05-11
A
...modules, lots of different restrictions, lots of different requests, and we are here to find a solution, a way forward for OpenStack, right?
A
So, where are we? We have the dashboard module, which is great as it is, but it's supposed to make use of the new NFS module.
B
ASAP, I guess we can start with the configuration tracker that we have, because it's mostly related to the use case we are going to cover with Manila, and the reason why the Manila team needs this kind of configuration for Ganesha, which is supposed to work the same way we did with ceph-ansible, so I tested the driver. Basically, the other use case is the independent Ganesha deployment with cephadm.
B
In that case, we are already able to create this big JSON file containing the Ganesha conf and to deploy this component, but we have different pieces in OpenStack, like Pacemaker, and I think right now it's problematic moving from this model to the new HA NFS approach provided by cephadm.
B
According to the plan the Manila team has upstream, but for now the idea and the purpose of these two trackers that we have is to reach feature parity with what we did with ceph-ansible. So I don't know if Tom or Victoria want to explain the use case, the reason for this kind of configuration, these requirements for cephadm, and the potential problems. Sure.
C
I can make a couple of remarks. When Ganesha was introduced as a gateway for OpenStack to CephFS, there was no independent HA support, so we developed Pacemaker support, and the way it was done was to integrate it with the Pacemaker control that OpenStack does. This also provided a virtual IP on a separate network from the Ceph public network, which was essential for a large number of our use cases where tenants are not trusted and should not have access to the storage infrastructure.
C
We never wanted this to be the long-term solution. Back as late as 2019, we didn't know about cephadm, but we did know about Rook-Ceph, and we were imagining a future where a Ceph cluster would be, I'll use the word, loosely orchestrated by Rook-Ceph, and the consumers wouldn't be containers per se, but would be OpenStack clients. We had a little bit of momentum forward on that, but for a variety of reasons, which we don't really need to go into here, that did not materialize.
C
For what downstream we would call our RHCS, the emphasis was entirely on container storage, so we have been kind of sitting with the original interim architecture, and it's not that we want to sit on it forever. So when we ask for backwards compatibility here, we do want to move forward, but we need to do something that handles what we have today and will handle an upgrade path and so on. So it was a very pleasant surprise to me to find, because we had not been particularly synchronized, that cephadm without Rook-Ceph will have roughly the same features that we were desiring.
C
So we do need to move to that. By that, what I mean is: not only will it have Jeff Layton's NFS HA work, which may not be entirely his, since I see some other people working on it too, which we were aware of and tracking out of the corner of our eye, but also something that, I think Sebastian phrased it this way, is a Kubernetes ingress without Kubernetes, which is a critical feature for us. Then how we expose it into OpenStack networking and so on has to get worked out, but the critical feature is that you have HA that will survive a failure of nodes, not just restart a process within a node when there's a failure, and also that there's a VIP in front of it that we can expose to OpenStack, and that it be separate from the Ceph public network, on what we call storage networks and so on. So that's, as I understand it, work that's happening fairly rapidly and it's in progress. That's all delightful.
C
We want to align with that and move to it, but in the meantime we have to support what we have now with ceph-ansible. In that model, irrespective of whether the Ceph cluster is deployed by OpenStack or deployed independently of OpenStack and we point to it from OpenStack, we run the Ganesha daemon under OpenStack. OpenStack is responsible for its life cycle, responsible for the service VIP, for migrating that VIP between nodes in the case of failure and stuff like that. So I think, unless there's some miracle I don't understand, we have to support that model for OSP 17.0 GA, even if we get creative with minor releases and backports and so on. I don't know, but that's my working assumption.
C
We do stuff in Manila itself, not just in TripleO, for the HA model that we have now, and all the feature development for the upstream release was supposed to be done in the last cycle. That's when, for example, we cut over to the CephFS subvolumes. So we have been doing work to try to stay paced, but we were not doing work in that release to adapt to this model.
C
So that's what I'm running on, but I just want to register: first of all, it's not for lack of appreciation of the new stuff that you're doing that we're saying, look, we need a backwards compatibility path. We understand it's ugly and we'd like to get rid of it, but this one is reasonable.
D
My sense is that there are a couple of things to work out. One, just to make sure that we understand exactly what the long-term view is, where we're going to end up, and that everything is going to fit together properly; we should just sort that out and make sure everything is okay there. Then we want the very minimum, the absolute minimum, that we can do to make the short-term situation work. And then we need some sort of migration path from the short term to the long term.
C
Right, that's exactly right, and I haven't thought through exactly what number three would be. That's part of why I don't see it as, you know, some kind of miracle enabling us to remove the compatibility path in the meantime: we have to actually work that through and test it and everything else. But yes, I agree with all three points.
D
So can we talk about what the long-term view is really quick, just to make sure it makes sense? So the manager NFS module has the ceph nfs export commands to, like, create and delete exports. Is that something that you would expect Manila to be consuming, or would it be directly reading and writing the export objects like it does now?
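For reference, the mgr/nfs commands being discussed look roughly like the following sketch; the cluster id, filesystem name, and pseudo-path are illustrative, and the exact syntax has shifted between Ceph releases:

```shell
# Create a cephadm-managed NFS-Ganesha cluster (names are example values).
ceph nfs cluster create cephfs mynfs "2 host1 host2"

# Create an export of CephFS filesystem "myfs" at pseudo-path /share1.
ceph nfs export create cephfs myfs mynfs /share1

# List and inspect exports, then remove one.
ceph nfs export ls mynfs
ceph nfs export get mynfs /share1
ceph nfs export delete mynfs /share1
```

Manila consuming these commands (or the equivalent mgr interface), rather than reading and writing the export objects directly, is the question being posed here.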
C
I think Manila needs... Manila allows access to clients, and when it allows access to clients today it uses the D-Bus mechanisms, but it effectively reconfigures Ganesha and gives it something like a SIGHUP to get it to, not shut down, but load the export and run with it. So I think we need some mechanism like that, because the Manila user, the owner of a share, needs to be able to say, let me expose it to Sage's VM.
C
Yeah, I mean, in some sense we're happy to do whatever will achieve the same functionality. The original vision we were talking about a couple of years ago appears to be more or less what you're implementing now, which is: it will run as part of the Ceph cluster, and we will interact with it via an API, just the way we would interact with an app or something like that, and that's just fine.
C
It just needs to be a dynamic export update, and it needs to not involve any service downtime or issues in the data path, so we can't shut down the process and restart it. We had a brief conversation with Varsha, and she said she thinks that's feasible. The original motivation for the D-Bus mechanism was that there was not, at the time at least, a feasible alternative to it for the dynamic update, as we understood it.
D
Okay, and I guess the last part was: when cephadm is managing the NFS daemon and the ingress service, the only real parameters for ingress are two: the placement, like which set of nodes you want it to be deployed on (you want it to go on, like, two or three of them or whatever for HA purposes), and then what virtual IP and port you want it to bind to. Is that sufficient?
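Those two knobs map onto the cephadm ingress service spec; a sketch, where the host names, service id, ports, and virtual IP are all illustrative values:

```shell
# Put an HA ingress (haproxy + keepalived) in front of an existing
# nfs.mynfs service; all names and addresses below are examples.
cat <<EOF | ceph orch apply -i -
service_type: ingress
service_id: nfs.mynfs
placement:
  hosts:
    - host1
    - host2
spec:
  backend_service: nfs.mynfs
  frontend_port: 2049
  monitor_port: 9049
  virtual_ip: 10.0.0.100/24
EOF
```

The placement section answers "which nodes", and virtual_ip (plus frontend_port) answers "what address it binds to", which is essentially the pair of parameters being discussed.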
C
Yes,
I
think
so,
and
I
will
rely
partly
on
john
and
francesco
and
so
on
in
terms
of
you
know,
but
if
openstack
has
isolated
networks
for
storage
which
connects
to
the
staff,
public
and
storage
nfs
and
that's
already
deployed
in
openstack,
and
it's
a
matter
of
wiring
by
a
vlan
or
whatever
to
the
in
the
cluster
that
should
just
work.
I
think
I'll
add
a
the
stretch
goal
here.
C
Originally
we
wanted
to
be
able
to
deploy
multiple
ganesha
clusters
and
put
them
on
different
dips,
and
that
would
be
a
next
yet
a
further
step
and
I've
not
thought
through
how
we
we
were
going
to
use
courier
because
it
was
kubernetes,
but
you
know
we
would
eventually
have.
We
want
to
not
paint
ourselves
in
a
corner
relative
to
moving
towards
that
kind
of
picture,
as
it
becomes
possible.
C
And then, in terms of the parameter that controls the scheduling or placement of the Ganesha servers, we're always going to want more than one as a minimum requirement, just because we have to deploy with HA. And in some sense, correct me John, Francesco, Julio, from our point of view we would just as soon run them, if there's no downside to it, on as many nodes as reasonable, because we want to scale out. Whether that actually makes sense in terms of running lots of Ganeshas pointing through the same MDS or something like that, I don't know, but I'm just saying we don't really care, except that ideally we'll scale out as well as possible, because one of the disadvantages of our current deployment is that we're concentrating everything on these controller nodes. Sorry, John.
F
That's no problem. I mean, currently everything we're doing, where we have TripleO's composable services, which let you put different services on different nodes any way you want, we're basically translating that with TripleO into a cephadm spec, right? So we're going to have the ability to put a node in the service or whatever via that spec, and that's totally fine, including connecting the appropriate network, and the user can define as many networks as they want as well.
F
So
that
would
I
I
would
think
we
would
do
the
same
thing
to
achieve
that
that
first
goal
and
then
for
the
stretch
goal.
That's
trickier,
but
it
doesn't
sound
like
staff.
Adm
has
a
limitation
on
the
second
goal,
it's
more
of
a
how
to
say
that
in
triple
o,
but
so
far
so
good.
Well,.
C
Well, John, actually the vision, and we can talk about this, was that for that stretch goal TripleO would not actually control the life cycle of these NFS clusters. It would be Manila directly, so it's dynamic: you would dynamically spin them up as you get a new tenant.
C
So
that
way
we
could
and
then
we
would
network
them
back
to
the
tenants
directly
that
that
was
the
vision
and
we
we
can
talk
about
it.
We
can
even
shift
the
goal
if
we
need
to,
but
I
mean
I'm
just
for
historical
purposes.
At
least
you
know
you
can
do
that
kind
of
thing
with
say
an
external
netapp
appliance
or
a
delhi
alliance.
C
D
D
In this case, when cephadm is running Ganesha, it's creating containers on the hosts, but if you want the Ganesha daemon to run in a virtual machine somewhere, that's a whole other thing.
C
Perhaps
I
shouldn't
use
the
no.
I
didn't
actually
mean
that
I
think
I
said
you
get
your
own
virtual
server,
but
all
I
mean
by
that
is
you
get
your
own
ganesha
clock?
Each
tenant
gets
a
dedicated,
ganesha
cluster
and
networking
that's
isolated
directly
to
those
tenants,
yeah
consuming
vms,
so
that
works.
D
C
C
Right, and I do want us to give up management of the Ganesha life cycle, as a goal; that's where we want to go. The problem, the hitch, is that in the short term I think we need to keep it for a little bit, or keep a way to do it still. I'm shifting subject and not trying to steer us back to the short-term point, but that's the problem.
A
Yeah, cephadm definitely controls the deployment, but it's not TripleO that instructs cephadm, it's Manila directly that instructs cephadm, am I right? If I have understood it correctly.

C
Okay, does that make sense? Am I being clear? Yeah, I think so. It's not actually spawning the Ganesha processes; Manila is not doing it, TripleO is not doing it. But at a minimum, Manila knows that it needs to be done: we have a new OpenStack tenant wanting shares, so Manila will trigger it.

F
All right, so instead of TripleO calling cephadm, Manila is, and TripleO's role is reduced. But that's the future goal, and I wonder how we'll do that. It sounds like in theory cephadm should be able to support that; I just worry about the tenant boundaries and other OpenStack stuff, which we can talk about.
C
Okay, I mentioned this further goal not because everything about it needs to be designed or understood now, but because I do think that if we keep it in mind, we can try to do things that don't foreclose it: the future goal of Manila asking cephadm to bring up a new Ganesha cluster for a new tenant.
F
We were talking about short term and then long term, and now it seems we have, like, extra long term, which sounds more like what you're talking about. I think what we want to worry about is getting off the short-term solution and getting onto using cephadm's NFS, deployed the way cephadm would like to do it, the new nice way. And then this other matter of having Manila versus TripleO calling cephadm is like a third step.
D
Shifting back to the other end of the spectrum, just to make sure I understand what happens today: currently Manila orchestrates the Ganeshas via Pacemaker, and Manila manages the RADOS objects that have all the Ganesha configs in them, and Manila reloads...
C
Via Pacemaker. TripleO, really Pacemaker, controls the life cycle of the Ganesha daemons today. Not the exports, but the life cycle: the initial startup, the migration, the VIP, the service interface. Manila does not control that today, Pacemaker does, and Pacemaker is deployed by TripleO at deployment time.
B
So basically, ceph-ansible, for both internal Ceph and external Ganesha, just creates the first configuration file according to the parameters specified, and then...
C
Right, it makes it look like what was illustrated a moment ago in that tracker or whatever, and it has no exports in it, and then it gives Pacemaker control of running Ganesha. Okay.

C
It manages the... not really, and I think I saw that Ramana joined us, so he can correct me if I misspeak, but it manages the exports via D-Bus.
C
Okay, does what I say make sense? So it's not using RADOS watch/notify or something like that; it's using D-Bus.
C
It's using D-Bus. It runs a D-Bus command that refers to a local file and pulls the contents out of the file.
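The mechanism described here sounds like Ganesha's exportmgr D-Bus interface; a sketch of such a call, where the config file path and export id are illustrative:

```shell
# Ask a running nfs-ganesha daemon to load one export block from a
# local config file, without restarting the process (example values).
dbus-send --system --print-reply \
  --dest=org.ganesha.nfsd \
  /org/ganesha/nfsd/ExportMgr \
  org.ganesha.nfsd.exportmgr.AddExport \
  string:/etc/ganesha/export.d/share-101.conf \
  string:'EXPORT(Export_Id=101)'
```

The second string selects which EXPORT block in the referenced file to load, which matches the "refers to a local file, pulls the contents out of the file" description above.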
C
We don't use any NFSv3 either. So I don't have the history on that, and I'm not disagreeing with you, but empirically the solution came to us from, I'd say, you folks, but you know, a combination of Ganesha and Ceph people.
C
We run with NFSv4.1 only, 4.1 and up.
D
Okay, so just thinking here, the minimum viable short-term fix for this, if we go from ceph-ansible to cephadm, is just to fill the gap that ceph-ansible filled, right? Where TripleO could even, like, deploy Ganeshas the way that ceph-ansible did, and then nothing else would have to change. That might be the...
B
Yeah, this could be a solution. I think creating a consistent configuration file for Ganesha is easy from the TripleO point of view, because we already collect the operator's input, so we have the variables set, and creating the systemd unit the same way ceph-ansible did is the fastest solution right now for the short term, I guess. I think the cephadm team should double-check and verify that this kind of work is consistent with your vision, at least for the short term we're talking about, but I don't know, that's all. I think this could be enough to have this component working the same way as before and maintaining this backward compatibility.
D
It seems like we're really close to having everything in place for the long term. On the Ceph side, we have a little bit of work to do in the manager NFS module, and that's basically it. And then on the Manila side, there's a whole bunch of, hopefully, code simplification that would just move it all over to the APIs to add and remove exports and to create and destroy clusters, and then you should have, like, this much code go down to hopefully this much code. So my sense is that we should try to get there as quickly as possible, but we need some sort of transition.
D
Basically, right, we need to have something that finds all those little objects and puts them into the new thing. They'll probably be in a totally different pool, so probably just having a one-shot thing that copies them all over as part of, like, the upgrade path is the way to go.
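A one-shot migration like that could be as simple as copying each RADOS object from the old pool into the pool and namespace the new module expects; all pool and namespace names in this sketch are purely illustrative:

```shell
# Copy every Ganesha export/config object from the old pool into the
# new pool/namespace (every name below is an example value).
OLD_POOL=manila_data
NEW_POOL=nfs-ganesha
NS=mynfs

rados -p "$OLD_POOL" ls | while read -r obj; do
  rados -p "$OLD_POOL" get "$obj" /tmp/obj.bin
  rados -p "$NEW_POOL" -N "$NS" put "$obj" /tmp/obj.bin
done
```

Since it only reads from the old pool, a tool like this can run repeatedly and safely during the upgrade window.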
C
And we do rediscover the exports currently when Manila restarts, so that would be a natural place to do the upgrade. It's not that it's conceptually daunting to us, there are just some logistics: OSP moves slowly, moving things upstream and then downstream takes time, and so on. So as much as I would love to wave my hands and say we can just skip this...

A
The thing is, if we deploy Ganesha manually, independent of cephadm, like it is right now, we need to configure Ganesha in a way that is able to...

B
I think in theory we already have some code in place in TripleO that is able to deploy several Ceph daemons, and NFS is one of them. So having a couple of different tasks that configure the Ganesha daemons the same way ceph-ansible did, I think it's okay for TripleO, and then we can start working on the long-term solution that should be the Manila goal for the next cycle.
B
And yeah, I said that TripleO could do that because, analyzing this kind of scenario, I saw many problems in the cephadm context. Even assuming we can solve the network binding and the configuration file, we still have cephadm deploying and owning this component, and I'm not sure that Pacemaker is happy when a different orchestration tool is owning the units. I don't know, but I think this is the most viable solution for the short term.
F
Everything you have ready today is for us to build on. We're still going to have our customers who are using it today, who have to upgrade, but we're going to need some playbook or something that will move people from the old way to the new way, just for existing customers. So yeah, this is just an intermediate step so we can get all the pieces in place and be off of ceph-ansible for our, you know, upcoming release.
D
I
guess
the
question,
in
my
mind,
is
if,
if
we
have,
you
know
tomorrow
or
whatever
a
week
in
a
week
from
now,
if
we
had
all
the
pieces
in
place
for
the
long-term
stuff,
I
guess
you
still
have
all
the
manila
changes
to
make
to
actually
utilize
that
and
the
migration
code
to
do.
Is
there
any
world
in
which.
C
B
I think it's necessary. I think there is an impact on the Manila code base and it requires time, one or two cycles I think, I don't know, but I think the short-term solution is necessary. Okay, we cannot skip it; we can't escape this.
A
Okay,
but,
but
to
be
clear,
triple
o
ansible
isn't
going
to
deploy
ganesha
how
how
self
ansible
did
and
not
how
the
fidm
does.
D
A
B
Right, the goal is having, at the end of this execution, the same pieces provided by ceph-ansible. So you have a different path, because it's a different code base of course, which you can of course validate, but the goal is the same. So you end up having the same thing ceph-ansible produces. Perfect.
C
Right, and this will give us a position from which we can work on the migration path without ceph-ansible in play. So we can go on and say: OSP 17 has no more ceph-ansible dependency, we're just using cephadm instead of ceph-ansible, but we're also using a little playbook in TripleO or whatever that got pulled out of ceph-ansible in the time for the interim.
B
Yeah, I think we should handle this part together with the ceph-ansible team, like we did in the past for the rolling update that we had in TripleO, I don't know, without skipping the Ganesha part, but handling it in the proper way for people. We already did these kinds of things. Julio, go ahead.
H
B
I'm not familiar with the adoption playbook and the adoption steps, but probably we can think about skipping the Ganesha part. I think the rolling update playbook is also there to change containers, and I don't know the details; I'm not sure we can just skip the playbook, but we can handle it the same way we did with the FileStore-to-BlueStore migration or other kinds of infrastructure playbooks.
D
Okay, well, I mean, I'm fuzzy about how exactly this transition path between ceph-ansible and tripleo-ansible works, and how the ownership of the existing deployed Ganesha daemons migrates, but I think that's something to work out on the Ansible side.
C
We'll be working in upstream Manila, independent of TripleO, independent of ceph-ansible, just to make sure that we can update exports via cephadm and even play with the life cycle management from Manila if we want. We're very excited about it, so we will try to press ahead on that front, and if there's some kind of impedance mismatch or something we haven't understood, we should be able to discover that relatively early.
D
Yeah, I mean, I think it's something that you should think about, because I have a feeling that the Manila changes to use the export API are going to make the driver vastly simpler, and so it might be less work if you can do it quickly and avoid the other work.
C
I keep waking up in the middle of the night these last couple of weeks, right, so we're motivated to do that. I just don't think it's responsible at this point to leave people thinking that we will be able to shortcut it. You know, we're going to start upstream.
C
We would need to talk to product managers about the backports and stuff, but we can't do that until we have something working end to end and understood. But yes, I'm all for simplifying and cutting out throwaway code if there's a way we can do that. I just don't want us to leave this meeting saying it's our plan of record.
C
Yeah, that's the battle plan, pardon the military analogy here, but let's say I don't think it's a responsible plan; it's a stretch thing we can try to achieve.
F
So the TripleO team could go with the stopgap, and the Manila team can go with the new way, and no work will be thrown away. If we're fortunate enough to get to the new way, we can just do it, but if it doesn't happen, we have the insurance. Yeah, that sounds good.

A
As far as I understood from Varsha, the NFS module actually supports more than what's possible to do when calling export create. So feature-wise it's already capable of doing more, including the client IPs and stuff, but it's not user-friendly from the NFS command, yeah.

A
It's just that instead of having one command, you need to do two commands, right? The first one is nfs export create, and the second one is nfs export update -i and then the JSON block stuff.
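That two-step flow would look roughly like this sketch; the filesystem name, cluster id, pseudo-path, and client address are illustrative, and the exact update subcommand has varied between releases:

```shell
# Step 1: create the export with the defaults (example names).
ceph nfs export create cephfs myfs mynfs /share1

# Step 2: fetch the export JSON, edit it (e.g. restrict it to specific
# client addresses), and re-apply it with the update command.
ceph nfs export get mynfs /share1 > export.json
# ... edit export.json, for instance adding a "clients" list ...
ceph nfs export update -i export.json
```

The complaint above is exactly this: the richer options (per-client access and so on) are only reachable through the second, JSON-based command.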
C
An ugly way and then a simpler way too. I mean, it would be desirable to have a nice, simple interface that's as nice as the rest of the stuff you're developing, but that doesn't mean we can't do it with a couple of calls and making some JSON or something in the short run.
G
This is one thing I think we can talk about tomorrow.
D
Thank
you
actually,
one
last
thing
just
that
one
that
one
ticket
that
came
up
on
chat.
Am
I
right
in
understanding
it's
just
it's
just
that
the
port
support
change
right.
That
was
all
that
was
what
happened.
A
If it's just the port change, yeah, I guess it's too late now, right, but for future updates we need to kind of think about it.
D
Yeah, I don't actually remember changing the default port. I just know that the default port now is 80 or 443, and I guess it used to be 7280, so yeah, that was an accident. But we should probably make sure we understand what version he was upgrading from, and we should have him dump his service spec so we can see what it used to be, and then just add something to the upgrade notes.
A
Yeah, I think I saw the traceback, and in the tracker I looked for the same traceback. I think it retries a few times; I'm seeing the traceback maybe five to ten times, and then it suddenly disappears and everything works, at least in Teuthology, even though the RGW might be listening on a different port.