From YouTube: Ceph Orchestrator Meeting 2021-03-23

A: Okay, well, we can start. The first thing I wanted to ask you about was the osd services pull request.
C: Yes, I saw it, okay, and now I understand what the generality is. Okay, so let's go to have only one osd service for all the unmanaged osd daemons, and let's put all these daemons into this service. Okay, and the thing that I'm going to do is, when we try to delete this service, put a message reporting that this is not possible, that you need to remove this and this osd in order to do that.
C: Okay, and I think it's clear now how to protect it. So thank you for the suggestion.
A: And then, should we allow you to delete the others? If you have osd.foo, like for some other drive group, should we allow that to be deleted if there are still osds?
A: Okay, it'll just go away, but that means that that service can only be deleted...
C: ...the service, without deleting the osds, but I think that is worse, okay, because, yeah, this is going to cause confusion. Okay, so yeah, I think that's the way. The way to protect it is just to show which services cannot be deleted until you manually delete the osds, and the same behavior for both kinds of services.
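As a sketch of the single catch-all spec for unmanaged OSDs being discussed here (the service_id, the `unmanaged` layout, and the error behavior in the comments are illustrative assumptions, not the merged design):

```yaml
# Hypothetical service spec grouping all OSDs that don't belong to any
# drive group. `unmanaged: true` tells cephadm not to create or remove
# daemons itself; the daemons were created or adopted manually.
service_type: osd
service_id: unmanaged
unmanaged: true
# Attempting `ceph orch rm osd.unmanaged` would then be rejected with a
# message telling the user to remove the individual OSDs first,
# e.g. via `ceph orch osd rm <osd-id>`.
```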
A: Okay, so there was another pull request somewhere in here that was changing it so that, when we run ceph-volume, we use the same container version as the manager.
A: All right, we lost Sebastian; we'll ask when he gets back. Yeah, if nothing else, the current code retries without that batch filter argument or whatever, so it works, but it still spams the log with all the error messages, which are confusing during an upgrade.
A: So it's pretty nice to have the other one in there. Okay, and then the last thing I had on my list was: I went to go look at the rgw service, both because I'd never looked at it before and didn't know how it worked, and I wanted to make sure that it worked with the new ports stuff.
A: I opened up pull requests to fix the ports thing, so now the haproxy config is adjusted so it binds to the right port based on whatever the new daemon description is. Now that cephadm is tracking what ports the rgws bind to, and you can have multiple per host, it'll properly populate that.
A: So that's the easy part, but I think there's a larger issue with how the service is designed. Do you know who wrote that initially? Where did that come from?
A: Okay, okay, but I had a couple concerns about it. The thing that bothers me is that it takes the haproxy part and the keepalived part and smashes them together into one service, and it seems like there are instances where you'd want to run just haproxy or just keepalived, and you might want to use them separately from rgw.
A: So, for example, you might want to have two haproxies in front of a bunch of rgws and then use round-robin dns; you might not want to use the keepalived with a virtual ip, for example. That seems like a valid use case.
A
You
might
also
want
to
use
proxy
with
something
other
than
rgw
like
you
could
use
proxy
with
nfs.
I
think,
actually,
I'm
not
sure
about
no.
A: Okay, but you could use haproxy... or sorry, you could use keepalived with nfs: you could have a virtual ip in front of the nfs server, or even cifs once they support that too, and not use haproxy in that case. So that was one thing. The other thing is, right now it takes every rgw daemon in the cluster and puts those all in the haproxy config.
A: It's used by people, and so I... I couldn't actually use it, yeah. And then it ran into some other issue where it kept trying to deploy the keepalived part, and then it would fail, and then it would delete it. And so I couldn't actually see what was going wrong. It looked like there was some issue with the way it was getting scheduled or whatever, but I wasn't really sure.
A: Yeah, well, I guess I looked at the whole thing and then I stepped back, and I'm wondering if it makes more sense to make it two separate services: one that's just haproxy, that does the haproxy part. I mean, I think it's basically just a matter of taking that code and putting it in two separate files, and then having another one that's just the keepalived part, and then adding...
A: So you say which service it is that you're putting it in front of, because then you could use the virtual ip with nfs; you could deploy haproxy without the virtual ip stuff, which in my case wouldn't work, but it would still be useful.
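For illustration, the split being proposed could look roughly like two independent specs instead of one combined one. All of the service types and fields below are hypothetical, not an existing cephadm API:

```yaml
# Hypothetical: a standalone haproxy service pointing at an existing
# rgw service, usable with or without a virtual IP in front of it.
service_type: haproxy
service_id: rgw-lb
placement:
  count: 2
spec:
  backend_service: rgw.myrealm    # which service to load-balance
  frontend_port: 8080
---
# Hypothetical: a standalone keepalived service providing just the
# virtual IP, usable in front of haproxy, nfs, or anything else.
service_type: keepalived
service_id: rgw-vip
spec:
  virtual_ip: 192.168.1.100/24
```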
C: Our aim was to do something similar to the functionality that we have now in the ansible world, okay: to provide a list of parameters and to configure all the system, okay, in order to make it more transparent, or to avoid any kind of thinking about configuration for the final user. Okay, but obviously it's more flexible in the way that you are suggesting, okay, so we can change that; let's go to the change.
A: Yeah, okay, I wanted to check with everyone first. Yeah, I mean, it's nice that it's all in one go, but I don't know that it'll be that much harder to separate it into pieces for the user, because there are still like 20 different parameters that you have to fill in with the configuration. The other thing was that you have to specify all the users and passwords in the service spec yaml.
C: No, okay, this is more problematic, because the passwords reported to the final user are something that is supposed to be secret.
F: Okay, I don't think everything...
A: The other thing I noticed was that, in the cephadm code, I think this is the only case where the service, like the spec service, produces two different types of daemons.
A: Yeah, and I noticed that there's a bunch of complexity in the code to deal with that one situation. I don't know if that's not gonna come up later, but it would simplify things a little bit. I think that's actually the source of the bug I was seeing where it kept deleting the daemon, because I think the daemon type didn't match.
G: Yeah, this one was definitely different because there were like the two services. I think the only other thing that does that is iscsi, but they do it differently; it's like the tcmu-runner container that also gets deployed with that, but they do that... that's deployed differently.
A: Okay, well, I might take a stab at that if I have some free time: probably just not touching the ha-rgw at all, but creating a new haproxy one, copying the code over, and then, once all that stuff works, we can delete the old one, or convert automatically between the two, or something.
A: Okay, that's pretty much all I have. I think the only other real item on the list is the osd memory auto-configuration, the auto-tuning, that we kind of tabled a while back, but I think that's the main thing I know of that's a gap between what ceph-ansible does and what cephadm does.
A: Yeah, yeah, and all... All the urgent bugs are closed, which is kinda cool; I think overall we're looking pretty good. I think the main failure I'm still seeing in qa is that stupid... there are like three bugs from the Ubuntu 20.04 kubic version of podman that keep popping up.
C: Well, it runs okay, and we have the basic functionality. Okay, I tested it, I think, one month ago, more or less, because there was a bug that prevented the integration tests in the project from running, okay, and I fixed the problem, and, well, at least it starts, it is available, and it provides the basic functionality, okay. But we need to test it under load; overall, the functionality is related to the different services, rgw and the file, object, and block services.
C: So I think... what is the next big point to start? Okay, maybe... what I think is that probably we're gonna start with that in April, okay. I think that Pacific is going to freeze this week, or...
C: Yep, yeah, okay. Okay, so I think that we can expect to have a limited number of bugs or problems also in downstream, okay. So maybe it's time to start also to do new things, and one of the first things that we need to do, obviously, is look at what the difference in functionality is.
I: I wanted to talk about... I was just gonna add, though, yeah: our plan is, as you know, when we have our new employee starting on the first, that at some point we're going to transition you off of this and onto rook, correct?
C: Well, I... I'd like to do that, okay, and to move to the rook orchestrator, even to start to do things in the project, okay. But this is open, okay, so more people can work on the rook orchestrator. So I agree, as we discussed.
I: So, but, so, Sage, we will have someone starting on it hopefully soon, and, you know, overall there is no downstream need at this point, you know. So it's really more of an upstream discussion, yeah.
A: I mean, so it shouldn't be a big deal, and yeah, you know, over time we'll just balance all of our existing work with it, and hopefully we still have some help from Michael and others. I'm not sure if Michael will transition over to rook as well, but we'll see what happens.
C: Francesco and John are present, okay, from the OpenStack team. I don't know if you want to comment on something.
J: Thanks for your support, and, you know, TripleO is using cephadm, and thanks for fixing... helping merge that fix recently, so everything's good. And thanks for working on those new features you were talking about; we'll look forward to that.
A: You saw the subnet thing is in there now, as well as the rgw port assignment scheduling stuff.
D: Hey, yeah, I just saw the PRs that got merged, so we should be able now to specify the network for rgw, right? Yeah, I just saw that a few minutes ago, so I didn't have too much time to check what changed, but as far as I understand, it's not a spec field but a top-level field of the spec, the network key, and that's something we are going to test soon.
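A minimal example of the spec shape being described, with `networks` at the top level rather than under `spec:` (the realm/zone id, subnet, and port are placeholders):

```yaml
service_type: rgw
service_id: myrealm.myzone
placement:
  count: 2
networks:
  - 192.168.122.0/24   # rgw daemons bind to an IP in this subnet
spec:
  rgw_frontend_port: 8080
```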
D: We are trying, as John said, with the OpenStack CI that's using cephadm just for rbd, and now we are trying to integrate more services. rgw will be the first one, with the new containers and the pending bits for Pacific that now we are able to build on cbs on CentOS. So we'll keep you posted on that for sure. Okay.
D: Yeah, I have just one question on this feature: is it the same for all the other daemons, or is it just for rgw?
A
Just
rgw
right
now,
it's
the
only
one,
that's
doing
ip
assignment
like
that.
A
For
the
other
demons
for
the
theft
demons,
you
set
the
public
network,
that's
config,
yeah.
J: TripleO OpenStack Wallaby is going to GA soon, but we're trying to time it so that the default version of ceph is Pacific.
J: But I know Pacific comes out soon, and then hopefully there'll be enough time for testing in between so that we can switch the default version to Pacific. Presently it's Octopus, so when Pacific comes we'll test with that and try to switch the default with Wallaby.