From YouTube: Ceph Orchestrator Meeting 2022-05-04
Description
Join us weekly for the Ceph Orchestrator meeting: https://ceph.io/en/community/meetups
Ceph website: https://ceph.io
Ceph blog: https://ceph.io/en/news/blog/
Contribute to Ceph: https://ceph.io/en/developers/contribute/
What is Ceph: https://ceph.io/en/discover/
A
All right, we can start, I guess. Apparently I only have one topic in here, and it's something we could push back a couple weeks but still want to talk about eventually, which was: what do we want to do for the binary refactoring? I guess we start with an update on the current state. Mike, have you looked at the thing recently, the zipper stuff or whatever it was?
B
It's been a while. So we have this PR that's been out there for, I don't know, maybe almost a year, that we keep rebasing. Right now it's just basically a really basic Python script that creates a zip file, an executable one, out of the Python binary, which works okay. It would be interesting to investigate other methodologies for building that zip file that are a little bit more pythonic, but that works for now. We also went through and just kind of hammered out the unit tests and teuthology testing and things like this, to get things passing in the build system and in our test frameworks; I believe we got fairly close to that. We were just down to the point of confirming that this was actually posted somewhere, such as download.ceph.com, during a formal stable release, plus some documentation tasks. However, I believe the branch has kind of become stale, so we probably need to rebase it and clean up any merge conflicts, but I believe that's the current state.
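For reference, the "more pythonic" way of building that zip file could be the standard-library `zipapp` module. This is only a sketch of the idea, not the actual build script in the PR; the directory layout and names here are made up:

```python
# Sketch: package a directory of Python source into a single executable
# zip archive using the stdlib zipapp module. The "app" directory and its
# contents are illustrative stand-ins for the real cephadm sources.
import os
import subprocess
import sys
import tempfile
import zipapp

def build_zipapp(src_dir, target):
    """Create a self-contained executable zip from src_dir."""
    zipapp.create_archive(
        src_dir,
        target=target,
        interpreter=sys.executable,  # shebang embedded in the archive
        compressed=True,
    )
    os.chmod(target, 0o755)  # make it directly runnable
    return target

if __name__ == "__main__":
    # Build and run a toy archive end to end.
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "app")
        os.makedirs(src)
        with open(os.path.join(src, "__main__.py"), "w") as f:
            f.write("print('hello from zipapp')\n")
        out = build_zipapp(src, os.path.join(tmp, "app.pyz"))
        result = subprocess.run([sys.executable, out],
                                capture_output=True, text=True)
        print(result.stdout.strip())
```

The resulting archive also runs directly thanks to the embedded shebang, which is the property that lets a single downloadable file behave like the old single-file script.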
A
It's a matter of the timing of when we're going to do it and making sure that it's working. I don't remember what it was that was missing before. I know you were talking about Dan Mick's work, the builds getting posted in the right spot, but...
B
Yeah, well, there were two things he did. Let's see: on the dev builds, I believe is what they're called, he had added a push-to-chacra thing, and that wasn't quite working totally right. I think I modified our teuthology tests to actually pull from chacra, to do a little search, find the binary, and pull it for the appropriate distro. Those...
B
Those ones actually did pass, and that was right before we decided to kind of put this on pause, so we did get that part of it done. I think the outstanding question was: during these builds, do we actually post the binary as an artifact to, like, download.ceph.com? I think Dan was hoping that would just happen, but we never actually confirmed it, because we hadn't done a stable release yet. So... but we...
A
Yeah, if those things are working, then we basically have this in a good state where it would be ready, I guess, and we can just sort of plan around it: when you want to do it and everything.
A
I guess that was sort of the point of the topic: to decide that. Because we have this whole thing where we want to refactor this, and it seems like we're sort of close to being able to. But it's too big of a change to put into some minor release, and we don't want to do it too far away from the next major release, because then you have to do all the backports, which is really painful.
C
Yeah, so I do have a couple comments on that piece about the backports. If you don't mind, I'd like to kind of summarize how I think it works, because if you correct any major mistakes I've made now, I won't go down the wrong path with some suggestions I'd like to raise. So I do believe that the code of cephadm is literally just renamed to cephadm.py; it just gets a new name with a .py extension.
C
All right, so first off: is there any reason why we shouldn't get this, you know, merged sooner rather than later? And again, I have a riff on the backport thing, but that's assuming the backports weren't difficult. Is there any reason to pause on this?
C
I agree. So to me, this is like step zero of the refactoring: making it possible to do the refactoring at all. And if we get it merged sooner rather than later, I think it should hopefully shake out any other issues with the build process or with people downloading the right thing. The other thought I had is that once this is merged, you could still have something in the tree called cephadm, even if it was just a Python script that literally, when you ran it, printed "this is not how you get cephadm anymore", kind of a jokey thing. If it's more of a training issue, getting people to use the built version, we could do something like that. And if other tools were like, "oh, I pull cephadm out of git, blah blah blah", there are other things we could do on top of the new compiled version (compiled as in built, you know what I mean). After that, we could talk about how we start actually breaking up the new giant cephadm.py. But up until then, the backports are literally just a matter of changing the file name when you're doing your backports. Does anyone disagree with that?
D
Okay, I think whatever change we make, we have to keep in mind that there are a lot of people using direct links to the binary, right?
A
Yeah, as far as master goes, we can sort of break things in there, right? And then, obviously, if we do the actual refactoring, the backports become an issue initially. But if we just have this one cephadm.py, we can do the backporting fine, and then it doesn't really matter if the way you get cephadm is different in master; we could just introduce it in the next release. Then again, the issue just sort of pops up eventually, when we want to do the real refactoring.
A
You know, when do we want to do that? Because we don't want to change the way you get the binary in Quincy or Pacific, right? Yeah, we have to balance that. But it does seem like, as far as the first step of this goes, specifically this cephadm zip stuff, we could merge it.
A
Maybe on the early side, just to make sure it's working and everything, as long as we don't do the actual splitting up of the file until a little closer, or unless we have some other strategy around it such that it may still backport well or something.
B
On the topic of splitting up the file and the timing of that, I think one of the things that would be kind of imperative for us to do is write much better unit tests around it, to make sure we're not breaking backwards compatibility for existing functionality during the refactor.
B
I think we can add unit tests at any time. I've slowly been trying to do that for each one of the various CLI commands as I've gone, but, you know, it's kind of daunting; there are quite a few. We have a bit of technical debt there.
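As an illustration of the kind of per-command unit test being described, here is a minimal, self-contained shape using only the standard library; the command and flags are invented, and cephadm's real suite has its own fixtures:

```python
# Hypothetical shape of unit tests for a single CLI command. The
# "example-cli" program, its "ls" subcommand, and the --format flag are
# illustrative only, not cephadm's actual interface.
import argparse

def build_parser():
    """Tiny stand-in for a cephadm-style CLI with subcommands."""
    parser = argparse.ArgumentParser(prog="example-cli")
    sub = parser.add_subparsers(dest="command", required=True)
    ls = sub.add_parser("ls", help="list deployed daemons")
    ls.add_argument("--format", choices=["plain", "json"], default="plain")
    return parser

def test_ls_defaults():
    args = build_parser().parse_args(["ls"])
    assert args.command == "ls" and args.format == "plain"

def test_ls_json():
    args = build_parser().parse_args(["ls", "--format", "json"])
    assert args.format == "json"

if __name__ == "__main__":
    test_ls_defaults()
    test_ls_json()
    print("ok")
```

One test like this per command at least pins down the accepted flags and defaults, which is exactly the surface that must not change during a refactor.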
A
Okay, and I kind of like these ideas. We can sort of start working on getting the zip stuff merged, because it doesn't seem like it'll actually be a big problem as long as we don't immediately start splitting up the file, and then we want to focus next, I guess, on making sure things are stable and that we have a solid set of unit tests behind it.
D
We have a cephadm binary component, and right now I think we have seven or eight bugs for it.
D
Yeah, probably we have to do some triage, just to make sure that we are putting the right tags on the bugs. Then, yeah, we'd have more confident numbers.
A
That's fine. It is going to be a long-running thing, so it definitely could have its own pad. I guess for today I'll just have notes in here, and then...
A
...copy and paste, yeah. I'll have to figure that out too, but yeah, that seems like it would be a good idea, especially if we want to track, say, the individual issues that we want to get fixed first, or the things getting unit tested, right? Yeah, that could be good. There would be a lot of info: there are a lot of bugs we want to fix, and a lot of things we decided to get tested. We could have a whole long list.
A
Yeah, that could be good, because I think there's definitely enough there that it could have its own pad.
A
I mean, I like the plan overall: sort of starting with this testing, getting this zipper stuff in, and then somebody, probably me, will try to triage some of the bugs and figure out which ones we want to get done for the binary before we do the refactoring as well. Then, once those things are really stable and everything, we can, you know, move on to the actual refactor, which I guess we'll have a new conversation about when we get there.
A
But I don't think we should right now, because we don't want to actually change it until a major release. Or, I guess, since you can break things completely in master, maybe we should.
A
I need to check the PR; I don't know, like, is there any documentation in there?
C
Yeah, actually, I'm interested in this topic. It was one of the bugs that Sebastian used to get me to kind of help out in this area, but I feel like I haven't done anything on it. So yeah, let me know if you want it done as part of this PR; if we're going to just do it as a follow-up PR, that's easy enough, but yeah, we can talk about it in the future.
A
Right, yes. I mean, I'm pretty satisfied with what we have there. Are there any other things you want to bring up about that?
C
One last thing, which is: when we deliver this via an RPM, or like the enterprise distros and stuff, are there any additional changes that will need to be made, or is it just kind of a done deal already?
A
All right, so I know there are packages, like cephadm packages and stuff, that people will install; I know there's one under Ubuntu, for example, but they don't seem to get updated properly anyway. I know the Ubuntu one is still some really old Octopus one, so that should probably be something we just look over in general: having proper packages that come from, like, the Ceph repos, instead of just relying on the distro ones. Anyway, I think we already are trying to; I'm pretty sure it's in the release.
E
Oh, finally, thanks. I was trying to get that fixed during the course of the call. Yeah, now that you can hear me: I'm not sure what to say, but I have a question about the documentation.
E
In our current docs we have the URL to the binary, which points to GitHub and the branch, octopus; it points to octopus and such. What I was not sure about is: if we compile the binary, how would this URL need to look, and is it true that it would only be updated on minor releases?
A
Yeah, I'm guessing we'd either have to have just one that gets uploaded with each release, or maybe, the same way that the Ceph packages themselves get updated, we have like a master one that gets put up there when we do the actual builds and everything, or something like that.
A
Maybe we could add a package up there or something that people could go to as, like, a latest version if they wanted that, and then also have one for the minor releases that people could use as well, or something like that. It would kind of tie in, I guess, to the way it's already being handled for the teuthology tests.
A
I
guess
because
I
think
for
sure-
or
I
don't
like
if
correctly,
but
I
I
thought
that
we
said
earlier
was
the
way
that
I
was
getting
handled
was
that
it
was
sort
of
building
something
up
there
and
then
it
was
pulling
it.
I
was
using
that
for
the
test.
B
So the tricky part is that that goes into chacra, and you have to do some pretty clever searching and querying to find those, and I don't think they live forever.
B
So it's not very user-friendly to consume those. The most user-friendly thing is download.ceph.com, and we do have, say, an rpm-quincy directory, for example, which is relatively stable, but those are really just the stable releases, the point releases. There isn't really a nightly build for master that I can see that builds get posted to.
A
Yeah, yeah. I feel like the minor releases, and the major ones, will be able to handle it, because I know, I think even with Quincy, they built like a cephadm package and put it up there, but it just had a copy of the binary in it; it wasn't really anything special. We could just have them put our zip in there as well; we'd just coordinate with whoever's sending that stuff up.
D
And with this new approach, are we going to build the binary automatically every time we submit some PR, and generate it and put it on GitHub, or how are we going to handle this?
B
Yeah, I think there's a time-to-live on each one of those Ceph CI builds, because Jenkins goes through and purges them occasionally, right? They only live long enough to do some teuthology testing. The other really difficult thing is that the chacra interface is not terribly intuitive.
C
Okay,
so
I
think
going
back
to
back.
I
think
that
was
patrick's
earlier
statement,
which
is
you
know
the
documentation
today
is
points
a
person
at
you
know
github
branch
name
path
to
the
binary
that
always
works
when
we
start
building
it.
C
C
B
Yeah, right. That's kind of the awkward thing that I ran into with the documentation. Like, a very stable release is pretty easy to document, but this portion of it, like if you want it from the master code, it's like: do we write a user-facing or a developer-facing doc, split that, and then say, you know, git clone this and build it yourself? Or do we want to post it somewhere?
E
Would it be an issue if the developer just compiled it themselves? I mean, every developer needs to compile Ceph, and if there's a way to just compile cephadm, which would be a lot quicker, would that be an issue?
C
It's probably not the worst-case scenario, for sure. The question is: are developers the only people who use the latest stuff? If you're saying, "I'm just gonna try out whatever's on master"... I don't know. It's a question, that's for sure, in my mind.
B
Yeah, this might be something we need to open up; maybe we can ask David Galloway or Dan what they would suggest here.
A
Yeah, it'd be nice if we could handle them the same way we do the containers. Like, the latest one we've built for the master container, for example, the master devel one, is just online somewhere. Yeah, it'd be cool if we could do that for this binary as well, so you at least have one from, like, the last day.
A
Yeah, we'll have to see what's possible, I guess, because it would be really nice if that's how it could work: as part of the nightly build it could also post to, say, some constant URL that just kept getting updated with a new version of the cephadm binary, like the compiled one. That would be really convenient.
E
If you're interested in how this began, which I wasn't able to talk about earlier: the idea was not only to be able to split up the code of cephadm into several files, to have it more structured and organized, but also, and this seemed to be the goal at least when they created the pull request, I don't know, last year or so, to be able to add external dependencies into that binary.
A
I remember that being a big problem, especially the YAML parsing in particular; I remember you couldn't do that properly in the binary, for example. That would be good to do as well.
A
All right, do you have anything else to say about that topic for now, or do we have enough that we can sort of get started?
A
Okay, moving on then. The next one we have in here is about HA NFS, yeah. Let's see, "ingress" in here. Wait, so is this a tracker?
A
Okay, I mean, I think me and Mike and Ramana are all aware of it. Ramana, do you want to give an overview for everyone else of what's going on with it?
F
Sure, Adam. So basically we, that is, the OpenStack team and the CephFS team, were trying to set up the cephadm-deployed NFS service over CephFS, and also deploy the ingress service over NFS, to allow sharing CephFS exports that can be mounted by clients, OpenStack client VMs, for example. And what we found out was that if we restrict NFS exports to certain client IPs, and we try to mount the NFS exports from those particular IPs that should have access to the server...
F
...the server denied access to those requests. The reason being that the backend NFS server sees only the haproxy that's in front of it, instead of the client IP.
F
So this seems to be a problem that's already known, and I researched it. Basically, you'd have to set up haproxy to run in what they call a transparent mode, to allow the back-end server to see the client IP and not the haproxy IP.
F
I'm not sure how we can do that. I mean, one of the ways to achieve this is to have the back-end server support the PROXY protocol, which the backend Ganesha server doesn't, so that is out of the question.
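For context, the PROXY protocol mentioned here works by prepending a plain-text line to the TCP stream so the backend learns the real client address; a minimal sketch of parsing a v1 header (the full spec also defines an UNKNOWN form and a binary v2 format):

```python
# Sketch: parse the human-readable v1 PROXY protocol header that a proxy
# can prepend to a TCP stream so the backend learns the real client
# address. Only the TCP4/TCP6 v1 form is handled here.
def parse_proxy_v1(data):
    """Split off a PROXY v1 header; return ((src_ip, src_port), payload)."""
    header, sep, payload = data.partition(b"\r\n")
    if not sep or not header.startswith(b"PROXY "):
        raise ValueError("not a PROXY v1 header")
    parts = header.decode("ascii").split(" ")
    # parts == ["PROXY", "TCP4", src_ip, dst_ip, src_port, dst_port]
    if len(parts) != 6 or parts[1] not in ("TCP4", "TCP6"):
        raise ValueError("unsupported PROXY header")
    _, _, src_ip, _dst_ip, src_port, _dst_port = parts
    return (src_ip, int(src_port)), payload

if __name__ == "__main__":
    # e.g. what an NFS backend would read first if it spoke the protocol
    stream = b"PROXY TCP4 198.51.100.7 203.0.113.5 52412 2049\r\n..."
    client, _rest = parse_proxy_v1(stream)
    print(client)  # real client address, not the proxy's
```

Since the backend must explicitly consume this header before the real protocol bytes, both sides have to support it, which is exactly why an unsupporting Ganesha rules this option out.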
F
I
I
still
need
to
figure
out
if
there
are
other
ways
that
we
can
set
this
up
a
haproxy
in
a
transparent
mode.
If
that
isn't
possible-
and
I
don't
know
what
we
need
to
do
or
maybe
come
up
with
a
different
solution
to
set
up
our
table
floating
ips
for
the
ganesha
demons.
F
I had a question for Mike, because Mike was able to set up the ingress service with the Ganesha service and was able to do some HA testing. Did he encounter this issue? Mike, were you able to... did you ever test setting client-IP-based access restrictions and trying to mount the exports?
A
So, for my info, because I'm not super up to date on how all the HA stuff works: what is different about your deployment, Mike, that makes it possible there?
B
A small nuance, I suppose, in how the networking is set up. In my case, I just added an additional IP address to my bridge network.
B
Then
I
use
that
range,
and
so
I
there
may
or
may
not
be
a
bug
around
like
there's
an
additional
option
in
the
spec
file
to
say,
specify
which
interface
you
want
to
use,
rather
than
trying
to
infer
the
cider
that
mask
from
that,
so
there
could
have.
There
could
also
be
a
potential
difference
there
like
say
if
you
have
like
some
high
p
address
range.
That's
not
discoverable
through
the.
B
cephadm is able to introspect all of the interfaces on the host machine and then figure out, okay, this IP address for the VIP is within the CIDR range, and so therefore I should bind to this address, or to this interface. But I don't think we've specifically done a lot of testing around that additional option that allows you to actually get around this, yeah.
F
We won't be able to enforce that with the current solution, the way haproxy is set up. And if we can't figure out how to set up haproxy in this transparent mode with the backend NFS server, then yeah, like I said before, we might have to come up with a different solution for providing stable IPs to the Ganesha servers.
F
I
mean
it
works
for
mike,
like
mike
said
it
works
for
mike,
because
basically
his
exports
would
have
would
give
universal
access
to
all
client
ips.
So
the
ganesha
server
keeps
saying
the
proxies
ipn
since
all
ips
have
access
to
the
export.
F
It's difficult, yeah. I'm currently looking into that to see if it's feasible with the NFS server backend, so I'll research that, and I plan to reach out to other NFS folks and see if they know how we can do this.
F
I wanted to bring attention to this issue so that we're all on the same page, and yeah, we need to figure out the next steps.
A
All
right,
well,
I
know
mike's
still
testing
his
stuff
and
then
ramona
says
like
you're,
asking
people
around
about
transparent
mode
stuff
as
well,
so
hopefully
out
of
those
two
things.
Maybe
we
come
up
with
something,
or
at
least
be
confirmed.
Whether
or
not
this
is
is
possible.
It'll
work.
A
All
right,
thanks:
do
you
have
anything
else
you
want
to
say
on
this
topic,
information
or
ideas
or
whatever.
A
Yeah,
all
right
and
we'll
just
go
with
that,
we'll
go
with
the
word
best
skinny
and
we'll
be
doing
so.
I
guess
and
move
off
to
maybe
I
guess
reevaluates,
maybe
next
week
in
the
weekly,
we'll
keep
this
like
a
running
topic
where
people
are
at
with
it
having
good
ideas,
and
that
was
the
last
topic
we
had
in
the
weekly
other
pad
thing.