From YouTube: Ceph Orchestrator Meeting 2021-09-21
A
Hi everyone, and welcome to today's orchestrator meeting. I'm still recovering a bit, so you're not going to hear too much from me.
B
Yes, this is me again. I have been taking a look at the current state, and I think that at this moment we have the basic functionality. But maybe there are some details that should be discussed, to see whether they are worth implementing, because there are differences between how things are managed in Rook and how they are managed in cephadm.
B
For example, network management with monitors. There are several things where I have to think through the details. Maybe we can discuss the details later, but I wanted to know what the position is on a couple of things. The main one is the iSCSI implementation: with the cephadm orchestrator we have the possibility to deploy iSCSI gateways and to support that use case.
D
I can add a bit more to that. Just like Travis said, there hasn't been much traction so far; however, downstream has raised epics which involve working with Windows nodes, just to get some sort of standard integration, and we wanted to go with the iSCSI approach.
D
No,
so
it
wasn't
for
nine,
but
have
it
hasn't
really
resurfaced
in
410,
but
at
some
point
we
might
have
to
implement
it
just
to
more
or
less
like
natively
work
with
windows
notes.
Instead
of
requiring
them,
I
mean
requiring
people
to
deploy
with
the
native
driver
that
we
have.
D
Although I have no idea what the state of that driver is. I know we have, I think, someone working on it right now; I think we have a contractor working on the Windows driver. But yeah, I think we're just improving it anyway. So maybe next year, I guess, if the OCP Windows worker node integration is there and ODF has it.
D
I guess ODF wants to integrate that with Windows nodes. Then we might have to do it in Rook, deploying iSCSI targets and gateways, or just gateways, I guess. Which, yeah, I'm not really looking forward to, honestly, because at least a few years back it was a bit painful to deploy that in containers. I'm not sure what the state of the art is today.
A
My take on it would be to just let Rook decide for itself whether it wants to implement iSCSI or not, and if that means there is a difference between how the management module is implemented in cephadm and in the Rook manager module, I think that's okay. I think we shouldn't force implementing a gateway just out of the need to have 100% compatibility between those two orchestrator management modules.
E
I mean, I have, I guess, a high-level, maybe dumb question: should this be something that the CSI driver is responsible for? I don't really know how the claim process works, but if you have a Windows VM or whatever it is that needs to use iSCSI, shouldn't this be set up on demand when something needs to attach to it, with the CSI driver handling it?
D
Yeah, but we still need to deploy the Ceph iSCSI gateways for this to work, and that's what Rook would be doing. But yes, you would also have a CSI driver that does iSCSI so that it can actually map the devices, the targets; I'm not sure what the right iSCSI term is, so let's call it mapping. We still need to get the iSCSI gateways, the ones from Ceph, deployed so that the initiator can actually connect to them.
D
Yeah, something like this. I don't think we'll be deploying the CSI one, because it's already a generic driver, and I don't think we really want to embed all the CSI drivers. We would just deploy the gateways and then expose them via services, I guess, and then somehow pass that config down to the CSI driver so that it knows where the gateways are, and then it can map the devices, reach out to Ceph, pass the credentials as well, things like that.
D
But then we still need some kind of hook as well, to use the iSCSI CLI to create the volumes, the RBD images. So yeah, there is work to do for sure.
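A minimal sketch of what such a provisioning hook might look like, assuming hypothetical pool and image names; the gateway-side registration is left abstract because the exact gwcli or ceph-iscsi REST call depends on the deployment:

```python
#!/usr/bin/env python3
"""Sketch of the provisioning hook discussed above (not an existing API)."""
import subprocess

def create_rbd_image(pool: str, image: str, size_mb: int) -> None:
    # "rbd create" is the standard CLI for creating an RBD image.
    subprocess.run(
        ["rbd", "create", f"{pool}/{image}", "--size", str(size_mb)],
        check=True,
    )

def expose_via_gateway(pool: str, image: str) -> None:
    # Placeholder: register the image with the iSCSI gateways (e.g. via
    # gwcli or the ceph-iscsi REST API) so an initiator can discover it.
    raise NotImplementedError("gateway registration is deployment-specific")

if __name__ == "__main__":
    create_rbd_image("rbd", "win-vol-1", 10240)  # hypothetical names, 10 GiB
    expose_via_gateway("rbd", "win-vol-1")
```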
C
Yeah, a more general question about the manager module for Rook too. So, Juan Miguel, when you say... wait, did we lose Juan Miguel? I think maybe his connection dropped. All right, well, maybe somebody else knows: when we say basic functionality is almost there, does that mean feature parity with the cephadm module, or what exactly does basic functionality mean? Can we use it from the dashboard to configure Rook, to create OSDs? I mean, that's a bigger one; they're probably not there yet, but...
E
Although we haven't actually tested the dashboard part, the orchestrator CLI works, NFS works, RBD mirror works, and a subset of the OSD stuff works: we can support drive groups that use whole devices, but not the ones that have partial DB devices. That isn't possible until we have a smarter local storage operator. But much of it works.
A
Yeah, I think, at least in my experience, it makes little sense to use the orch CLI in a pure Kubernetes environment, because it feels a bit non-Kubernetes-like.
A
I mean, it's already fun to use: just calling ceph orch apply rbd-mirror, and then the CRs are automatically pushed into Kubernetes and Rook does the rest. It's definitely fun to use. The question is: do we want to recommend that the Rook user community switch over to using the Rook manager module or not? I think that's up to you.
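As an illustration of the flow just described, a small sketch; the namespace and CRD name follow Rook's defaults and are assumptions about the setup:

```python
#!/usr/bin/env python3
"""Apply a service spec via the orchestrator, then look at the CR that
the rook mgr module creates on the Kubernetes side."""
import subprocess

# Ask the orchestrator (rook backend) to deploy rbd-mirror daemons.
subprocess.run(["ceph", "orch", "apply", "rbd-mirror"], check=True)

# The rook module translates that into a CephRBDMirror custom resource.
subprocess.run(
    ["kubectl", "-n", "rook-ceph", "get", "cephrbdmirrors.ceph.rook.io"],
    check=True,
)
```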
B
Yes, we have something that works, but it is a manual deployment, using the Ceph Grafana image that includes the Grafana dashboards. But I think that, in the same way that in the cephadm orchestrator we have the possibility to deploy the monitoring stack automatically by default, in order to configure the dashboard and prepare everything to be ready to work, maybe we can do the same in the Rook operator.
B
But what are you saying? Just to provide, for example, the CRDs in order to deploy these components, and to provide instructions for doing the manual configuration? Or to include the deployment of these components inside the operator?
B
I mean, for example: in the same way that we now have a monitoring section in the cluster CRD, which only affects Prometheus, I think it would be nice to have that section improved with more possibilities, in order to deploy all the other components.
C
Some context that could be helpful: Prometheus isn't just deployed for Ceph. A Kubernetes administrator could have Prometheus running for their entire Kubernetes cluster and want to link Rook, or rather Ceph, into that. So there isn't necessarily a Prometheus instance that is started just for the Ceph cluster.
B
If we do not include this possibility, well, part of the Ceph dashboard is not going to work, because it uses Grafana and it uses the rest of the elements of the monitoring stack.
D
So maybe the UI of the dashboard can display something: if the Rook operator is enabled, then when the user tries to open such a page, we can maybe display instructions about what to deploy. But the problem is that Prometheus, for example, is super popular in Kubernetes, but it's not necessarily deployed in Kubernetes; it can be outside of Kubernetes too. So it's a little bit strange to even recommend having it deployed as part of Kubernetes.
D
So we could just say: if you're looking at using it and getting the full experience of the dashboard, then you need this, this, and that. Of course we can write a little documentation on how to achieve it, but in the end it's the user that has to decide. We can provide guidance for sure, but it will have to be external links and so on.
B
What do you think is the order of priority, or the preference, among the things that we now have to do in this area?
E
You're talking about what's on that list, the things that are missing for the Rook manager module? Is that what you're talking about? Yeah? Well, sorry, before that I have a really quick, dumb question on the monitoring part, and sorry if you already mentioned this: does Rook do the configuration integration to tell the dashboard how to talk to the Kubernetes Prometheus, so you can do all the graphing stuff?
B
The problem is that, at this moment, the only component of the monitoring stack that is deployed in the Kubernetes cluster is Prometheus: we have documentation for deploying only the Prometheus server, and that's all. So we do not have information on the deployment of, for example, the node exporter.
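For context, the manual wiring under discussion maps onto the dashboard's standard settings; a sketch, with placeholder service URLs standing in for whatever the administrator actually runs:

```python
#!/usr/bin/env python3
"""Point the Ceph dashboard at an externally managed monitoring stack."""
import subprocess

def ceph(*args: str) -> None:
    subprocess.run(["ceph", *args], check=True)

ceph("dashboard", "set-prometheus-api-host",
     "http://prometheus.monitoring.svc:9090")    # placeholder URL
ceph("dashboard", "set-grafana-api-url",
     "https://grafana.monitoring.svc:3000")      # placeholder URL
ceph("dashboard", "set-alertmanager-api-host",
     "http://alertmanager.monitoring.svc:9093")  # placeholder URL
```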
E
It's kind of counterintuitive that you have a more difficult process of getting all these things integrated when you're in Kubernetes, where in principle you should have all the tools at your disposal, than when we're setting things up separately. But yeah, if there's a way to address that, that would probably be the first place I would start, because the thing that makes the dashboard really usable is being able to get all that monitoring stuff.
E
But after that, I think the main thing is the drive groups, the OSD deployment. That's sort of the key thing that you need to make work better for real deployments: complex drive groups, I guess, and also drive groups that consume storage classes. I think that's the other gap, and I mean, we probably have to introduce some sort of new abstraction there.
E
I mean, I suspect it'll boil down to a drive group that has a new property called storage class, and a count or something like that; I suspect that's how it'll be expressed. But we have to have some way of providing an inventory of available storage classes through the orchestrator interface, which Rook will implement but cephadm won't. There must be some way of collecting that, and then somebody has to think about what the dashboard should show and what the CLI should show.
B
Okay, so what you are saying is that the way to do it is to add a new attribute to the drive group, in order to define which storage class we need to use, and then just try to create the OSDs using those pieces, passing them in as PVs using that storage class. So I think that is it.
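To make the idea concrete, a sketch of what such a spec might look like; the storage_class and count fields do not exist in today's drive group spec and are purely hypothetical:

```python
#!/usr/bin/env python3
"""Hypothetical drive-group extension for StorageClass-backed OSDs."""
import yaml  # PyYAML

spec = yaml.safe_load("""
service_type: osd
service_id: pvc-based-osds
placement:
  host_pattern: '*'
# Hypothetical fields: consume PVCs from a Kubernetes StorageClass
# instead of enumerating physical devices.
storage_class: local-ssd   # assumed name, not a real field today
count: 6                   # number of OSDs/PVCs to request
""")

# A Rook-backed orchestrator could translate this into Rook's existing
# storageClassDeviceSets (volumeClaimTemplates using that StorageClass).
print(spec["storage_class"], spec["count"])
```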
A
Okay, so do we still have CI tests ensuring that the Rook integration isn't breaking?
E
I think we have that now with the orchestrator Rook tests. It's pretty minimal right now: it's just doing ceph orch apply on things, and it's not actually verifying that anything comes up, but it would be a little bit of time well spent fleshing out those tests.
E
Yes, it turns out that Blaine helped identify that I was configuring Calico with the wrong pass-through mode or something: I was using VXLAN, which doesn't work properly in the lab for some reason, so I switched to IPIP encapsulation, and that works. So that part is resolved, and I have a pull request that turns Flannel back on; I'm not sure exactly...
E
I
can't
remember
exactly
why
I
stopped
using
flannel,
but
in
my
testing
at
least
final
seems
to
work
fine,
so
I
think
we
should
just
turn
on
and
if
we
hit
problems
again,
then
we
can
turn
it
back
off
and
document
this
time.
Why
it's
a
problem
but
yeah.
I
just
need
to
plot
one
on
that
pull
request
here.
E
Yeah, and then I guess there's still a backlog of pull requests that Joseph has put together that implement all these missing pieces. Oh, actually, there is sort of a big to-do here: cleanup.
E
So, when I went to test the ceph orch apply rgw pull request: when you're done and you try to delete the CephCluster CRD, or CR, to tear down the cluster, Rook refuses, because there's also another CR for the CephObjectStore, and it won't delete the cluster if there are dependent service definitions. So we have to decide how this should work.
E
I mean, it's probably a good thing that Rook does that; it's just a question of how we want to structure the tests, right? Like, carefully delete what they create.
H
Yeah, I came up with a temporary solution that I pushed to that PR branch. Basically, the reason that it doesn't delete the cluster CR is that there's a finalizer on the cluster CR: when you create it, the Rook operator adds that finalizer. What I've done is just manually remove the finalizer, but normally the Rook operator would do that itself if it detects that there are no other dependents.
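A minimal sketch of that workaround, assuming Rook's default namespace and CR name; note that forcing the finalizer off skips the operator's own cleanup, which is why it is only a temporary measure:

```python
#!/usr/bin/env python3
"""Force-remove the finalizer on the CephCluster CR so deletion proceeds."""
import subprocess

subprocess.run(
    [
        "kubectl", "-n", "rook-ceph",
        "patch", "cephcluster", "rook-ceph",
        "--type", "json",
        "-p", '[{"op": "remove", "path": "/metadata/finalizers"}]',
    ],
    check=True,
)
```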
B
In teuthology, okay... so, probably, I don't know, but I think that at this moment we have only one teuthology test for the Rook operator. It is creating the cluster, doing a lot of things, and deleting the cluster. Is that true?
E
All right, that's part of what the test is doing. I guess I would worry about deleting the finalizer as being too intrusive, cutting into what Rook is trying to do. Maybe we should just have a task that runs during shutdown that does an ls, and for any service types that we identify, like rgw, nfs, fs, mds, whatever it is, it'll just do a forced delete. So via the orchestrator we delete everything before it tries to delete the cluster, because then it should clean up.
E
That would have the benefit of exercising the cleanup paths, the deletion paths, in the orchestrator and such.
E
Yes, for the final part you could do ceph orch ls --format json or whatever, so you have that, and then just iterate over the JSON elements, and if the type is known, if it's in some list of types that we identify, which I think is just rgw, rbd-mirror, nfs, mds, whatever it is, then it'll just do a ceph orch rm on it.
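A sketch of that shutdown task; the set of dependent service types is taken from the discussion and may not be the final list:

```python
#!/usr/bin/env python3
"""List orchestrator services and remove dependent ones before teardown."""
import json
import subprocess

# Service types whose CRs block CephCluster deletion in Rook.
DEPENDENT_TYPES = {"rgw", "rbd-mirror", "nfs", "mds"}

out = subprocess.run(
    ["ceph", "orch", "ls", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout

for svc in json.loads(out):
    if svc.get("service_type") in DEPENDENT_TYPES:
        # Removing the service lets the operator delete its CR, which
        # in turn lets the cluster finalizer proceed.
        subprocess.run(["ceph", "orch", "rm", svc["service_name"]], check=True)
```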
G
The next topic: I'm curious what the current best practices are for running Ceph in containers as a Ceph developer who's modifying Ceph itself and wants to be able to quickly iterate, redeploy, and test.
A
Yeah, I alluded to this already earlier. We have two good solutions, I guess. One is vstart: I guess that's the fastest one, right? Just recompile, then restart the vstart cluster, and you're done. I guess that's the fastest way to get a new cluster. If what you need isn't supported by vstart, like if you have daemons other than mon and mgr, you have to use cstart.sh; that's the containerized one.
E
But there's also this new thing, because with the cstart approach you basically have to compile locally and then do cpatch, which is this kind of slowish process that updates your container image, and then you can use it. But isn't there, right, the shared folder approach? I guess that's the one, because that lets you...
G
Yeah, I remember, like a year and a half ago, I was playing around with the cstart stuff and trying to make a variant of cpatch that used a shared folder instead, so that it wouldn't need to copy any data.
G
But it's a little bit more cumbersome, and it would take more work to get it working well with cephadm and everything.
E
I think the main gap here is that it sort of only works if your development environment matches. We don't have a documented way to use a build container, so that from a different distro you can do your build inside an up-to-date container that has all the dependencies and build requirements.
G
Yeah, I've been thinking, longer term we really want to have an easy way for developers to build and test in containers, and that probably includes a local build environment that matches the container environment, so that you don't have to worry about what you're running locally. And once that's working, we can even use that for local teuthology testing too.
E
I think the containers that David built to do the make check testing could work as build containers, but it just needs to be documented and tested, and there should probably just be a script that launches it, because if you do docker run with all the right arguments, you get a shell inside and then you can run ninja.
E
Whatever it is, having a script that does that, so you don't have to think about it every time, would be nice.
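A minimal sketch of such a wrapper, assuming a placeholder image name; any image with Ceph's build dependencies would do:

```python
#!/usr/bin/env python3
"""Start a build container with the source tree mounted and run ninja."""
import os
import subprocess

SRC = os.path.abspath(".")        # the ceph.git checkout
IMAGE = "ceph-build-env:latest"   # placeholder build image

subprocess.run(
    [
        "docker", "run", "--rm", "-it",
        "-v", f"{SRC}:/ceph",     # share the tree instead of copying
        "-w", "/ceph/build",
        IMAGE,
        "ninja",                  # or "bash" for an interactive shell
    ],
    check=True,
)
```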
E
This came up, I can't remember where, but: would it be simpler to just have a Dockerfile in ceph.git, somewhere in the tree on the master branch, so you can just run docker build or whatever it is? Because the whole ceph-container project is so convoluted and complicated; it's so hard to understand, and I don't think we need almost any of what it does.
A
No, we don't need any of it. We just need to copy the generated Dockerfile and commit it into ceph.git, and then we are mostly done; then add a README with the command you run to build this container.
A
No, no, it's going to be a shell script, because the command will be super huge, especially when it comes to getting the Ceph version into the container.
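A sketch of what that wrapper might do; the CEPH_VERSION build-arg and Dockerfile location are hypothetical, and only the git describe step mirrors how Ceph derives its version string:

```python
#!/usr/bin/env python3
"""Bake the Ceph version into the image at build time (illustrative)."""
import subprocess

# Derive the version string from the git checkout.
version = subprocess.run(
    ["git", "describe", "--long", "--match", "v*"],
    check=True, capture_output=True, text=True,
).stdout.strip()

subprocess.run(
    [
        "docker", "build",
        "--build-arg", f"CEPH_VERSION={version}",  # hypothetical build-arg
        "-t", f"ceph:{version}",
        "-f", "Dockerfile",                        # assumed in-tree location
        ".",
    ],
    check=True,
)
```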
E
I think that would be a great project, to get that into the tree, and then once that's there, we can adjust the Jenkins scripts to use it instead of ceph-container. Yeah, that'd be great. And I don't know if we need all the other flavors, like the...
A
Right, yeah, we will still need the daemon container for ceph-ansible, but I think for the daemon container we can leave the ceph-container repository going and just maintain it for that, for the daemon container.
G
At that point, we could also have a dev target, like the make check containers that David built. You've heard of this too? Yeah.
A
If we really want to, we could even create a small Ceph cluster in make check.
G
This
still
depended
on
like
building
rpms
and
everything.
First.
E
It might not make sense to do that, because the packaging is a good description of all the dependent software, so we probably don't want to lose that; but I think Dan actually wrote something that basically extracts all the build dependencies into a dummy package and then uses that to install all the dependencies. So you can...