From YouTube: 2019-11-25 :: Ceph Orchestration Meeting
C
Yeah, I guess. Are you typically out for the whole week, or just Thursday and Friday?
D
Juan Miguel's father is in the hospital, so he went back to his hometown and is taking care of him right now.
C
...is overhauling the installation documentation for the ssh orchestrator, which is great, really great. And we have a new package-based installation mode for ceph-daemon: you can install ceph-daemon via an RPM package and then instruct the ssh manager module to make use of the ceph-daemon that was installed via packages.
C
No, then...
C
There, Sage, go.
B
Yep, sorry. Yeah, I'm just trying to get a bunch of pull requests merged. Sebastian, I'll probably ping you in the next hour with a bunch of those. Then, what's next on my list: some test improvements. I think the big next step, though, is really around the ssh orchestrator provisioning the rest of the dashboard dependencies, and I need to talk to Paul about that. It's like 3 a.m. in Australia; it's a shame he can't make this meeting.
C
Yeah, and he can't even attend the Wednesday meeting, because it's, I think, 10 p.m., still too late. Yeah. Do we want to make a new upstream orchestrator meeting where Paul can definitely attend? I think that would be nice.
D
That would be... I mean, his day starts at around 2:30 p.m. Eastern Standard Time, which would probably be bad for Germany, and it goes eight hours beyond, so there's no good time to pull him in. Maybe what we should do is have two meetings: one, maybe, with Sage, Travis and other people at a, you know, more time-zone-unfriendly slot, maybe at the end of the Eastern Standard Time day, and we could pick up Paul; that's his mid-morning. I mean, would that work, Sage? We'll just have to do it.
D
And that way at least we'll have continuity; we'll bridge the issues, because I know he has a lot of opinions. So do I, by the way. Sage, we have to talk at some point as well. Yeah, okay, yeah, just to make sure we're on the same page.
D
Sure, I mean, I'm going to be talking with him. Let me just look at my calendar... at 2:30. If we want to, say... I've got a call at 3 p.m.; maybe we could plan it for 4 p.m.
D
Well, I don't know, Sage. If we want to have the first one, we're probably gonna go over some product-related downstream stuff as well, yeah, I think.
C
Do we have anything else? Yeah, we have, if we want to actually discuss these topics. Yeah, so: host labels for the ssh orchestrator. That would be something similar to how Rook assigns labels to hosts, or to nodes.
C
The thinking is about adding that to the ssh orchestrator too, yeah: to provide a similar solution for a similar problem, namely, which hosts can containers be scheduled on. It's a bit simpler an approach than Kubernetes, of course, but it seems about right to go in that direction.
B
It's just the CLI commands that add and remove hosts; you can also set labels on those, that's it. So in the GUI you'd probably click a bunch and say: label these ones for OSDs, label them for MDSes, or add custom labels. My thinking was that the labeling would allow arbitrary label names, but we would have generic labels, so that if you just label a node for MDS, it'll use those nodes to schedule MDS daemons, and you don't have to specify host names all the time.
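A minimal sketch of that host-selection rule, assuming hypothetical names and structures rather than the actual ssh orchestrator code: generic daemon labels narrow the candidate hosts, with a fall-back to all hosts when nothing carries the label.

```python
# Hypothetical sketch: pick candidate hosts for a daemon type based on labels.
# If any host carries the generic label (e.g. "mds"), restrict scheduling to
# those hosts; otherwise fall back to every known host.

def candidate_hosts(hosts, daemon_type):
    """hosts maps hostname -> set of labels, e.g. {"node1": {"mds", "osd"}}."""
    labeled = [h for h, labels in hosts.items() if daemon_type in labels]
    return labeled or list(hosts)

# Only node1 and node3 are labeled "mds", so MDS daemons land there; nothing
# is labeled "rgw", so RGW daemons may go on any host.
inventory = {"node1": {"mds"}, "node2": set(), "node3": {"mds", "osd"}}
assert candidate_hosts(inventory, "mds") == ["node1", "node3"]
assert candidate_hosts(inventory, "rgw") == ["node1", "node2", "node3"]
```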
B
Yeah, so that way in the dashboard you would see the kube labels. I don't know, we might have to filter them or something, so that it's only the ones that are relevant to Rook, or maybe the ones in the rook namespace. I don't know how labels work in Kubernetes.
C
And I don't think that we should care about any other orchestrators right now, just the ssh orchestrator and Rook; the other ones, yeah, I don't care. Then there was some discussion about making the ssh orchestrator declarative.
C
Kind of how Rook works: you have a bunch of specifications, you use the orchestrator command line to modify a specification, and then the ssh orchestrator applies that specification to the real world. Basically how Rook works.
B
My sense is that we could definitely do that, and it kind of seems like we probably want to eventually, but it also seems like we don't have to go all the way down that road. Everything would still work, sort of, in a partway mode where, for example, we add all these labels, and the labels are only used when you're updating a service, when you issue the imperative command to update the MDSes or add an MDS or whatever it is. Making it declarative would basically mean storing the declarative state and then having something like an operator.
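To make the contrast concrete, here is a toy sketch of the declarative mode being described, with invented names: the CLI would only edit a stored specification, and an operator-style pass reconciles the cluster toward it.

```python
# Hypothetical sketch of a declarative orchestrator: the CLI edits a stored
# spec, and a reconcile loop drives the deployed state toward it.

desired = {"mds": 3}   # stored declarative state (e.g. kept in the mon store)
running = {"mds": 1}   # what is actually deployed right now

def reconcile():
    """One operator pass: start or stop daemons until reality matches the spec."""
    for daemon_type, want in desired.items():
        have = running.get(daemon_type, 0)
        if have != want:
            verb = "starting" if have < want else "stopping"
            print(f"{verb} {abs(want - have)} {daemon_type} daemon(s)")
            running[daemon_type] = want

# The imperative flavor instead applies a change in one shot ("update mds 3")
# with no stored state for a background loop to converge on.
reconcile()   # -> starting 2 mds daemon(s)
```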
B
I think so. Well, I think, as soon as we have host labels, the next step would be to create a scheduler that will just randomly assign a service to a node, either across all nodes or the ones that are labeled.
B
And then you can iterate on that, to then look at the services and try to pick the one with the fewest services; you just do some really, really simple scheduler. But that basically will mean that, for all the other things, you don't have to specify a placement specification like you do right now, where you have to pass host names all over the place. And so the experience will be more or less equivalent between when you're using Rook and when you're using ssh. Because users, for the most part, don't care where their manager runs; they just have their five nodes in the cluster.
B
They want to create some gateways; they don't care, they just want it to work. And if they do care, then they can add labels, and everything else will still work.
C
Get some more feedback from him.
B
Yeah, I mean, I wouldn't suggest us releasing with the random scheduler, but we could merge it, like, tomorrow, and then everything would work to a first approximation until we have something slightly smarter, I guess, right? The fewest-services one is also pretty easy: just look at the inventory, count services, and pick the least-loaded one.
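A minimal sketch of those two scheduler iterations, random placement first and then "fewest services wins", over whatever candidate hosts the labels allow; purely illustrative, with made-up names.

```python
import random

# Hypothetical sketch of the two trivial schedulers discussed: random
# placement as a first cut, then "pick the host running the fewest services".

def schedule_random(candidates):
    return random.choice(candidates)

def schedule_fewest(candidates, service_count):
    """service_count maps hostname -> daemons already placed on that host."""
    return min(candidates, key=lambda h: service_count.get(h, 0))

hosts = ["node1", "node2", "node3"]
load = {"node1": 4, "node2": 1, "node3": 2}
print(schedule_random(hosts))        # any of the three hosts
print(schedule_fewest(hosts, load))  # -> node2, the least-loaded host
```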
B
I think the real question is: right now we're not really doing anything around setting limits on any of these containers. We don't restrict how much CPU they get or anything like that, and that's really where the scheduler probably needs to get a little bit intelligent. How many cores does this need? How much memory does this need? Do I actually have enough memory on this host?
B
Yeah, so I don't know if we want to surface that through the orchestrator API, so you can say, like, "I want to give my RADOS gateways lots of memory", and that'll work on both. But that is sort of the prerequisite before we can make a smart scheduler that takes those things into consideration, I think.
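Extending the same toy scheduler with the resource awareness being asked for: a daemon's declared memory request filters out hosts that cannot fit it. Again a hypothetical sketch, not the orchestrator API.

```python
# Hypothetical sketch: a placement spec carries a memory request, and the
# scheduler only considers hosts with enough free memory to satisfy it.

def schedule_with_memory(free_mem_mb, request_mb):
    """free_mem_mb maps hostname -> free memory in MB on that host."""
    viable = [h for h, free in free_mem_mb.items() if free >= request_mb]
    if not viable:
        raise RuntimeError("no host has enough free memory")
    # Among viable hosts, prefer the one with the most headroom.
    return max(viable, key=lambda h: free_mem_mb[h])

print(schedule_with_memory({"node1": 2048, "node2": 16384}, request_mb=4096))
# -> node2 (node1 is filtered out: 2048 MB < 4096 MB requested)
```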
E
So it kind of seems like we're creating something like a lightweight Kubernetes. And like Kubernetes...
B
I mean, it's not like we saw Kubernetes and said we wanted to re-implement it. These are problems that we had before Kubernetes came along, right? Users didn't know how many gateways to put on a server; they were just doing all this stuff manually, and so the user experience was shitty. So I think this is something that belongs in the orchestration layer, and if it's going to be there, then we have to have a simple, simple way to do it.
E
And I think also we have to make sure that we are pointing people in the right direction, so when they start requesting features that...
B
Okay, yep. One of my questions is... I've just sort of been assuming that once your, whatever, continuation-completion thing merges, it'll be easier to expand the size of the worker queue in the ssh orchestrator, so we can have lots of stuff going on in the background. Is that true?
C
Right now that parallelism is hardcoded to one in the ssh orchestrator: when creating the thread pool, there is a hardcoded one to make sure only one thread at a time is working, so we have everything serialized right now. It would be kind of trivial to make it work in parallel, but be careful. It would still mean that we have to use a different approach, like starting an operation on the host, closing the connection, and then querying the state of the operations later on, but that's a lot of complexity. I mean, these are all greenlet threads.
C
Oh really? Okay. Real CPUs, right?
C
Yeah, sharing the same GIL. But that's not a big problem: if you're waiting for the network to reply, the GIL is not held. But we're using the thread pool, which is optimized for parallelizing CPU-intensive tasks, which is only possible with a kernel thread and not with green threads.
C
It will work, right; it will be kind of slow, but it will work, sure. If we want to use green threads, then we should rethink that soon.
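A minimal sketch of the serialization being described: a thread pool whose worker count is pinned to one, versus widening it for I/O-bound ssh work, during which CPython's GIL is released while a thread blocks on the network. The constant is the hypothetical knob under discussion.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch: today's behavior is max_workers=1, so remote
# operations run strictly one after another. Raising it lets I/O-bound work
# (like waiting on an ssh round trip) overlap, since the GIL is released
# while a thread blocks on the network.

MAX_WORKERS = 1  # hardcoded to 1 today; the discussion is about raising this

def remote_op(host):
    time.sleep(1)  # stand-in for an ssh round trip
    return f"{host}: done"

with ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
    results = list(pool.map(remote_op, ["node1", "node2", "node3"]))
print(results)  # ~3s serialized with one worker; ~1s with three workers
```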
B
Yep. Well, it's that, and, like, service ls: if you basically have to go check the services on all the nodes, or device ls, those also will be super slow.
C
Oh, or we use a completely different transport mechanism than ssh, like... I...
C
But then, here we are; then we are really doing Salt all over again.
E
I think ssh is not bad for our purpose. I mean, for day-one operations, certainly, this can take quite long, but if we have some kind of optimization, I think we can get it down to an acceptable amount of time. And for day two there are not that many operations, right? So, yeah, some of them are long-running, but they're long-running anyway, and you don't shrink that down by using a different transport.
B
It's, like, everything except for the openSUSE one, but you should be able to run the tests with the other ones. You only need the two CentOS ones and the Bionic one, and then you can run the QA suite.
B
So, upgrades. All right, so the check one: that adds one function to the orchestrator API that basically asks the orchestrator to translate container names to hashes. So basically, in the ssh case, it just picks a random node and runs a new ceph-daemon command called pull, which will pull the latest image and tell you what the SHA for it is, and then it just compares what images we want to be running against whatever is actually running.
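A minimal sketch of that check as described: resolve the target image name to a digest by pulling it on one node, then diff that against what each daemon is running. The function and field names are approximations, not the real implementation.

```python
# Hypothetical sketch of "upgrade check": translate the desired image name
# into a digest (by pulling it on one node), then report the daemons whose
# running digest differs.

def pull_and_resolve(node, image):
    # Stand-in for running the ceph-daemon "pull" command on the node and
    # reading back the digest of the image it fetched.
    return "sha256:abc123"

def upgrade_check(target_image, daemons):
    """daemons maps daemon name -> digest of the image it currently runs."""
    want = pull_and_resolve("node1", target_image)
    return [name for name, have in daemons.items() if have != want]

running = {"mon.a": "sha256:abc123", "osd.0": "sha256:old999"}
print(upgrade_check("ceph/ceph:v15", running))  # -> ['osd.0']
```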
B
I should probably refresh my memory and look at this thing again. So this is basically the upgrade check command, that's what this does, and then the next step, I think, is to do the actual, like, minor-release upgrade. My question...
B
...there is how to sort of combine this with Rook's management of upgrades, like how much of it we want to do. There are two things, I guess. For things like upgrading stateless services, Rook is doing it on a per-deployment basis: it basically tells Kubernetes that it wants this service, daemon set, deployment, whatever, to be a different image, and then Kubernetes restarts them one by one. Right?
B
But so Rook goes on a deployment basis, which is basically going to be, like, five RGW daemons, for example; it tells Kubernetes to upgrade all five, and Kubernetes will go across the... yeah.
B
I think the main question for me is how much of the upgrade logic should live in the orchestrator CLI, sort of above the orchestration abstraction, and how much of it should be below. The way that Kubernetes does it makes me think that the stuff above the layer should be like: upgrade all the monitors first, make sure everything's okay, then upgrade all the OSDs, make sure... whatever, on a per-service-type basis.
B
Yes, but that's the simple logic, right? It'll work fine for point releases, but when we do major upgrades it won't, because we want to have more stuff in there. And we need to implement that same logic for ssh anyway.
B
So I think the question is whether we sort of shift some of that back into the manager, or we sort of have two parallel things. Because, for example, for, like, Octopus to Pacific, there's going to be a bunch of steps: after you do the monitors, you probably fiddle with some option; after the OSDs, you change the require-osd-release; there's all this other stuff that's sprinkled in there that varies per release.
B
Because what I was planning on doing for the manager module would be that you'd run the check just to see what it wants to do, or what things should happen, and then you would basically say "upgrade start" or something like that, and it would kick off a background process that would do it. Basically, for each daemon it would update the ceph config for that one daemon that's going to restart, reprovision it, and so on, and then it would work its way through. And it would have some basic state that says that the upgrade target...
B
...is this, and it's in progress, so that if the manager restarts it can pick up where it left off. And then you would also have a CLI command like "upgrade pause", "upgrade cancel" or something like that, "upgrade resume".
B
So if it's in the middle of upgrading and restarting all this stuff and you want to stop, you just want to take a break, you can do that. And it could also have, like, a progress event that shows that it's, you know, 70% of the way done with the upgrade. That's sort of what I was thinking. But in Rook the operator sort of owns all that, and so you don't have things like pause, right?
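A minimal sketch of that background upgrade loop: a persisted target plus progress record so a restarted manager can resume, daemons walked in service-type order, and a pause flag honored between steps. All names are illustrative, not the actual manager module.

```python
# Hypothetical sketch of the upgrade loop: persist the target and progress,
# walk daemons in service-type order (mons first, then OSDs, ...), and honor
# a pause flag between steps.

state = {
    "target": "ceph/ceph:v15",
    "order": ["mon.a", "mon.b", "osd.0", "osd.1"],
    "done": [],        # persisted, so a restarted manager resumes here
    "paused": False,   # flipped by an "upgrade pause" CLI command
}

def upgrade_one(daemon, image):
    print(f"point {daemon} at {image}, reprovision and restart it")

def upgrade_step():
    """Run one step; return False when paused or finished."""
    if state["paused"]:
        return False
    remaining = [d for d in state["order"] if d not in state["done"]]
    if not remaining:
        return False
    upgrade_one(remaining[0], state["target"])
    state["done"].append(remaining[0])
    print(f"progress: {len(state['done'])}/{len(state['order'])}")
    return True

while upgrade_step():
    pass
```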
B
So if this stuff was moved into the manager module, then basically, I think, the interface would have to be a little bit richer, so that we could tell Rook on a per-daemon basis, like a per-OSD basis, a per-monitor basis: upgrade this monitor, or re-provision this monitor with this image.
B
I mean, maybe it could. So one option is: there's basically a new config option called container_image that says what container image you want to be running for the cluster, and Rook totally ignores that, right? It uses the CRD property. But it could use it, because it was added as a config option so that you could have, like, the monitors running one image and the OSDs running a different one, and osd.56 running a different one. So you could sort of have it in a granular way.
B
And so if Rook just looked at that config option for whatever daemon it was, then it could use that instead of the global property, right? And in that case, the upgrade stuff... Rook almost wouldn't have to do anything, right? The manager would change the option, and it would probably just have one call that says, like, "reconsider".
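A minimal sketch of the per-daemon image resolution being described, mirroring how Ceph config sections layer: the most specific container_image setting wins (daemon over daemon type over global). The lookup itself is illustrative.

```python
# Hypothetical sketch: resolve the effective container image for a daemon
# from layered config, most specific section first (osd.56 > osd > global).

config = {
    "global": {"container_image": "ceph/ceph:v15.2.0"},
    "osd":    {"container_image": "ceph/ceph:v15.2.1"},
    "osd.56": {"container_image": "ceph/ceph:v15.2.2"},
}

def effective_image(daemon):
    daemon_type = daemon.split(".")[0]
    for section in (daemon, daemon_type, "global"):
        value = config.get(section, {}).get("container_image")
        if value:
            return value
    raise KeyError("no container_image configured")

print(effective_image("osd.56"))  # -> ceph/ceph:v15.2.2 (daemon-specific)
print(effective_image("osd.3"))   # -> ceph/ceph:v15.2.1 (type-level)
print(effective_image("mon.a"))   # -> ceph/ceph:v15.2.0 (global default)
```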
B
Okay, well, that's what I'm currently thinking. But I think we should start with the check and get that in, and then I can prototype something with this and just see if it looks like it'll work. Well, this one might actually make sense to also wait on until we have, like, a release candidate or something a little bit further along, because what I was also thinking here is making it user-friendly. So instead of having the config option be, like, the full container name...
B
...it could, like, leave off the tag, for example, or something like that. And so the command you'd say would basically be "upgrade to, you know, 14.2.6", and then we would construct the appropriate label; we figure out what the appropriate label is for that.
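A tiny sketch of that convenience mapping: the user supplies a bare version and the module expands it into a full image reference using a default repository. The repository name and tag scheme here are assumptions.

```python
# Hypothetical sketch of the user-friendly upgrade target: expand a bare
# version like "14.2.6" into a full container image reference.

DEFAULT_REPO = "docker.io/ceph/ceph"  # assumed default, presumably configurable

def image_for(version_or_image):
    if "/" in version_or_image or ":" in version_or_image:
        return version_or_image  # already a full image reference
    return f"{DEFAULT_REPO}:v{version_or_image}"

print(image_for("14.2.6"))               # -> docker.io/ceph/ceph:v14.2.6
print(image_for("quay.io/me/ceph:dev"))  # passed through untouched
```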
B
Yeah, or something, right, exactly, or whatever, who knows. But hey, I don't know; I probably broke that one some more, not quite sure.
B
We don't think about all this stuff. And so, okay, that's now installed in the base Ceph container image, with a new-enough version that works for however many vendors have their support in that version. So we'll update that periodically, but yeah.
A
Yeah, that was my question about that, yeah. Last week at KubeCon I was talking to Seb about the upgrades and whether or not daemons need to be restarted during the Rook upgrade, and so the only reason the manager pod is being restarted is because of that environment variable we set on the pod spec. And I didn't see anywhere in the Ceph code base that consumes that environment variable; I remember adding it a few months ago, but I can't remember.
C
The Rook version itself doesn't really matter; it's the Rook API version which is more important.
A
And then, yeah, on the upgrade topic: if you're doing a Rook upgrade without updating the Ceph image, the only daemons that were restarting were the manager pod, the manager, because of that, and then the OSDs. The OSDs are going to need a lot more work to avoid the restart, because basically we need to activate the LV when we start the OSD, before we start the ceph-osd process, and we've got the Rook image in the OSD pod spec.
B
I mean, the way that, I guess, the ssh orchestrator is doing it is that it just runs that ceph-volume activate; that's, like, the one command that runs inside the container, in the unit that ceph-daemon sets up or whatever. But maybe the Rook pod spec can be made to do the same thing: just run the activate command, without running Rook and asking Rook to do the same thing.
B
That's what I thought too, and I actually made a patch to ceph-volume to have, like, a --no-tmpfs flag that would write actual files. But I found that if the /var/lib/ceph/osd/ceph-<id> directory was already a mount point, which it just happened to be in the ceph-daemon case, because that was being passed in from the host as a mount point, then we don't mount a tmpfs; we don't do the tmpfs step, it just skips it. Or maybe...
B
No, I just... well, this worked, but it's basically just that if that directory already exists, I think it'll just work. You could probably do a test on some host where, if you do an activate and it does a tmpfs, you tear it all down, then just mkdir that directory and run it again.
A
Yeah, someone came to the Rook booth and talked to Seb. I wasn't there at the time, but they said: yeah, there's this one issue where we're not sure we can deploy Rook, because of this one thing, and it's because the OSDs restarted during a Rook upgrade. Like, why do that? Yeah, okay, we don't want to do that; fix it.
B
Last thing: right now, when you do a service discovery, it's doing an exec inside every container just to run ceph -v to get the Ceph version. There are two problems: one, it's slow; the second problem is that on my latest Fedora, podman exec seems to be broken.
B
I guess that's my problem and not anybody else's, but I really want there just to be a label on the container that has the Ceph version that is in that container. And for some reason, because of the way the makefile works for generating the Dockerfiles and whatever, it's just awkward to do that. So it's not an easy change, but I think we really should do it.
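A minimal sketch of the label-based alternative: bake the version into the image as a label at build time, then read it without exec'ing into a running container. The label key and the shape of the podman inspect output here are assumptions about how it could be wired up.

```python
import json
import subprocess

# Hypothetical sketch: instead of `podman exec <ctr> ceph -v`, read a version
# label baked into the image at build time, e.g. a generated Dockerfile line:
#   LABEL ceph.version=14.2.6

def ceph_version_from_label(image):
    out = subprocess.run(
        ["podman", "image", "inspect", image],
        check=True, capture_output=True, text=True,
    ).stdout
    info = json.loads(out)[0]
    labels = info.get("Labels") or info.get("Config", {}).get("Labels") or {}
    return labels["ceph.version"]  # assumed label key

# No running container (and no exec) is needed to answer the question.
print(ceph_version_from_label("ceph/ceph:v14.2.6"))
```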
B
I don't know who... I mean, I don't know who we can bug to do that. Sebastian, I don't know if there's anybody over there who wants to, or might be able to, tackle that one.
B
I think the last big refactor on ceph-container was done by Blaine about a year ago. Okay, so he might have, like, the background, but I imagine he's super busy, so I don't necessarily...
C
I'm going to talk to Christopher. Christopher basically took over the whole container part from Blaine, so I guess it's going to be Christopher. Okay.
D
Sage and Travis, I did send out a meeting for four, if you guys want to join. Travis, I just added you, you know, in case you wanted to. And hopefully Paul can join; it's just predicated on that. I'll let you guys know; I'll cancel it if he can't. I'll find out at 2:30.