From YouTube: 2020-01-20 :: Ceph Orchestration Meeting
I just replied to her email from like an hour ago. That's a cephadm issue with creating OSDs: she was setting this up on some machines and spent like forever trying to figure out why it wasn't working. It turns out she was passing the disk by ID or whatever, and the check only passes right now if you use the canonical device name, like /dev/sdb or whatever. I think that's the cause, and I don't think that's a check we want to do, so I'd just remove that check.
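A minimal sketch of how that could be relaxed instead of removed outright: resolve /dev/disk/by-id/... paths to the canonical device name before validating. The helper below is illustrative, not the actual cephadm code.

```python
import os

def canonical_device_path(dev: str) -> str:
    """Resolve udev symlinks such as /dev/disk/by-id/... to the canonical
    kernel device name (e.g. /dev/sdb) before any validation is applied."""
    return os.path.realpath(dev)

# Example: a by-id path and the plain device name would then compare equal,
# so passing the disk by ID would no longer fail the check.
# canonical_device_path('/dev/disk/by-id/ata-SOME_DISK') == '/dev/sdb'
```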
I thought so too. The next thing I was going to work on, now that the upgrade stuff is in, is the thing that basically tries to keep the background set of connections all open with the serve thread, periodically pinging all the hosts, and if it hits an error, it resets it and opens a new connection. I figure we try doing that change, see how it goes, and then go from there. Wherever we end up, we should be able to have a... so.
I still think we need it to do a simple connection check for now; we can always refactor that later if we switch out the library. The main thing I want to do is basically have the serve thread just loop, periodically probe all the hosts in the background, and if a host goes down, raise a health warning that it's unreachable, and clear it when it comes back, that sort of thing.
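A rough sketch of that serve loop, assuming hypothetical per-host connection helpers rather than the actual orchestrator or SSH library API:

```python
import time

PROBE_INTERVAL = 60  # seconds between probe rounds (illustrative value)


class HostConnection:
    """Stand-in for a per-host SSH connection; ping() represents a
    lightweight remote check such as running a trivial command."""

    def __init__(self, host):
        self.host = host

    def ping(self):
        pass  # would run a cheap remote command and raise on failure

    def close(self):
        pass


def serve_loop(hosts):
    """Keep one open connection per host, probe them all periodically,
    reset the connection on error, and track unreachable hosts (clearing
    the flag when a host answers again)."""
    connections = {h: HostConnection(h) for h in hosts}
    unreachable = set()
    while True:
        for host in hosts:
            try:
                connections[host].ping()
                unreachable.discard(host)        # host is (back) online
            except Exception:
                connections[host].close()
                connections[host] = HostConnection(host)  # open a fresh connection
                unreachable.add(host)            # would raise a health warning here
        time.sleep(PROBE_INTERVAL)
```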
Hot-plug mode: if you don't use it, then nothing's being done; I think it's an hour by default. With it, we monitor udev, look at any events, catch them, and then we auto-add the disk. But that only happens if you use that hot-plug mode; otherwise it does nothing.
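A sketch of what that hot-plug path could look like with pyudev; the auto_add callback and wiring it to OSD creation are assumptions, not the actual implementation:

```python
import pyudev

def watch_for_new_disks(auto_add):
    """Listen for block-device udev 'add' events and hand each newly
    attached whole disk to the (hypothetical) auto_add callback."""
    context = pyudev.Context()
    monitor = pyudev.Monitor.from_netlink(context)
    monitor.filter_by('block')
    monitor.start()
    for device in iter(monitor.poll, None):
        if device.action == 'add' and device.get('DEVTYPE') == 'disk':
            auto_add(device.device_node)  # e.g. create an OSD on /dev/sdX
```

Without the hot-plug mode, new devices would only be noticed by the periodic refresh mentioned above.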
I think I was imagining like 10 minutes in my head, but I mean, ideally those updates will be really lightweight. The service ls one is really quick now, and I'm hoping, I haven't remeasured, but I think the ceph-volume inventory should be much faster now that we have Python 3 instead of Python 2. I don't know that it eliminates it entirely; I haven't tested or checked it yet. Ok, I'll take a look, yeah.
I think we're also getting stats from the Ceph daemons themselves, right? We could somehow rely on the heartbeats that are currently happening instead of doing extra checks; that would be easier, I guess, because we have that in a distributed fashion already, so it doesn't really need to be implemented elsewhere, like an extra daemon or extra connection checks. Especially SSH becomes like a bottleneck when you scale, if you have many connections open, if you have a lot of daemons. So perhaps we should just rely on the OSD health check, or extend it somehow.
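A rough sketch of that idea: derive host reachability from daemon state the cluster already tracks through heartbeats, rather than opening additional SSH connections. The input shapes here are assumptions for illustration, not the actual mgr module API.

```python
def hosts_down_from_osd_state(osd_up, host_of_osd):
    """Infer unreachable hosts from OSD up/down state that the monitors
    already maintain via heartbeats, instead of separate SSH probes.

    osd_up:      dict mapping osd_id -> True/False (up or down)
    host_of_osd: dict mapping osd_id -> hostname
    """
    per_host = {}
    for osd_id, up in osd_up.items():
        per_host.setdefault(host_of_osd[osd_id], []).append(up)
    # Treat a host as suspect only if every one of its OSDs is down.
    return {host for host, ups in per_host.items() if not any(ups)}


# Example: OSDs 0 and 1 on host-a are down, OSD 2 on host-b is up.
# hosts_down_from_osd_state({0: False, 1: False, 2: True},
#                           {0: 'host-a', 1: 'host-a', 2: 'host-b'})
# -> {'host-a'}
```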
For the crash daemon: well, it's not really a daemon, it's more like a script at first, and it is launched by systemd. So if something fails on the host... If we want to make crash a little bit smarter, then it should be a real daemon and part of the service map as well, so it will appear listed in the service list of Ceph, not the service list that comes with the manager module, I guess. I think there are options.
What's going to happen when hosts are down? Because there are a bunch of functions that rely on the service inventory. Like, if you try to remove a service and the host is down, would it just return busy, or an error, or how would that work? Or what if you wanted to do an upgrade and a host is down? Would it just say "I can't upgrade right now because the host is down", or would it skip that host, or would it use the stale inventory?
Yeah, that works. It's a little bit funny too, because what happens if you remove it from the inventory, but the host is still there, and it restarts and the services come up, and so you have these running services, but the host isn't in the inventory? I wonder if cephadm should also detect daemons, running daemons, that are on hosts that aren't in your inventory and warn about this. I'll log a ticket for that, yeah.
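A small sketch of the warning being proposed; the input shapes and names are illustrative only:

```python
def stray_daemon_warnings(running_daemons, inventory_hosts):
    """Flag daemons that are still reporting in but run on hosts that are
    no longer part of the orchestrator's host inventory.

    running_daemons: iterable of (daemon_name, hostname) pairs
    inventory_hosts: set of hostnames currently managed
    """
    return [
        f"daemon {name} is running on {host}, which is not in the inventory"
        for name, host in running_daemons
        if host not in inventory_hosts
    ]
```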
Right now I have a different, I have another alternative proposal, and that is that life is better if cephadm is just installed on the host, because you can log in to that host and run cephadm shell. Except that it's hard to keep in sync with what you're running. But I think we really need a tool somewhere that will configure package repositories so that you can install packages.
Right now ceph-deploy is filling that gap, but ceph-deploy is deprecated and also doesn't work on CentOS 8 and newer distros, and anyway it's just not the right thing. That functionality has to go somewhere, though. I think we should just put it into cephadm, so it can detect what the distro is and what tooling it's using, and know how to configure a repository to point to upstream, or to, you know, shaman builds for dev builds and so on. And if we do that, then it would be a small step to make...
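A sketch of the distro-detection side of that. Reading /etc/os-release is the standard mechanism, but the repository URL layout and function names below are only illustrative, not from cephadm:

```python
def detect_distro(path='/etc/os-release'):
    """Parse /etc/os-release into a dict (ID, VERSION_ID, ...) so the tool
    can decide which package manager and repo layout applies."""
    info = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if '=' in line and not line.startswith('#'):
                key, _, value = line.partition('=')
                info[key] = value.strip('"')
    return info


def repo_baseurl(distro, release='octopus', dev_build=None):
    """Return a repository base URL: an upstream release repo by default,
    or a dev/shaman build if one is requested."""
    if dev_build:
        # shaman/chacra dev-build repos would be resolved here; the exact
        # URL layout is deliberately left out of this sketch
        raise NotImplementedError('dev-build repos not sketched here')
    return f'https://download.ceph.com/rpm-{release}/el{distro.get("VERSION_ID")}/'
```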
So personally, if we want to go that route, I would definitely go with pip, because if we go with package managers, then first of all we have to implement at least three different ways of installing a package, and they all have slightly different syntaxes. We also have to maintain three different ways of building a package, and of distributing a package with the repositories, keeping the repositories in sync, and all of that stuff that comes with them. Whereas with pip...
...out of the tree, right? So I think it's a little bit strange to have cephadm built as a package as part of the repo, with cephadm being a pure Python package. I mean, if that's the case, then we should probably extract it from Ceph so that it will live on PyPI. Otherwise I don't see the point of having an entry there, but...
I always get confused by pip, because if I run it as root, then it goes in someplace, and half the time it doesn't work. I can't tell if it's putting it in the root user's directory or in the system one; it just messes up, and then I don't know how to uninstall it. Maybe that's just because I don't know pip very well, but I always find it very confusing.
...of a separate stream. I mean, they're also available upstream, but it's just that this getting-started thing is way more complicated, because you have to configure the repositories and then you have to install the package. It's a lot more work, and the automated upgrade stuff doesn't... it's harder, and...
And also, couldn't we just maintain like a matrix that says you are currently running this and that version, and you don't have the proper version of cephadm installed, and then just bail out and ask the user to download the right version? And/or also tell the orchestrator that, right? Yeah, and also cephadm could just check, before it executes the command, and make sure it is the right version.
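A minimal sketch of that pre-execution check; the function name and the way the two versions are obtained are assumptions for illustration:

```python
def ensure_compatible_cephadm(installed_version, cluster_version):
    """Before executing a command, compare the locally installed cephadm
    version with the version the cluster expects, and bail out with a clear
    message instead of running with a mismatched tool."""
    if installed_version != cluster_version:
        raise SystemExit(
            f"installed cephadm {installed_version} does not match the "
            f"cluster's expected version {cluster_version}; please install "
            f"the matching version and retry"
        )
```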
I think my preference would be to stick with the root mode, so it still does the thing where it SSHes over; it's just that on your bootstrap node, when you install it, it would be via pip instead of curl, and the other nodes wouldn't have it installed. I think in the future we could add a thing that makes cephadm automatically install cephadm on every node that's part of the cluster. Yeah, that's it. We need to be careful, because we have to keep it in sync, and that's a little different, yeah.
We are uploading it to PyPI, and that problem is solved, and the packages can be zipped. That might be like...