From YouTube: 2019-02-11 :: Ceph Orchestration Meeting
D
And this time around, too, we also decided that there should be a lot more orchestration as part of this as well. It was, you know, a lightly covered topic last time, and that's why we're opening up the date, you know, the Doodle here, and making sure that we get a good orchestration presence there as well. Yes.
A
When calling device ls, it was not yet implemented in the interface, but it does make sense to have that, especially for the SSH orchestrator. Travis, do you have an idea if it is possible to trigger a refresh of the device inventory in Rook, and if it is feasible to actually call it?
G
What is the... I think that probably it is good to have this cache layer at the orchestrator level. Most of the operations are returning things, but we depend just about completely on completion objects, so maybe implementing a cache, a cache for the completions, could be useful for everybody.
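The completion cache being proposed here could look something like the sketch below. This is a minimal illustration, not the actual ceph-mgr orchestrator API: the class and key names are hypothetical. The idea is simply to remember the result of a finished completion so that repeated calls (for example, a device inventory query) do not have to wait on the backend again.

```python
import time

class CompletionCache:
    """Hypothetical in-memory cache for completion results (illustrative only)."""

    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (timestamp, cached value)

    def get(self, key):
        """Return the cached value for key, or None if missing or expired."""
        entry = self._store.get(key)
        if entry is None:
            return None
        ts, value = entry
        if time.time() - ts > self.ttl:
            del self._store[key]  # entry expired; drop it
            return None
        return value

    def put(self, key, value):
        """Store the result of a finished completion under key."""
        self._store[key] = (time.time(), value)


cache = CompletionCache(ttl_seconds=60)
cache.put("device_inventory:node1", ["/dev/sda", "/dev/sdb"])
print(cache.get("device_inventory:node1"))  # → ['/dev/sda', '/dev/sdb']
```

As discussed later in the meeting, a plain in-memory cache like this would only make sense for operations where the backend does not already cache on its own.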
G
I think that probably not all the operations will need a cache. Okay, but I think that in certain cases we should implement a cache simply storing the data from the completion objects. Okay, I don't know, I think that a memory cache will be enough. I don't know, but it could be a good solution, I think, in the short term, and if we see that a more sophisticated cache is needed, then we can go forward with that solution.
G
So, Matt, I think that in this case the cache could be a service that is useful for every orchestrator; not for every operation, but probably in each of the orchestrators we can use this service in different operations. So I think that putting in the orchestrator a cache that everybody can use with completion objects is a good idea, in my opinion.
A
Caching is a bad example, because Rook already caches, DeepSea already caches, and it does not make sense to cache the inventory a second time in Rook, in the Rook orchestrator and the DeepSea orchestrator. Yeah, that's true, but in general, if it makes sense, if there is some common functionality, yeah, sure.
E
Me, I just tossed that on there; I just wanted to highlight that the inventory filter abstraction in the orchestrator has labels. But as of last week, we seem to be removing labels from the generic orchestrator interface. So I'm just wondering if those have a different purpose, if we missed something.
G
No, it's okay, it's okay! But I think that, for example, when we are creating new hosts or we are adding new OSDs, what we decided is to automatically set a standard label to mark that on this host we have this device, this service, say a shared service. Okay, and the same for other services; that is what we decided in the last meeting, I think.
A
The problem with removing the add host functionality and the remove host functionality is that we then no longer can add hosts to get the inventory of. So we need to have a device ls even for nodes that don't have any OSDs running on them, and so I would be in favor of keeping add and remove host for this use case. But why...
F
The SSH one, it's the backend implementation, so the SSH implementation knows which hosts it is managing and queries the hosts for the devices and then delivers it to the interface, to the orchestrator interface. The same way as DeepSea knows which minions it has, ceph-ansible knows how many machines it has, and Rook knows what resources are available to the cluster, and the orchestrator interface just asks: what are the devices I have?
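The pattern described above can be sketched roughly as follows. Every class, return value, and device path here is illustrative, not the real ceph-mgr orchestrator interface: the point is only that the generic interface asks one question, and each backend already knows its own hosts.

```python
from abc import ABC, abstractmethod

class Orchestrator(ABC):
    """Illustrative generic orchestrator interface (names are hypothetical)."""

    @abstractmethod
    def get_inventory(self):
        """Return {hostname: [device paths]} for every host the backend manages."""


class SSHOrchestrator(Orchestrator):
    """The SSH backend tracks its managed hosts itself."""

    def __init__(self, managed_hosts):
        self.managed_hosts = managed_hosts

    def get_inventory(self):
        # In reality this backend would SSH into each host and scan its block devices.
        return {host: ["/dev/sda"] for host in self.managed_hosts}


class RookOrchestrator(Orchestrator):
    """Rook would ask the Kubernetes API what resources the cluster has."""

    def get_inventory(self):
        # Placeholder data standing in for the cluster's discovered devices.
        return {"node-0": ["/dev/vdb", "/dev/vdc"]}


# The caller does not care which backend answers; it just asks for devices.
for backend in (SSHOrchestrator(["host1", "host2"]), RookOrchestrator()):
    print(backend.get_inventory())
```

A DeepSea or ceph-ansible backend would slot into the same shape, answering from its Salt minions or Ansible inventory respectively.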
F
That's why I don't see any reason to have to manage hosts, unless we want, in the future, and I think this just applies maybe more to the bare metal environment, where we want to use the orchestrator to also add machines to our cluster, in the sense that the orchestrator will tell Salt to add a minion, it will tell ceph-ansible to add a machine, and it will tell the SSH backend to add a new host, but...
G
I agree with you, Ricardo. I think that at the orchestrator level, in the orchestrator facade, probably we will want these two functions, but in any case, this makes the implementation more difficult in all the methods in the orchestrator, because, for example, in order to remove one OSD you should check in the host that there is nothing apart from another OSD. Do you understand? Each time that we are going to do an operation for removing one service?
A
You're actually doing host management within the Ansible orchestrator: you're adding hosts. So you want to add hosts if there is no OSD on that specific host, and you want to remove hosts if you're removing the last OSD. That's very much similar to the SSH orchestrator, which also needs to do host management internally. If it is two orchestrators that need host management, then that would be a case for keeping it in the API.
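The implicit add-on-first-OSD, remove-on-last-OSD behavior being debated here can be sketched with a small hypothetical helper. This is not real ceph-mgr code; the function and the map shape are made up for illustration.

```python
def hosts_after_osd_change(osd_map, host, action):
    """Hypothetical sketch of implicit host management.

    osd_map: {hostname: number of OSDs on that host}
    action:  "add_osd" or "remove_osd"
    Returns the updated map; hosts with zero OSDs are dropped,
    mirroring the idea that the last OSD's removal removes the host.
    """
    count = osd_map.get(host, 0)
    if action == "add_osd":
        # The first OSD on a host implicitly "adds" the host.
        osd_map[host] = count + 1
    elif action == "remove_osd" and count > 0:
        if count == 1:
            # Removing the last OSD implicitly removes the host.
            del osd_map[host]
        else:
            osd_map[host] = count - 1
    return osd_map


inventory = hosts_after_osd_change({}, "node1", "add_osd")       # {'node1': 1}
inventory = hosts_after_osd_change(inventory, "node1", "remove_osd")  # {} (host dropped)
print(inventory)
```

The counter-argument from earlier in the discussion still applies: without explicit add/remove host, a host with zero OSDs never appears in the map at all, so there is nothing to run device ls against.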
B
So we know one issue got a PR, okay, we're good there. And then another, not necessarily related to the orchestrator, but Rook is failing: as of testing on Friday, Rook is not able to orchestrate Nautilus right now, it's not connecting to the mons, so I'm not sure, or wondering, if it's related to the messenger v2 changes. Sebastian Han, did you find anything the last couple of days? Was that your messenger v2 work? If you can get that working again...
H
I guess after Ben's PR, yeah, I might have to... I guess I did something wrong doing the rebase, or it didn't work, things changed, so the PR is not working anymore, and I haven't had much time to look into it. But is it something we see in the Rook CI at the moment, like every single job is failing to deploy Nautilus or so?
B
The PR hasn't merged yet, and so then I was testing, just, you know, without the integration tests, just trying manual tests to try out Nautilus, and without your messenger v2 changes in Rook. A couple weeks ago, at least, it was working again after I fixed the port to be 6789. Mm-hmm.
H
Last week, during my work on building the CSV files for the OpenShift installer, I came across an Ansible operator. You know that now everything is deployed through the new OpenShift installer. So there is a monolithic install where you have all the basic components, but anything that you want to add after this has to be an operator that will deploy one thing, and it seems that Ansible has something for this too, which actually relies on...
H
...the ansible-runner-service. But this operator runs as a pod in a Kubernetes environment, so if this is really what I think it is, then maybe we should start looking at this a little bit more in detail, because the deploying stuff we do with ceph-ansible can also be done through the Ansible operator in Kubernetes, instead of having our standalone installation of the ansible-runner-service.
H
Good, thanks, welcome. And that's it for me. Yeah, I guess this could be... my point is that if this is really what I think it is, then this might help on reducing the work we have to do to mimic what Rook does, if we go with this, because if the Ansible orchestrator is part of Kubernetes already, then there are certain things that we will get inside Kubernetes, like inventories and stuff like that.