From YouTube: 2019-08-05 :: Ceph Orchestration Meeting
B
Yep, just specify the persistent volume claim template and the mons will start up. And then, related to that, there's the OSDs on PVs in a separate PR. Hoping to have that done this week, and once that's done, the only thing left running on the host path would be collecting logs and crash dumps. I think we'll save that for a separate discussion about the right way to collect those, whether we create separate PVs or whatever.
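For reference, a mon backed by a PVC in Rook's CephCluster CRD looks roughly like the sketch below; the storage class name and size are placeholders, not values from the discussion:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  mon:
    count: 3
    # When a volumeClaimTemplate is given, each mon stores its data
    # on a PVC instead of a hostPath directory on the node.
    volumeClaimTemplate:
      spec:
        storageClassName: standard   # placeholder storage class
        resources:
          requests:
            storage: 10Gi
```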
B
As far as how we expose this in the orchestrator module, maybe that would be worth thinking through, since this really only applies to Kubernetes environments. It's not a general pattern for the other orchestrators; it's specific to Kubernetes, so I think that's a thing to think about.
B
Do we even still have the option to use directories? There's nothing exposed that would allow you to build PVs if you're in a Kubernetes environment; it'd have to be some sort of other setting which comes down from the dashboard that says: oh, I see that this is a Kubernetes environment, allow the user to select, you know, this storage class, so we know how to generate the PVs from it.
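A sketch of what generating OSD PVs from a storage class might look like in the CephCluster CRD; the set name, count, and size below are illustrative, not from the discussion:

```yaml
spec:
  storage:
    storageClassDeviceSets:
    - name: set0              # illustrative name
      count: 3                # number of OSDs to create from PVCs
      volumeClaimTemplates:
      - spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: standard   # placeholder storage class
          resources:
            requests:
              storage: 100Gi
```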
A
That's cool. Can you maybe join the Wednesday orchestrator meeting? Sure. Because Akiva from the dashboard team is also joining that meeting, and if we need something specific for PVs for the dashboard, then it might make sense to bring it up there.
A
And the code, that's basically identical. Yeah, not ideal, but I think it's okay for the moment.
E
One reason to separate it. And there's also this sort of separate question too: if you have a single Ceph cluster and multiple Kubernetes clusters consuming from it, like where do you send the events, or do you even send them into multiple places, yeah. It just wasn't clear that it directly mapped onto how that would work, yeah.
A
No, he didn't anyway, yeah.
D
So, maybe Sage can chime in here. We have this issue in Rook where, when we're building the Ceph confs, we're using IPs for the mon, the initial monitor list. And some of the things we'd like to do to refactor some of the mon stuff, that causes a bootstrapping issue, where we'd like to do something, but at that point we don't necessarily have the IP addresses. So we've been trying to figure out if we can just use DNS for everything.
D
So basically the issue is that there's a mode in Kubernetes where we can create a portable IP for a monitor pod, and this is great, because we can create all our pods and just have them bind to this portable IP. But when we use host networking, we need to create the config file at the time we create the pod, and there's the bootstrapping issue. It has to do with scheduling, so right now we do explicit scheduling: we know where we're gonna put it.
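The host-networking constraint being described: with `hostNetwork: true` a pod binds to the node's IP, so the mon address depends on where the pod gets scheduled. A minimal pod-spec fragment showing the relevant fields:

```yaml
spec:
  hostNetwork: true                    # mon binds to the node IP, not a pod/service IP
  dnsPolicy: ClusterFirstWithHostNet   # still resolve cluster DNS while on the host network
```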
E
That should... mon initial members should be, I remember, a list of names, and then mon host is the list of IPs, and then it just maps it back. Yeah, yeah. But yeah, I think if you just use DNS, as long as we know that the DNS will be updated and correct by the time the pod actually starts up and therefore exists, and the query happens, like, there's not some annoying race condition there, that should be okay. That's probably the simplest way, yeah.
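What that config might look like if mon host carried DNS names instead of raw IPs; the per-mon service names below are hypothetical, following Rook's usual naming scheme:

```ini
[global]
# mon_initial_members is a list of mon names; mon_host maps them to addresses.
mon_initial_members = a, b, c
# Hypothetical per-mon service DNS names in place of IPs:
mon_host = rook-ceph-mon-a.rook-ceph.svc,rook-ceph-mon-b.rook-ceph.svc,rook-ceph-mon-c.rook-ceph.svc
```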
B
Since Rook was created, it's always had this assumption that we need the IP addresses for the mons, and it's just a fundamental part of the identity for them. So if we can get away from that, then yeah, it would solve some of these scheduling challenges and let us use Kubernetes scheduling. That's the goal.
D
I mean, okay, it's getting a little detailed, but I think the high level was that we'd have like an A record or something, and then Kubernetes registers the IPs of each of the pods. Though I think for clients that just point to a single A record, they'd just get round-robin. But maybe we can special-case the mons themselves, I think, is what you're kind of thinking about.
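The "A record that Kubernetes keeps updated" is what a headless Service provides: DNS for the service name resolves to the IPs of the pods behind the selector. A minimal sketch, with illustrative names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mon        # illustrative name
spec:
  clusterIP: None            # headless: the A record resolves to all matching pod IPs
  selector:
    app: rook-ceph-mon
  ports:
  - name: msgr
    port: 6789
```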
E
...the other ones are gonna go and add themselves in. But maybe it actually might not matter. Probably, if you were to start like three monitors all at once, they might not form a quorum properly. If you start one of them and you let it do its full bootstrap and actually form the quorum and create the cluster, and then you add monitors, I think they'll all be okay, luckily.
F
So, I mean, it is my understanding, at least from talking to Israel, that part of the monitor's identity is still the IP address. Which, I mean, we can still find the monitors from DNS, but to my understanding, I think we'd still have to, like, create a new monitor ID if we're moving it to a new, like, yeah, a new host.
E
Yeah, yes, but if you are...
E
Again, the cluster-creation one is the tricky one, because it was designed so that you could have like N monitors come up in parallel and correctly form a quorum, and so there's some weirdness there. But in general, setting that aside, if you're adding monitors: a monitor comes up, it will talk to the existing monitors and add itself in, in a sort of staged, structured way. That part, I think, at least will work.