From YouTube: 2019-04-01:: Ceph Orchestration Meeting
A: We actually have two face-to-face meetings, but one is the Cephalocon 2019 meeting on Saturday. I don't know if we have time, or when we have time, as the other meetings are a bit more important; at least the Sage meeting is a bit more important for me. So I really don't know if and when we can meet there.
A: It would make sense, if there is not really an interest in that orchestrator meeting any further, that we basically strip it down to a dashboard face-to-face meeting. I'm going to join it in any case; I'm also interested in the dashboard, of course. But the orchestrator part depends on whether we can get people to attend.
B: Yeah, so the service name changes; and I'm still catching up after PTO, by the way, so I'm probably going to forget some things or miss some things, but the latest I'm aware of is around the YAML refactoring that I started before I left. I still need to go back and update that based on a couple of new comments; I think it's about there. I know, Sébastien Han, you're depending on this for the Helm work too, so I need to make this a priority. So.
B: Yeah, so there's my PR for, yeah, more refactoring, and then to go along with that there are a couple of other PRs: one to manage what happens when passing environment variables to the manager pod (Rohan, who's not on this call, created that), and then, Sébastien, your PR around picking up environment variables in the manager. I saw that; it's looking good, so we can make sure we get all the settings.
B: Yes, so if you would change the CRD version because there's some major change or a breaking change or whatever, then yeah, that CRD version will change. Maybe we could assume that; yeah, let's start with the assumption that it will be compatible until we change the CRD version.
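The assumption above can be made concrete with a CRD fragment. This is an illustrative sketch, not Rook's exact manifest: the group and kind match Rook's public `ceph.rook.io/v1` API, but the remaining fields are trimmed examples. The idea is that the served `version` string is the compatibility contract; manifests are assumed to stay valid until that string is bumped for a breaking change.

```yaml
# Illustrative fragment: the CRD version is the compatibility contract.
apiVersion: apiextensions.k8s.io/v1beta1   # CRD machinery as of 2019
kind: CustomResourceDefinition
metadata:
  name: cephclusters.ceph.rook.io
spec:
  group: ceph.rook.io
  version: v1          # bumping this (e.g. to v2) signals a breaking change
  names:
    kind: CephCluster
    plural: cephclusters
  scope: Namespaced
```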
C: Basically, what I have, and I guess one of the things related to the YAML changes, are also some changes with RBAC, and potentially supporting, or not supporting any more, multiple clusters on the same operator. We had various discussions about this, but we never really settled on what we want to do. I think I'm kind of taking back what I was saying; I think at some point I was advocating for simplicity, to have one operator, one cluster per operator.
C: Although in practice this might be a little bit inconvenient, I suspect that users and customers would want to have multiple clusters for one operator, and that this is potentially something we will still have to support. I think we definitely need to have a more thorough discussion about this, but I just wanted to raise it quickly, so people are aware that changes might come on that topic. Yes.
B: Yeah, and my thought on that is that we need to continue supporting multiple clusters per operator, at least for backwards compatibility. But in my YAML refactoring I want to make it so that, by default, the examples show: here's how you create one operator and a cluster in the same namespace. Just make the 90-percent case simple, because almost all users just create one namespace, one cluster; they just don't need multiple.
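As a rough sketch of that 90-percent case, assuming current Rook conventions (the `rook-ceph` namespace name and the `CephCluster` kind are real; the spec fields shown are trimmed examples): one namespace holds both the operator and the cluster, so no cross-namespace RBAC is needed.

```yaml
# Illustrative sketch of the simple single-namespace default.
apiVersion: v1
kind: Namespace
metadata:
  name: rook-ceph
---
# ...the operator Deployment and its RBAC would go here, in the same namespace...
---
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph     # same namespace as the operator
spec:
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
```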
B: Still, really, as long as we leave a little bit of code in place, a few simple code changes, things that are already there, then the only needed change from the core examples would be some RBAC to work across namespaces, and then the YAML changes should also have an example of how to do that.
B: Yeah, on the external cluster: what else was I going to say... yeah, a week or two ago I was thinking, oh, we just don't have time for this in 1.0, I'll just save it for 1.1. But as I was sitting on the plane last week, it's like, I need something to do, I'll work on external cluster. So maybe we'll get it in for 1.0, or at least a prototype version of it we can start testing.
C: It's something we already discussed a while ago, but I think we all felt it last week when Travis was away: when Travis is away, nothing gets in. So I think this is slowing everything down. It's not good for anyone at this point, because we can't rely on Travis to merge everything and review everything; in the long term this is not sustainable.
B: Yeah, definitely, and I've been pushing for a change here for a few months, really since KubeCon. Just a little context around it: at KubeCon the maintainers had a discussion around, well, how can we add another maintainer, because the project was already growing at that point. The outcome of that discussion has been in the works: the maintainers want to create a new concept of owners, in addition to maintainers, where owners could approve pull requests and things. So anyway.
C: At this point, the issue is: one PR gets merged; your PR was green and CI passed, but now you're not up to date with master, so what was green is now red, and you don't know about it. You should rebase again, run the CI, and make sure everything works. But yes, the simple reason people don't do this, I guess, is that the CI takes a lot of time to run at this point. But yeah, I guess it has to be a balance.
C: But I mean, there are ways to decouple the CI, or the way the jobs run: when you touch a specific component, maybe 90% of the time you know that this particular component can be tested in isolation, individually, because it doesn't have any impact on the rest. In this way you can break up all your jobs into smaller jobs and then reduce the time it takes to run the CI. Yeah.
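A minimal sketch of that job-splitting idea, in shell. Everything here is hypothetical (the component paths and suite names are invented for illustration; this is not Rook's actual CI): changed file paths are mapped to only the test suites that need to run, so a docs-only or single-component change skips the rest.

```shell
#!/bin/sh
# Illustrative sketch: pick which test suites to run based on the files a
# change touches, so unrelated components can be skipped.

suites_for_changes() {
    # Read changed file paths on stdin, emit the suites that must run.
    while read -r path; do
        case "$path" in
            pkg/operator/ceph/*)   echo ceph-suite ;;
            pkg/operator/edgefs/*) echo edgefs-suite ;;
            Documentation/*)       ;;             # docs-only: run nothing
            *)                     echo full-suite ;;  # anything else: run all
        esac
    done | sort -u
}

# In CI this would be fed by something like:
#   git diff --name-only origin/master... | suites_for_changes
```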
B: Yeah, I've been looking around for people, because there is a guy at Upbound that spends a little bit of time on CI for Rook. But it's really just when things are blocked, because only they have access to the AWS account. So.
C: I think it's a little bit... I have discussed that with Sage when we were in India. I guess one of the conclusions we had was that it's not Ceph's goal or responsibility to support every single installer. I guess it's nice to have, but every project that's consuming Ceph ought to be running its own CI. In the same way that ceph-ansible has its own CI, Rook should also have its own CI to consume Ceph the way it wants, because if we were to handle all the orchestrators in teuthology on its own, it would be, yeah.
C: Yes, I'm really not sure how to do that. We have to allocate people resources to do it, that's one factor, and we also have to have compute resources to run the CI. That's going to take a while; it's like a six-month project for one or two people. But I think we are almost reaching the point where we have to have it, because we keep adding functionality.