From YouTube: Ceph Orchestrator Meeting 2021-11-30
A
We have a big elephant in the room: the Pawsey cluster, where we have access for the rest of this week.
B
I know he was making a build last night, but from what I saw today the manager has been up for 15 hours, so I don't think he's gotten a chance to put those new changes in.
B
The things we're trying to do are to fix that locking stuff, and also I wanted to get that change with the agent in, to see whether reducing the amount of work it's doing whenever the agent reports makes a difference. And I also wanted that extra logging in there, so I could see how long it's taking the agent to get a response from the manager. But none of those things have gotten in yet, because I think the build hasn't been able to make it onto the cluster yet, so we're waiting on that.
A
Okay, yeah, you had Thanksgiving last week. Other than that, my big picture is that I'm pretty happy about what happened and what the state of the cluster is right now.
A
At this point I would like to make it available for other components. Unfortunately not Rook, because there's no Kubernetes on this cluster, but I think Patrick wanted to have a look at it, and I don't know if others are also going to have a look at the cluster. That's about it regarding Pawsey.
A
Sarah, do we quickly want to discuss the TopoLVM status? I mean, Sage isn't here, but I think it still probably makes sense to at least give a small status update, right?
C
Yeah, sounds good. Since I haven't been able to make it for a couple of weeks, I thought we could just update with the latest. The discussion that's linked in the notes, about changing the TopoLVM design to incorporate raw devices and not just LVs, has kind of stalled; no real update there. What we're working on downstream for ODF is to have...
C
We're using this project called the TopoLVM operator, and one update there is that the TopoLVM operator is actually owned by a different team from the core TopoLVM CSI driver folks. It looked like we could just jump on board there, reuse it, contribute to it and improve it, and that was...
C
So that's the latest approach: we're going to have an operator, and hopefully it will actually become part of the TopoLVM project instead of being a completely separate project from TopoLVM. But we're still waiting for confirmation from the TopoLVM folks that they're interested in bringing it under the TopoLVM umbrella.
C
Let's see, I'm still catching up from the holiday last week. I haven't seen the project yet, but I know there are a couple of people starting to work on it; they've probably pushed it somewhere. I think they're starting to work in a downstream branch, and then they'll put something together and propose adding it to the TopoLVM project when it's a little further along. Because for ODF we do want to get this in for 4.10, and we have basically a month left until feature complete.
A
Oh, a month for writing a complete operator is a bit ambitious.
C
Yeah, it'll be a basic operator; it'll get us started. And we're not starting from scratch either: we have a lot that we've learned already from the other TopoLVM operator and all the other operators that we work on. It is a bit tight for sure, but that's the goal.
A
Okay, perfect. Then thank you, Travis, and I think we can move on. In the orchestrator we have a thing called a placement spec, which is used for two different purposes: one is to place daemons, and the other one is to select hosts, and we are mixing those.
A
The placement specification that we have for placing daemons consists of things like whether you want co-located daemons, or how many daemons you want to have in general; the other use case is just a plain selection of hosts by labels or by a pattern.
B
Yeah, they wanted to filter the host ls output and have counts for it, because it was useless at scale if you just got every single host every time. But we were talking yesterday about whether it even makes sense to use a placement spec for that, because then you'd have options like count that make no sense in that instance.
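To make the mix-up concrete, here is a minimal, purely illustrative sketch of a spec that carries both concerns at once. This is not the actual orchestrator code; the field names (count, label, host_pattern, hosts) are only loosely modeled on cephadm's placement options, and the helper methods are hypothetical.

```python
from dataclasses import dataclass, field
from fnmatch import fnmatch
from typing import Dict, List, Optional, Set


@dataclass
class PlacementSpec:
    """Sketch of a spec mixing daemon placement with plain host selection."""
    # Daemon-placement concern: how many daemons to schedule.
    count: Optional[int] = None
    # Host-selection concerns: which hosts are eligible at all.
    label: Optional[str] = None
    host_pattern: Optional[str] = None
    hosts: List[str] = field(default_factory=list)

    def filter_hosts(self, all_hosts: Dict[str, Set[str]]) -> List[str]:
        """Plain host selection (what a filtered 'host ls' would need)."""
        selected = []
        for name, labels in all_hosts.items():
            if self.hosts and name not in self.hosts:
                continue
            if self.label and self.label not in labels:
                continue
            if self.host_pattern and not fnmatch(name, self.host_pattern):
                continue
            selected.append(name)
        return selected

    def place_daemons(self, all_hosts: Dict[str, Set[str]]) -> List[str]:
        """Daemon placement: host selection plus count semantics on top."""
        candidates = self.filter_hosts(all_hosts)
        return candidates if self.count is None else candidates[: self.count]


all_hosts = {"host1": {"mon"}, "host2": {"osd"}, "host3": {"osd"}}
spec = PlacementSpec(label="osd", count=1)
print(spec.filter_hosts(all_hosts))   # ['host2', 'host3']
print(spec.place_daemons(all_hosts))  # ['host2']
# For a pure host query (the second use case) 'count' has no meaning,
# which is exactly the mixing being discussed above.
```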
A
And yeah, I don't know, is there anything else we can discuss today? Otherwise it's going to be a very short meeting.
E
I'd like to bring up something, if I can. All right: based on our Pawsey experience, based on the scale issues we did find, I know we did some patching and some fixing, but do we have thoughts on what we need to put down as longer-term architectural fixes, changes, things that we want to do? Should we start listing those before maybe the end of January and make sure we flesh them out really well?
A
What I find fascinating is that this hasn't really happened in the history of Ceph, even though Ceph is supposed to be very good at large-scale clusters, and it actually is. So is there any reason why we haven't implemented this kind of testing in the past already, Jeff?
E
That's a really good question. I think a lot of it depends on upstream: a lot of the big vendors that have been running this at scale aren't utilizing cephadm as we move forward, right? We're learning that as well. So they're not going to run into the same scale issues with the orchestration layers, potentially. So overall, I don't know, it's a good question.
E
But I think we've learned enough that we should probably have some work items on our list moving forward, for cephadm and orchestration in general. Just articulate those, make sure we're very clear on what they are and what we know so far, so we can make sure we don't lose them moving forward.
A
It's not only cephadm, it's also the mgr modules. The big suite that Sage was working on right now, with 70 hosts, was previously just not possible to schedule: we would have had to completely clear the teuthology queue, then schedule this big suite, and then restart the teuthology queue again after it finished, so that was just not practical. Which means that by now we at least have some kind of an idea of how to even start testing on at least 70 hosts.
E
Well, I really think we should, like I said, create this backlog, the list of things that we really have to start addressing moving forward. I think we already have a few things, but we should just make sure it's complete, based on our recent experience.
A
At least we could have a look at the 70-host cluster that we have in...
E
And even understanding which things, from a scaling point of view, are linear versus non-linear, and trying to figure that out. We could base a lot just on linear assumptions at the smaller scale. The opportunity we had was unusual, and it's unfortunate that it is unusual, but we have to figure out how to get around that and answer these scale-related questions independent of having real hardware.
A
We could ask or talk to the people who are right now looking into adding more functionality to teuthology. Isn't there a testing weekly? Indeed, there is a testing weekly going on on Wednesdays.
E
So there's a teuthology weekly? Okay, yeah. I can go this week and I can bring this topic up.
A
I don't know if this week is going to be a good week to talk about it, because there is a Ceph Developer Monthly happening in parallel, which means that probably everyone who would be at the teuthology weekly is instead going to join the Ceph Developer Monthly. But the week afterwards would be a great idea, I guess. Okay.
E
Correct, Dave is only here until Friday, so he won't be...
E
Oh okay, so he's out this Friday, I guess, because I talked to him yesterday about some other things and I just asked him what his last day was, so I got lucky. I'm glad I talked to him; I'm glad I didn't wait till next week, because I think in his original announcement the end of next week was his official last day, but...
A
So Adam, do you want to bring forward the question you had? Maybe we can crowdsource the information.
B
It was things like that. Sage had said he implemented something about this before, and we weren't sure what triggers failovers right now and what we need to be looking for. I guess we need to define which cases we actually need to take action in. Like I was saying before, systemd should be covering the case where the NFS daemon individually goes into an error state, by restarting it, but what other cases do we need to be worried about, and how should that all work?
B
I haven't looked at much of the actual failover code. Our teuthology tests around this are only specifically testing us manually stopping and then starting the daemon, and also removing it entirely. There were no real cases for anything like this where we're actually doing a proper failover, from what I could tell. I'm not sure exactly what it does; I need to look at it.
A
But I think the failover code doesn't live in the code path that is executed on something like a daemon stop; it's executed in the follow-up code that takes over when a state change of a particular daemon happens.
A
It's just that the failover is being triggered pretty manually and explicitly, so everything should be in place as far as I know, or as far as I can assess it right now, except for when a host goes offline; I don't think that case is handled properly.
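A purely illustrative sketch of the behavior being described, not the actual cephadm code: failover is not part of the code path that runs on a daemon stop itself, but of follow-up logic that reacts to daemon and host state changes, with the host-offline case being the one that still needs handling. All names here are hypothetical.

```python
from dataclasses import dataclass
from typing import Iterable, List, Set, Tuple


@dataclass
class DaemonState:
    name: str    # e.g. "nfs.foo.0" (hypothetical naming)
    host: str
    status: str  # "running", "error", or "stopped"


def follow_up(daemons: Iterable[DaemonState],
              offline_hosts: Set[str]) -> List[Tuple[str, str]]:
    """Decide which orchestrator actions a state change should trigger."""
    actions = []
    for d in daemons:
        if d.status == "error":
            # systemd (Restart=on-failure) is expected to cover a daemon that
            # merely goes into an error state, so no orchestrator action here.
            continue
        if d.host in offline_hosts:
            # The case discussed as not yet handled properly: the host is gone,
            # so the daemon has to be redeployed (failed over) somewhere else.
            actions.append(("redeploy", d.name))
    return actions


daemons = [DaemonState("nfs.foo.0", "host1", "running"),
           DaemonState("nfs.foo.1", "host2", "error")]
print(follow_up(daemons, offline_hosts={"host1"}))
# [('redeploy', 'nfs.foo.0')]
```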
B
All right, I'll have to look more at the actual code and what it does, because I didn't even know if that case was handled; I wasn't sure what was done. But I definitely need to implement the offline one, and I guess I'll have to get in touch with Sage and ask about the other parts you're talking about right now, or maybe you can explain it to me.
A
Yeah, it's a bit of a pity that NFS Ganesha doesn't really support this natively; you have to actually invest a lot of work in order to make NFS Ganesha do proper failovers, which makes things a bit complicated. And I think it's also being done that way in Kubernetes, if I'm not mistaken. In Kubernetes we are using the Kubernetes scheduler to deploy a new NFS Ganesha gateway on a different host; I think that's how it is implemented, and we're just hoping that the Kubernetes scheduler is fast enough.
H
Yeah, I think that's correct, for what it's worth. And I also have not really dived into the failover behavior of NFS on Kubernetes to make sure that it actually works for NFS, and that it's not just a "we hope and assume this works".
A
Okay. So, Adam, do you know what the time budget was? Was it a minute or so?
A
Or 90 seconds. Yeah, so 90 seconds, and within those 90 seconds you have to get the new NFS Ganesha daemon up to speed.
A
Okay, Adam, I don't think we're going to be able to come up with any more answers today without Sage here. At this point we'll have to get in touch with him. We are going to have the stand-ups for the rest of the week, so that should be pretty good.
F
Yeah, exactly, we have Jeff Layton. Yes, I'm just kidding you when I say that, so...
A
And even for that, we have Mike Fritz, who was unfortunately also not here today, but he has also done a lot of work when it comes to NFS. Ah, Mike is here. Mike?
D
Awesome. So, this is my last week at Red Hat, and I just wanted to say thank you to everyone for all the help with everything that I've gotten over the past two years. So thank you. I guess that's about it.
A
It was nice to have you working on cephadm pretty much from the inception of cephadm. Was it already two years ago? I don't know. Anyway, thank you, Daniel.