From YouTube: Ceph Orchestrator Meeting 2021-11-23
A
There's not much in there right now for today; I just put in the fuzzing and SNMP topics. I figured what we could do is start with other teams, other than, like, the cephadm team, if they had topics they wanted to bring up, and then we could maybe discuss, like, Pawsey and stuff at the end, because we're having some issues there to talk about.
A
So here we have, like, the folks from, like, Rook, like Juanmi. Do you have anything, any topics from that group this week?
B
No, not too much to say. Okay, we are working on the TopoLVM operator. Okay, there is good news because, well, I think that we have a fixed version of TopoLVM working, the TopoLVM operator working with the original TopoLVM project. The developer is very open to contributions and we have started to work closely, and I think that next week we can start to provide a design for the CRD in order to start the implementation of smart dynamic provisioning.
B
That is just the thing that we need in order to manage Ceph OSDs in Rook environments, in Kubernetes environments, in the same way, or a similar way, as we are already using. So, slow progress, but step by step, so I'm happy with that.
A
Okay, well, that's good to hear. I don't know too much about it myself, but I'm glad that it's making progress. Okay, Francesco or John, do you have anything you want to bring up from your team here?
A
No? Okay, what about you, Ernesto, do you have anything from the dashboard team lately?
C
Orchestrator? Nope, I mostly joined just to see if I can help with the Pawsey thing.
A
So I don't know, Sage, if you want to give us, like, the Pawsey Centre update, if you've seen anything. I know you've been looking with gdb into what's going on with the manager right now.
D
I guess that's the current issue. I guess just a little bit of news from Paul, who talked to Luca overnight. They aren't planning on deploying with cephadm; they're gonna use ceph-ansible, and they're gonna have a [unclear]; they're not gonna use our monitoring stack, they're gonna be using something else. So I guess the pressure's off to, like, fix every bug and get a backport or anything.
D
You know, Paul last night was looking into the Prometheus module having some serious, like, slowness, which is, I guess, not super surprising, given that all 4000 OSDs are feeding all their stuff into the manager and then we're trying to scrape it in Python code. But even so, there are some functions that were taking way longer than they should, so he's... I don't know if he got very far on that.
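One hedged way to chase down functions that take "way longer than they should" is Python's built-in cProfile. This is a generic sketch, not the actual Prometheus mgr module code; `collect_metrics` is a hypothetical stand-in for whichever module function is being timed.

```python
import cProfile
import io
import pstats

def collect_metrics():
    # Hypothetical stand-in for a slow mgr-module function;
    # not the real Prometheus module code.
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
collect_metrics()
profiler.disable()

# Print the five most expensive calls by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)
```

Sorting by cumulative time surfaces the outer function whose callees are eating the time, which matches the "some functions taking way longer than they should" symptom.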
D
A little bit after... I'm not sure, I guess a little bit after. I don't know, this particular freezing thing I hadn't noticed until today, so maybe something's different, but I gotta, yeah, I gotta sift through these traces and look at it better.
A
I think we also need to prioritize a little bit this SSH error in the logs.
A
The agent cherrypy logs... it wasn't for the generic one, but you could probably do something similar to what that's doing, where you just have to find that log, and then you can just send it to a filter function as well, once you know which log it's actually coming from. I don't think you can put it at, like, the higher level; you can't just put it at, like, the generic top-level log. You have to find where it's coming from.
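The approach being described, attaching a filter function to the specific logger a message comes from rather than the top-level one, can be sketched with Python's stdlib logging. The logger name `cherrypy.access` and the noisy substring are assumptions for illustration, not the actual cephadm code.

```python
import logging

class SuppressNoise(logging.Filter):
    """Drop records whose message contains a known-noisy substring."""
    def filter(self, record):
        return "ssh connection error" not in record.getMessage()

class ListHandler(logging.Handler):
    """Collect messages so we can see what survived the filter."""
    def __init__(self):
        super().__init__()
        self.messages = []
    def emit(self, record):
        self.messages.append(record.getMessage())

# Filters attached to a logger only run for records emitted on that
# exact logger, not for records propagated up from child loggers, so
# you have to find which logger the message actually comes from.
log = logging.getLogger("cherrypy.access")  # assumed logger name
log.setLevel(logging.INFO)
log.addFilter(SuppressNoise())

handler = ListHandler()
log.addHandler(handler)

log.info("GET /metrics 200")
log.info("ssh connection error: host unreachable")  # filtered out
print(handler.messages)
```

This is why filtering at the generic top-level logger doesn't work: a logger-level filter never sees records that were emitted on a descendant logger and merely propagated upward.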
D
On some of the machines, when I try to SSH into them, the ones that are, like, "we're not able to reach it": if I do ssh -v and then try to connect, it, like, connects, and a second later it, like, gets disconnected. Like, there's some... the host isn't dead, you can ping it, and it's accepting the connection, but then it's closing it.
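The accept-then-disconnect symptom can also be checked outside of ssh itself with a plain TCP probe of the SSH banner. This is a diagnostic sketch under the assumption that a healthy sshd sends an `SSH-2.0-...` banner right after accepting; `probe_ssh` is a hypothetical helper, not a cephadm function.

```python
import socket

def probe_ssh(host, port=22, timeout=5):
    """Connect to the SSH port and try to read the server banner.

    A healthy sshd sends an "SSH-2.0-..." banner immediately after
    accepting; a host that accepts the TCP connection but closes it
    right away yields an empty read, matching the symptom above.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            banner = sock.recv(64)
    except OSError as exc:
        return f"{host}: connect failed ({exc})"
    if banner.startswith(b"SSH-"):
        return f"{host}: ok, banner {banner.splitlines()[0].decode(errors='replace')}"
    return f"{host}: accepted the connection but closed it without a banner"
```

Distinguishing "connect failed" from "accepted but closed without a banner" separates a dead host from one whose sshd (or something in front of it, like a TCP wrapper or MaxStartups limit) is dropping sessions after accept.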
A
There's, like, three of them right now. Yeah, yeah, okay, yeah. Unless... you saw, I showed you the tracker yesterday, so you saw what was there. So there was a pretty easy way to recreate this. It's not in the logs; it's, like, just more directly with, like, the host add commands, but at least there's a good way to test it.
A
And then the other topic that I have, if we don't have any more things to say about the Pawsey cluster, is the SNMP work, which is still in progress. That's kind of, like, a big one that's changing... It was mostly Paul's work, and then Sebastian took it up, because Paul's been focusing on Pawsey.
F
I was wondering if you've done more tests, like, when you developed it, more tests on bringing down one of the hosts or VMs where the services, the NFS service, runs, like, whether they get redeployed.
A
I think, as far as HA, that was, like, the case that was not covered at all: we don't have any handling for hosts going entirely offline.
A
Like the failover, yeah. I think we were at least going to implement something in the scheduling where it would try to move daemons if they were on offline hosts, for, like, certain services, but I don't know if that ever actually got done.
A
For the most part it doesn't touch it; it won't remove or add any daemons, yeah, which is normally okay, it's usually good. But I think in this case we need to deploy another daemon, or we need to change it so the other NFS, if there is one, is active or something.
A
Yeah, and in that case in particular, do we... do we just want to try to make the other one active, or do you want to move things?
A
I think we started... I think it got broken, though, because I think it was causing that bug where, when you rebooted the host, the monitors would leave the mon map, or whatever it was, like, removing daemons from offline hosts. We tried to fix that, and I think the fix, like, undid this change, so it's not working now, and I don't think we ever got back to fixing it.
A
I think, like, the actual keepalived and haproxy stuff are there, and then it'll redeploy if it's in an error state, but I think the offline-host case isn't, like, isn't covered yet, so that'll have to go in, yeah.
F
So, Adam, Sage, do you want me to create a tracker ticket, or to start a mail thread? How do you want me to progress regarding this? Because this would block the OpenStack team's work, because right now in OpenStack Manila, what they have is a Pacemaker/Corosync setup that helps to redeploy Ganesha if the whole host goes down. So we want, like, an equal replacement, right? So, yeah.
D
Yeah, yeah, yeah. I would open a ticket, a tracker ticket, a bug, that says the NFS daemon is not rescheduled when a host is down.
A
That works, okay, yeah. Thanks, Ramana; if you set that up, we'll take a look at that as well.